Update README.md
parent 2bf0d397ed
commit 1c53c5c764
2 changed files with 41 additions and 9 deletions
16  README-zh.md
@@ -135,6 +135,22 @@ pip install lightrag-hku

## Quick Start

### LLM and Technology Stack Requirements for LightRAG

LightRAG places far higher demands on the capabilities of the Large Language Model (LLM) than traditional RAG, because it requires the LLM to perform entity-relationship extraction on documents. Configuring suitable Embedding and Reranker models is also critical to query performance.

- **LLM selection**:
  - An LLM with at least 32B parameters is recommended.
  - The context length should be at least 32KB; 64KB is recommended.
- **Embedding model**:
  - A high-performance Embedding model is essential for RAG.
  - Mainstream multilingual Embedding models such as BAAI/bge-m3 and text-embedding-3-large are recommended.
  - **Important note**: The Embedding model must be decided before document indexing, and the same model must be used in the document query phase as in the indexing phase.
- **Reranker model configuration**:
  - Configuring a Reranker model significantly improves LightRAG's retrieval quality.
  - With a Reranker model enabled, setting "mix mode" as the default query mode is recommended (a query sketch follows this list).
  - Mainstream Reranker models such as BAAI/bge-reranker-v2-m3, or models provided by services such as Jina, are recommended.
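
As a rough illustration of the recommended query setting, here is a minimal sketch of issuing a mix-mode query; the `QueryParam` import path and its `mode` parameter are assumed from LightRAG's typical Python API and may differ across versions.

```python
# Minimal sketch, not an official example: the import path and parameter
# names are assumptions and may vary between LightRAG versions.
from lightrag import LightRAG, QueryParam  # assumed import path


def ask(rag: LightRAG, question: str) -> str:
    # `rag` is an already configured LightRAG instance. With a Reranker model
    # enabled, "mix" mode (knowledge-graph + vector retrieval) is the
    # recommended default query mode.
    return rag.query(question, param=QueryParam(mode="mix"))
```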

### Using the LightRAG Server

**For more information about the LightRAG Server, please refer to [LightRAG Server](./lightrag/api/README.md).**

16  README.md
@@ -134,6 +134,22 @@ pip install lightrag-hku

## Quick Start

### LLM and Technology Stack Requirements for LightRAG

LightRAG's demands on the capabilities of Large Language Models (LLMs) are significantly higher than those of traditional RAG, as it requires the LLM to perform entity-relationship extraction tasks from documents. Configuring appropriate Embedding and Reranker models is also crucial for improving query performance.

- **LLM Selection**:
  - It is recommended to use an LLM with at least 32 billion parameters.
  - The context length should be at least 32KB, with 64KB being recommended.
- **Embedding Model**:
  - A high-performance Embedding model is essential for RAG.
  - We recommend using mainstream multilingual Embedding models such as `BAAI/bge-m3` and `text-embedding-3-large`.
  - **Important Note**: The Embedding model must be determined before document indexing, and the same model must be used during the document query phase.
- **Reranker Model Configuration**:
  - Configuring a Reranker model can significantly enhance LightRAG's retrieval performance.
  - When a Reranker model is enabled, it is recommended to set "mix mode" as the default query mode.
  - We recommend using mainstream Reranker models such as `BAAI/bge-reranker-v2-m3` or models provided by services like Jina (see the configuration sketch after this list).
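
To make the above concrete, here is a minimal, hedged sketch of wiring an LLM, an embedding model such as `BAAI/bge-m3`, and mix-mode queries together. The import paths, constructor parameters (`llm_model_func`, `embedding_func`, `EmbeddingFunc`), and the synchronous `insert`/`query` helpers are assumptions about LightRAG's Python API and may differ between versions; the [LightRAG Server](./lightrag/api/README.md) README is the authoritative reference.

```python
# Minimal sketch, not an official example: import paths, parameter names, and
# the sync insert/query helpers are assumptions and may vary across versions.
import numpy as np
from lightrag import LightRAG, QueryParam   # assumed import path
from lightrag.utils import EmbeddingFunc    # assumed embedding wrapper


async def my_llm(prompt: str, **kwargs) -> str:
    """Call your >=32B-parameter LLM (>=32KB context) and return its completion."""
    raise NotImplementedError("wire up your OpenAI-compatible or local endpoint here")


async def my_embed(texts: list[str]) -> np.ndarray:
    """Embed texts with ONE fixed model (e.g. BAAI/bge-m3, 1024-dim).

    The same model must be used for both indexing and querying.
    """
    raise NotImplementedError("call your embedding service here")


rag = LightRAG(
    working_dir="./rag_storage",
    llm_model_func=my_llm,
    embedding_func=EmbeddingFunc(
        embedding_dim=1024,    # bge-m3 output size
        max_token_size=8192,   # bge-m3 input limit
        func=my_embed,
    ),
    # A reranker such as BAAI/bge-reranker-v2-m3 is typically wired in through
    # a similar rerank-function parameter; its exact name is version-dependent.
)

rag.insert("Your document text goes here.")
# With a reranker configured, "mix" mode is the recommended default.
answer = rag.query("What does the document say?", param=QueryParam(mode="mix"))
print(answer)
```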

### Quick Start for LightRAG Server

* For more information about LightRAG Server, please refer to [LightRAG Server](./lightrag/api/README.md).