diff --git a/README-zh.md b/README-zh.md
index 4c74f6e2..10b2dc38 100644
--- a/README-zh.md
+++ b/README-zh.md
@@ -145,7 +145,7 @@ LightRAG对大型语言模型(LLM)的能力要求远高于传统RAG,因为
 - **Embedding模型**:
   - 高性能的Embedding模型对RAG至关重要。
   - 推荐使用主流的多语言Embedding模型,例如:BAAI/bge-m3 和 text-embedding-3-large。
-  - **重要提示**:在文档索引前必须确定使用的Embedding模型,且在文档查询阶段必须沿用与索引阶段相同的模型。
+  - **重要提示**:在文档索引前必须确定使用的Embedding模型,且在文档查询阶段必须沿用与索引阶段相同的模型。有些存储(例如PostgreSQL)在首次建立数表的时候需要确定向量维度,因此更换Embedding模型后需要删除向量相关库表,以便让LightRAG重建新的库表。
 - **Reranker模型配置**:
   - 配置Reranker模型能够显著提升LightRAG的检索效果。
   - 启用Reranker模型后,推荐将“mix模式”设为默认查询模式。
diff --git a/README.md b/README.md
index ce606554..fa44019f 100644
--- a/README.md
+++ b/README.md
@@ -144,7 +144,7 @@ LightRAG's demands on the capabilities of Large Language Models (LLMs) are signi
 - **Embedding Model**:
   - A high-performance Embedding model is essential for RAG.
   - We recommend using mainstream multilingual Embedding models, such as: `BAAI/bge-m3` and `text-embedding-3-large`.
-  - **Important Note**: The Embedding model must be determined before document indexing, and the same model must be used during the document query phase.
+  - **Important Note**: The Embedding model must be determined before document indexing, and the same model must be used during the document query phase. For certain storage solutions (e.g., PostgreSQL), the vector dimension must be defined upon initial table creation. Therefore, when changing embedding models, it is necessary to delete the existing vector-related tables and allow LightRAG to recreate them with the new dimensions.
 - **Reranker Model Configuration**:
   - Configuring a Reranker model can significantly enhance LightRAG's retrieval performance.
   - When a Reranker model is enabled, it is recommended to set the "mix mode" as the default query mode.
diff --git a/lightrag/api/README-zh.md b/lightrag/api/README-zh.md
index 6fe5f86c..174b4538 100644
--- a/lightrag/api/README-zh.md
+++ b/lightrag/api/README-zh.md
@@ -1,4 +1,4 @@
-# LightRAG 服务器和 Web 界面
+# LightRAG 服务器和 WebUI
 
 LightRAG 服务器旨在提供 Web 界面和 API 支持。Web 界面便于文档索引、知识图谱探索和简单的 RAG 查询界面。LightRAG 服务器还提供了与 Ollama 兼容的接口,旨在将 LightRAG 模拟为 Ollama 聊天模型。这使得 AI 聊天机器人(如 Open WebUI)可以轻松访问 LightRAG。
 
@@ -79,6 +79,8 @@ EMBEDDING_DIM=1024
 # EMBEDDING_BINDING_API_KEY=your_api_key
 ```
 
+> **重要提示**:在文档索引前必须确定使用的Embedding模型,且在文档查询阶段必须沿用与索引阶段相同的模型。有些存储(例如PostgreSQL)在首次建立数表的时候需要确定向量维度,因此更换Embedding模型后需要删除向量相关库表,以便让LightRAG重建新的库表。
+
 ### 启动 LightRAG 服务器
 
 LightRAG 服务器支持两种运行模式:
diff --git a/lightrag/api/README.md b/lightrag/api/README.md
index ce27baff..48d2c011 100644
--- a/lightrag/api/README.md
+++ b/lightrag/api/README.md
@@ -79,6 +79,8 @@ EMBEDDING_DIM=1024
 # EMBEDDING_BINDING_API_KEY=your_api_key
 ```
 
+> **Important Note**: The Embedding model must be determined before document indexing, and the same model must be used during the document query phase. For certain storage solutions (e.g., PostgreSQL), the vector dimension must be defined upon initial table creation. Therefore, when changing embedding models, it is necessary to delete the existing vector-related tables and allow LightRAG to recreate them with the new dimensions.
+
 ### Starting LightRAG Server
 
 The LightRAG Server supports two operational modes:
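For a PostgreSQL backend, the note added in this patch translates to dropping the vector-related tables before re-indexing with a new embedding model, so LightRAG can recreate them with the new dimension on startup. A minimal sketch — the table names below are placeholders, not confirmed LightRAG schema; substitute the vector tables actually present in your database (e.g. listed via `\dt` in `psql`):

```sql
-- Placeholder table names: replace with the vector tables LightRAG
-- created in your PostgreSQL database. Dropping them allows the new
-- embedding dimension to be applied when the tables are recreated.
DROP TABLE IF EXISTS lightrag_vdb_entity;
DROP TABLE IF EXISTS lightrag_vdb_relation;
DROP TABLE IF EXISTS lightrag_doc_chunks;
```

All documents must then be re-indexed, since vectors produced by the old model are not comparable with query vectors from the new one — the same constraint the patch states for the query phase.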