Fix links in README.md and README-zh.md

parent f0d67f166a
commit 8f1ff985d0
2 changed files with 14 additions and 10 deletions

README-zh.md (12 changes)

@@ -480,6 +480,7 @@ rag = LightRAG(

<details>
<summary> <b>Using Ollama Models</b> </summary>

If you want to use Ollama models, you need to pull the model you plan to use and an embedding model, for example `nomic-embed-text`.

Then you only need to configure LightRAG as follows:
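
For context, the section this hunk touches configures LightRAG against a local Ollama server. A minimal sketch of that kind of setup follows; the module path `lightrag.llm.ollama`, the helpers `ollama_model_complete` and `ollama_embed`, the model names, and the storage-initialization call are assumptions drawn from other parts of the README, not something this commit introduces.

```python
# Hypothetical sketch only: helper names and module paths are assumed from
# elsewhere in the README, not verified against this commit.
import asyncio

from lightrag import LightRAG, QueryParam
from lightrag.llm.ollama import ollama_embed, ollama_model_complete
from lightrag.utils import EmbeddingFunc


async def main():
    rag = LightRAG(
        working_dir="./rag_storage",
        llm_model_func=ollama_model_complete,    # chat model served by a local Ollama instance
        llm_model_name="qwen2.5:7b",             # assumed example; use any model you have pulled
        embedding_func=EmbeddingFunc(
            embedding_dim=768,                   # nomic-embed-text produces 768-dim vectors
            max_token_size=8192,
            func=lambda texts: ollama_embed(texts, embed_model="nomic-embed-text"),
        ),
    )
    # Newer LightRAG versions may require explicit storage initialization before use.
    await rag.initialize_storages()

    await rag.ainsert("LightRAG pairs a knowledge graph with vector retrieval.")
    print(await rag.aquery("What does LightRAG pair?", param=QueryParam(mode="hybrid")))


if __name__ == "__main__":
    asyncio.run(main())
```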

@@ -569,7 +570,7 @@ rag = LightRAG(

LightRAG supports integration with LlamaIndex (`llm/llama_index_impl.py`):

- Integrates with OpenAI and other providers through LlamaIndex
-- For detailed setup and examples, see the [LlamaIndex Documentation](lightrag/llm/Readme.md)
+- For detailed setup and examples, see the [LlamaIndex Documentation](https://developers.llamaindex.ai/python/framework/)

**Example Usage:**
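
As a companion to the integration described above, here is a rough sketch of routing LightRAG's model functions through LlamaIndex objects; the wrapper names `llama_index_complete_if_cache` and `llama_index_embed` come from the linked demo scripts and, together with the model choices, should be read as assumptions rather than a verified API.

```python
# Rough sketch, not a verified API: the llama_index_impl wrapper names and their
# signatures are assumed from the demo scripts referenced in this README.
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

from lightrag import LightRAG
from lightrag.llm.llama_index_impl import (
    llama_index_complete_if_cache,
    llama_index_embed,
)
from lightrag.utils import EmbeddingFunc

llm = OpenAI(model="gpt-4o-mini")                           # any LlamaIndex LLM
embedder = OpenAIEmbedding(model="text-embedding-3-small")  # any LlamaIndex embedding model


async def llm_model_func(prompt, system_prompt=None, history_messages=[], **kwargs):
    # Forward completions to the LlamaIndex LLM through LightRAG's wrapper.
    return await llama_index_complete_if_cache(
        llm,
        prompt,
        system_prompt=system_prompt,
        history_messages=history_messages,
        **kwargs,
    )


async def embedding_func(texts):
    # Forward embedding requests to the LlamaIndex embedding model.
    return await llama_index_embed(texts, embed_model=embedder)


rag = LightRAG(
    working_dir="./rag_storage",
    llm_model_func=llm_model_func,
    embedding_func=EmbeddingFunc(
        embedding_dim=1536,       # text-embedding-3-small dimension
        max_token_size=8192,
        func=embedding_func,
    ),
)
```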

@@ -631,9 +632,10 @@ if __name__ == "__main__":

**For detailed documentation and examples, see:**

-- [LlamaIndex Documentation](lightrag/llm/Readme.md)
+- [LlamaIndex Documentation](https://developers.llamaindex.ai/python/framework/)
-- [Direct OpenAI Example](examples/lightrag_llamaindex_direct_demo.py)
+- [Direct OpenAI Example](examples/unofficial-sample/lightrag_llamaindex_direct_demo.py)
-- [LiteLLM Proxy Example](examples/lightrag_llamaindex_litellm_demo.py)
+- [LiteLLM Proxy Example](examples/unofficial-sample/lightrag_llamaindex_litellm_demo.py)
+- [LiteLLM + OPIK Proxy Example](examples/unofficial-sample/lightrag_llamaindex_litellm_opik_demo.py)

</details>

@@ -1536,7 +1538,7 @@ LANGFUSE_ENABLE_TRACE=true

## RAGAS Evaluation

-**RAGAS** (Retrieval Augmented Generation Assessment) is a framework for reference-free evaluation of RAG systems using LLMs. We provide a RAGAS-based evaluation script. For details, see the [RAGAS-based Evaluation Framework](lightrag/evaluation/README.md).
+**RAGAS** (Retrieval Augmented Generation Assessment) is a framework for reference-free evaluation of RAG systems using LLMs. We provide a RAGAS-based evaluation script. For details, see the [RAGAS-based Evaluation Framework](lightrag/evaluation/README_EVALUASTION_RAGAS.md).

## Evaluation

README.md (12 changes)

@@ -476,6 +476,7 @@ rag = LightRAG(

<details>
<summary> <b>Using Ollama Models</b> </summary>

**Overview**

If you want to use Ollama models, you need to pull the model you plan to use and an embedding model, for example `nomic-embed-text`.

@@ -567,7 +568,7 @@ In order to run this experiment on low RAM GPU you should select small model and

LightRAG supports integration with LlamaIndex (`llm/llama_index_impl.py`):

- Integrates with OpenAI and other providers through LlamaIndex
-- See [LlamaIndex Documentation](lightrag/llm/Readme.md) for detailed setup and examples
+- See [LlamaIndex Documentation](https://developers.llamaindex.ai/python/framework/) for detailed setup or the [examples](examples/unofficial-sample/)

**Example Usage**

@@ -629,9 +630,10 @@ if __name__ == "__main__":

**For detailed documentation and examples, see:**

-- [LlamaIndex Documentation](lightrag/llm/Readme.md)
+- [LlamaIndex Documentation](https://developers.llamaindex.ai/python/framework/)
-- [Direct OpenAI Example](examples/lightrag_llamaindex_direct_demo.py)
+- [Direct OpenAI Example](examples/unofficial-sample/lightrag_llamaindex_direct_demo.py)
-- [LiteLLM Proxy Example](examples/lightrag_llamaindex_litellm_demo.py)
+- [LiteLLM Proxy Example](examples/unofficial-sample/lightrag_llamaindex_litellm_demo.py)
+- [LiteLLM Proxy with Opik Example](examples/unofficial-sample/lightrag_llamaindex_litellm_opik_demo.py)

</details>

@@ -1604,7 +1606,7 @@ Once installed and configured, Langfuse automatically traces all OpenAI LLM call

## RAGAS-based Evaluation

-**RAGAS** (Retrieval Augmented Generation Assessment) is a framework for reference-free evaluation of RAG systems using LLMs. There is an evaluation script based on RAGAS. For detailed information, please refer to [RAGAS-based Evaluation Framework](lightrag/evaluation/README.md).
+**RAGAS** (Retrieval Augmented Generation Assessment) is a framework for reference-free evaluation of RAG systems using LLMs. There is an evaluation script based on RAGAS. For detailed information, please refer to [RAGAS-based Evaluation Framework](lightrag/evaluation/README_EVALUASTION_RAGAS.md).
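
For context on the RAGAS paragraph above, a stand-alone sketch of a reference-free RAGAS run is shown below (this is not LightRAG's bundled evaluation script); the column names and metric imports follow the classic `ragas` API and may differ in newer releases, so treat them as assumptions.

```python
# Illustrative stand-alone RAGAS run with toy data; requires an LLM backend
# configured for ragas (e.g. OPENAI_API_KEY in the environment).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, context_precision, faithfulness

samples = {
    "question": ["What does LightRAG combine for retrieval?"],
    "answer": ["It combines knowledge-graph and vector retrieval."],   # RAG system output
    "contexts": [["LightRAG uses a knowledge graph together with vector search."]],
    "ground_truth": ["Knowledge-graph plus vector retrieval."],        # reference used by some metrics
}

result = evaluate(
    Dataset.from_dict(samples),
    metrics=[faithfulness, answer_relevancy, context_precision],
)
print(result)  # per-metric scores; result.to_pandas() gives per-sample detail
```

Faithfulness and answer relevancy score the generated answer against only the question and retrieved contexts, which is what makes the evaluation usable without gold answers; context precision additionally uses the reference.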

## Evaluation