From 8f1ff985d0505087feb5ca1075c8ea8d8570e882 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Joschka=20H=C3=BCllmann?=
Date: Thu, 4 Dec 2025 15:49:20 +0100
Subject: [PATCH] Fix links in README.md and README-zh.md

---
 README-zh.md | 12 +++++++-----
 README.md    | 12 +++++++-----
 2 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/README-zh.md b/README-zh.md
index 478d67ab..5a331b39 100644
--- a/README-zh.md
+++ b/README-zh.md
@@ -480,6 +480,7 @@ rag = LightRAG(
 使用Ollama模型
+
 如果您想使用Ollama模型，您需要拉取计划使用的模型和嵌入模型，例如`nomic-embed-text`。
 然后您只需要按如下方式设置LightRAG：
@@ -569,7 +570,7 @@ rag = LightRAG(
 LightRAG支持与LlamaIndex集成 (`llm/llama_index_impl.py`):
 
 - 通过LlamaIndex与OpenAI和其他提供商集成
-- 详细设置和示例请参见[LlamaIndex文档](lightrag/llm/Readme.md)
+- 详细设置和示例请参见[LlamaIndex文档](https://developers.llamaindex.ai/python/framework/)
 
 **使用示例:**
@@ -631,9 +632,10 @@ if __name__ == "__main__":
 
 **详细文档和示例，请参见:**
 
-- [LlamaIndex文档](lightrag/llm/Readme.md)
-- [直接OpenAI示例](examples/lightrag_llamaindex_direct_demo.py)
-- [LiteLLM代理示例](examples/lightrag_llamaindex_litellm_demo.py)
+- [LlamaIndex文档](https://developers.llamaindex.ai/python/framework/)
+- [直接OpenAI示例](examples/unofficial-sample/lightrag_llamaindex_direct_demo.py)
+- [LiteLLM代理示例](examples/unofficial-sample/lightrag_llamaindex_litellm_demo.py)
+- [LiteLLM+OPIK代理示例](examples/unofficial-sample/lightrag_llamaindex_litellm_opik_demo.py)
@@ -1536,7 +1538,7 @@ LANGFUSE_ENABLE_TRACE=true
 
 ## RAGAS评估
 
-**RAGAS**（Retrieval Augmented Generation Assessment，检索增强生成评估）是一个使用LLM对RAG系统进行无参考评估的框架。我们提供了基于RAGAS的评估脚本。详细信息请参阅[基于RAGAS的评估框架](lightrag/evaluation/README.md)。
+**RAGAS**（Retrieval Augmented Generation Assessment，检索增强生成评估）是一个使用LLM对RAG系统进行无参考评估的框架。我们提供了基于RAGAS的评估脚本。详细信息请参阅[基于RAGAS的评估框架](lightrag/evaluation/README_EVALUASTION_RAGAS.md)。
 
 ## 评估
 
diff --git a/README.md b/README.md
index 3147e23c..b157c350 100644
--- a/README.md
+++ b/README.md
@@ -476,6 +476,7 @@ rag = LightRAG(
 Using Ollama Models
+
 **Overview**
 
 If you want to use Ollama models, you need to pull model you plan to use and embedding model, for example `nomic-embed-text`.
@@ -567,7 +568,7 @@ In order to run this experiment on low RAM GPU you should select small model and
 LightRAG supports integration with LlamaIndex (`llm/llama_index_impl.py`):
 
 - Integrates with OpenAI and other providers through LlamaIndex
-- See [LlamaIndex Documentation](lightrag/llm/Readme.md) for detailed setup and examples
+- See [LlamaIndex Documentation](https://developers.llamaindex.ai/python/framework/) for detailed setup or the [examples](examples/unofficial-sample/)
 
 **Example Usage**
@@ -629,9 +630,10 @@ if __name__ == "__main__":
 
 **For detailed documentation and examples, see:**
 
-- [LlamaIndex Documentation](lightrag/llm/Readme.md)
-- [Direct OpenAI Example](examples/lightrag_llamaindex_direct_demo.py)
-- [LiteLLM Proxy Example](examples/lightrag_llamaindex_litellm_demo.py)
+- [LlamaIndex Documentation](https://developers.llamaindex.ai/python/framework/)
+- [Direct OpenAI Example](examples/unofficial-sample/lightrag_llamaindex_direct_demo.py)
+- [LiteLLM Proxy Example](examples/unofficial-sample/lightrag_llamaindex_litellm_demo.py)
+- [LiteLLM Proxy with Opik Example](examples/unofficial-sample/lightrag_llamaindex_litellm_opik_demo.py)
@@ -1604,7 +1606,7 @@ Once installed and configured, Langfuse automatically traces all OpenAI LLM call
 
 ## RAGAS-based Evaluation
 
-**RAGAS** (Retrieval Augmented Generation Assessment) is a framework for reference-free evaluation of RAG systems using LLMs. There is an evaluation script based on RAGAS. For detailed information, please refer to [RAGAS-based Evaluation Framework](lightrag/evaluation/README.md).
+**RAGAS** (Retrieval Augmented Generation Assessment) is a framework for reference-free evaluation of RAG systems using LLMs. There is an evaluation script based on RAGAS. For detailed information, please refer to [RAGAS-based Evaluation Framework](lightrag/evaluation/README_EVALUASTION_RAGAS.md).
 
 ## Evaluation