diff --git a/README-zh.md b/README-zh.md
index 79710d3a..4bb8d4b5 100644
--- a/README-zh.md
+++ b/README-zh.md
@@ -53,24 +53,24 @@
 ## 🎉 新闻
-- [x] [2025.11.05]🎯📢添加**基于RAGAS的**评估框架和**Langfuse**可观测性支持(API可随查询结果返回召回上下文)。
-- [x] [2025.10.22]🎯📢消除处理**大规模数据集**的性能瓶颈。
-- [x] [2025.09.15]🎯📢显著提升**小型LLM**(如Qwen3-30B-A3B)的知识图谱提取准确性。
-- [x] [2025.08.29]🎯📢现已支持**Reranker**,显著提升混合查询性能(现已设为默认查询模式)。
-- [x] [2025.08.04]🎯📢支持**文档删除**并重新生成知识图谱以确保查询性能。
-- [x] [2025.06.16]🎯📢我们的团队发布了[RAG-Anything](https://github.com/HKUDS/RAG-Anything),一个用于无缝处理文本、图像、表格和方程式的全功能多模态 RAG 系统。
-- [x] [2025.06.05]🎯📢LightRAG现已集成[RAG-Anything](https://github.com/HKUDS/RAG-Anything),支持全面的多模态文档解析与RAG能力(PDF、图片、Office文档、表格、公式等)。详见下方[多模态处理模块](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#多模态文档处理rag-anything集成)。
-- [x] [2025.03.18]🎯📢LightRAG现已支持参考文献功能。
-- [x] [2025.02.12]🎯📢现在您可以使用MongoDB作为一体化存储解决方案。
-- [x] [2025.02.05]🎯📢我们团队发布了[VideoRAG](https://github.com/HKUDS/VideoRAG),用于理解超长上下文视频。
-- [x] [2025.01.13]🎯📢我们团队发布了[MiniRAG](https://github.com/HKUDS/MiniRAG),使用小型模型简化RAG。
-- [x] [2025.01.06]🎯📢现在您可以使用PostgreSQL作为一体化存储解决方案。
-- [x] [2024.11.19]🎯📢LightRAG的综合指南现已在[LearnOpenCV](https://learnopencv.com/lightrag)上发布。非常感谢博客作者。
-- [x] [2024.11.09]🎯📢推出LightRAG Webui,允许您插入、查询、可视化LightRAG知识。
-- [x] [2024.11.04]🎯📢现在您可以[使用Neo4J进行存储](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#using-neo4j-for-storage)。
-- [x] [2024.10.18]🎯📢我们添加了[LightRAG介绍视频](https://youtu.be/oageL-1I0GE)的链接。感谢作者!
-- [x] [2024.10.17]🎯📢我们创建了一个[Discord频道](https://discord.gg/yF2MmDJyGJ)!欢迎加入分享和讨论!🎉🎉
-- [x] [2024.10.16]🎯📢LightRAG现在支持[Ollama模型](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#quick-start)!
+- [x] [2025.11.05]🎯添加**基于RAGAS的**评估框架和**Langfuse**可观测性支持(API可随查询结果返回召回上下文)。
+- [x] [2025.10.22]🎯消除处理**大规模数据集**的性能瓶颈。
+- [x] [2025.09.15]🎯显著提升**小型LLM**(如Qwen3-30B-A3B)的知识图谱提取准确性。
+- [x] [2025.08.29]🎯现已支持**Reranker**,显著提升混合查询性能(现已设为默认查询模式)。
+- [x] [2025.08.04]🎯支持**文档删除**并重新生成知识图谱以确保查询性能。
+- [x] [2025.06.16]🎯我们的团队发布了[RAG-Anything](https://github.com/HKUDS/RAG-Anything),一个用于无缝处理文本、图像、表格和方程式的全功能多模态 RAG 系统。
+- [x] [2025.06.05]🎯LightRAG现已集成[RAG-Anything](https://github.com/HKUDS/RAG-Anything),支持全面的多模态文档解析与RAG能力(PDF、图片、Office文档、表格、公式等)。详见下方[多模态处理模块](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#多模态文档处理rag-anything集成)。
+- [x] [2025.03.18]🎯LightRAG现已支持参考文献功能。
+- [x] [2025.02.12]🎯现在您可以使用MongoDB作为一体化存储解决方案。
+- [x] [2025.02.05]🎯我们团队发布了[VideoRAG](https://github.com/HKUDS/VideoRAG),用于理解超长上下文视频。
+- [x] [2025.01.13]🎯我们团队发布了[MiniRAG](https://github.com/HKUDS/MiniRAG),使用小型模型简化RAG。
+- [x] [2025.01.06]🎯现在您可以使用PostgreSQL作为一体化存储解决方案。
+- [x] [2024.11.19]🎯LightRAG的综合指南现已在[LearnOpenCV](https://learnopencv.com/lightrag)上发布。非常感谢博客作者。
+- [x] [2024.11.09]🎯推出LightRAG Webui,允许您插入、查询、可视化LightRAG知识。
+- [x] [2024.11.04]🎯现在您可以[使用Neo4J进行存储](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#using-neo4j-for-storage)。
+- [x] [2024.10.18]🎯我们添加了[LightRAG介绍视频](https://youtu.be/oageL-1I0GE)的链接。感谢作者!
+- [x] [2024.10.17]🎯我们创建了一个[Discord频道](https://discord.gg/yF2MmDJyGJ)!欢迎加入分享和讨论!🎉🎉
+- [x] [2024.10.16]🎯LightRAG现在支持[Ollama模型](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#quick-start)!
diff --git a/README.md b/README.md
index 01509e03..1a946e8d 100644
--- a/README.md
+++ b/README.md
@@ -51,24 +51,24 @@
 ---
 ## 🎉 News
-- [x] [2025.11.05]🎯📢Add **RAGAS-based** Evaluation Framework and **Langfuse** observability for LightRAG (API can return retrieved contexts with query results).
-- [x] [2025.10.22]🎯📢Eliminate bottlenecks in processing **large-scale datasets**.
-- [x] [2025.09.15]🎯📢Significantly enhances KG extraction accuracy for **small LLMs** like Qwen3-30B-A3B.
-- [x] [2025.08.29]🎯📢**Reranker** is supported now , significantly boosting performance for mixed queries(Set as default query mode now).
-- [x] [2025.08.04]🎯📢**Document deletion** with KG regeneration to ensure query performance.
-- [x] [2025.06.16]🎯📢Our team has released [RAG-Anything](https://github.com/HKUDS/RAG-Anything) an All-in-One Multimodal RAG System for seamless text, image, table, and equation processing.
-- [x] [2025.06.05]🎯📢LightRAG now supports comprehensive multimodal data handling through [RAG-Anything](https://github.com/HKUDS/RAG-Anything) integration, enabling seamless document parsing and RAG capabilities across diverse formats including PDFs, images, Office documents, tables, and formulas. Please refer to the new [multimodal section](https://github.com/HKUDS/LightRAG/?tab=readme-ov-file#multimodal-document-processing-rag-anything-integration) for details.
-- [x] [2025.03.18]🎯📢LightRAG now supports citation functionality, enabling proper source attribution.
-- [x] [2025.02.12]🎯📢You can now use MongoDB as all in-one Storage.
-- [x] [2025.02.05]🎯📢Our team has released [VideoRAG](https://github.com/HKUDS/VideoRAG) understanding extremely long-context videos.
-- [x] [2025.01.13]🎯📢Our team has released [MiniRAG](https://github.com/HKUDS/MiniRAG) making RAG simpler with small models.
-- [x] [2025.01.06]🎯📢You can now use PostgreSQL as all in-one Storage.
-- [x] [2024.11.19]🎯📢A comprehensive guide to LightRAG is now available on [LearnOpenCV](https://learnopencv.com/lightrag). Many thanks to the blog author.
-- [x] [2024.11.09]🎯📢Introducing the LightRAG Webui, which allows you to insert, query, visualize LightRAG knowledge.
-- [x] [2024.11.04]🎯📢You can now [use Neo4J for Storage](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#using-neo4j-for-storage).
-- [x] [2024.10.18]🎯📢We've added a link to a [LightRAG Introduction Video](https://youtu.be/oageL-1I0GE). Thanks to the author!
-- [x] [2024.10.17]🎯📢We have created a [Discord channel](https://discord.gg/yF2MmDJyGJ)! Welcome to join for sharing and discussions! 🎉🎉
-- [x] [2024.10.16]🎯📢LightRAG now supports [Ollama models](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#quick-start)!
+- [x] [2025.11.05]🎯Add **RAGAS-based** Evaluation Framework and **Langfuse** observability for LightRAG (API can return retrieved contexts with query results).
+- [x] [2025.10.22]🎯Eliminate bottlenecks in processing **large-scale datasets**.
+- [x] [2025.09.15]🎯Significantly enhances KG extraction accuracy for **small LLMs** like Qwen3-30B-A3B.
+- [x] [2025.08.29]🎯**Reranker** is now supported, significantly boosting performance for mixed queries (now set as the default query mode).
+- [x] [2025.08.04]🎯**Document deletion** with KG regeneration to ensure query performance.
+- [x] [2025.06.16]🎯Our team has released [RAG-Anything](https://github.com/HKUDS/RAG-Anything), an All-in-One Multimodal RAG System for seamless text, image, table, and equation processing.
+- [x] [2025.06.05]🎯LightRAG now supports comprehensive multimodal data handling through [RAG-Anything](https://github.com/HKUDS/RAG-Anything) integration, enabling seamless document parsing and RAG capabilities across diverse formats including PDFs, images, Office documents, tables, and formulas. Please refer to the new [multimodal section](https://github.com/HKUDS/LightRAG/?tab=readme-ov-file#multimodal-document-processing-rag-anything-integration) for details.
+- [x] [2025.03.18]🎯LightRAG now supports citation functionality, enabling proper source attribution.
+- [x] [2025.02.12]🎯You can now use MongoDB as an all-in-one storage solution.
+- [x] [2025.02.05]🎯Our team has released [VideoRAG](https://github.com/HKUDS/VideoRAG) for understanding extremely long-context videos.
+- [x] [2025.01.13]🎯Our team has released [MiniRAG](https://github.com/HKUDS/MiniRAG), making RAG simpler with small models.
+- [x] [2025.01.06]🎯You can now use PostgreSQL as an all-in-one storage solution.
+- [x] [2024.11.19]🎯A comprehensive guide to LightRAG is now available on [LearnOpenCV](https://learnopencv.com/lightrag). Many thanks to the blog author.
+- [x] [2024.11.09]🎯Introducing the LightRAG Webui, which allows you to insert, query, and visualize LightRAG knowledge.
+- [x] [2024.11.04]🎯You can now [use Neo4J for Storage](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#using-neo4j-for-storage).
+- [x] [2024.10.18]🎯We've added a link to a [LightRAG Introduction Video](https://youtu.be/oageL-1I0GE). Thanks to the author!
+- [x] [2024.10.17]🎯We have created a [Discord channel](https://discord.gg/yF2MmDJyGJ)! Welcome to join for sharing and discussions! 🎉🎉
+- [x] [2024.10.16]🎯LightRAG now supports [Ollama models](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#quick-start)!