LightRAG/lightrag
Samuel Chan 6ae27d8f06 Some enhancements:
- Enable the llm_cache storage to support get_by_mode_and_id, improving performance when using a real KV server
- Provide an option for developers to cache LLM responses when extracting entities from a document. This addresses the pain point that when the process fails partway, the already-processed chunks require calling the LLM again, wasting time and money. With the new option enabled (disabled by default), those results are cached, which can significantly save time and money for beginners.
2025-01-06 12:50:05 +08:00
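The idea behind the two changes above can be sketched as follows: a KV store that supports a targeted get_by_mode_and_id lookup, plus a wrapper that consults it before paying for an LLM call during entity extraction. This is a minimal illustrative sketch, not the actual LightRAG implementation; InMemoryKV, cache_key, and cached_llm_call are hypothetical names chosen for the example.

```python
import hashlib


class InMemoryKV:
    """Hypothetical stand-in for a real KV server backing llm_cache."""

    def __init__(self):
        self._data = {}  # mode -> {id: value}

    def upsert(self, mode, key, value):
        self._data.setdefault(mode, {})[key] = value

    def get_by_mode_and_id(self, mode, key):
        # A single targeted lookup instead of fetching the whole cache,
        # which is what makes a remote KV backend perform well.
        return self._data.get(mode, {}).get(key)


def cache_key(prompt):
    # Hash the prompt so identical chunks map to the same cache entry.
    return hashlib.md5(prompt.encode("utf-8")).hexdigest()


def cached_llm_call(kv, mode, prompt, llm_fn):
    """Return a cached LLM response if present; otherwise call and cache."""
    key = cache_key(prompt)
    hit = kv.get_by_mode_and_id(mode, key)
    if hit is not None:
        return hit  # re-run after a failure reuses this: no repeat LLM cost
    result = llm_fn(prompt)
    kv.upsert(mode, key, result)
    return result
```

With this shape, if document processing fails midway, a retry finds the already-extracted chunks in the cache and only pays for the chunks that were never completed.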
..
api applied linting 2025-01-04 02:23:39 +01:00
kg Some enhancements: 2025-01-06 12:50:05 +08:00
__init__.py Update README.md 2024-12-31 17:25:57 +08:00
base.py feat(lightrag): Implement mix search mode combining knowledge graph and vector retrieval 2024-12-28 11:56:28 +08:00
lightrag.py Some enhancements: 2025-01-06 12:50:05 +08:00
llm.py fix: fix formatting issues 2024-12-31 01:33:14 +08:00
operate.py Some enhancements: 2025-01-06 12:50:05 +08:00
prompt.py feat: enhance temporal support for knowledge graph relationships 2024-12-29 15:25:57 +08:00
storage.py fix linting errors 2024-12-31 17:32:04 +08:00
utils.py Some enhancements: 2025-01-06 12:50:05 +08:00