Add 5 markdown documents that users can index to reproduce evaluation results.

Changes:
- Add sample_documents/ folder with 5 markdown files covering LightRAG features
- Update sample_dataset.json with 3 improved, specific test questions
- Shorten and correct evaluation README (removed outdated info about mock responses)
- Add sample_documents reference with expected ~95% RAGAS score

Test results with sample documents:
- Average RAGAS Score: 95.28%
- Faithfulness: 100%, Answer Relevance: 96.67%
- Context Recall: 88.89%, Context Precision: 95.56%
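For reference, the reported numbers are standard RAGAS metrics. The sketch below shows one way such scores can be computed with the `ragas` library after indexing sample_documents/ and querying LightRAG; it is a minimal illustration, not the repo's evaluation script. The column names follow ragas 0.1.x conventions, and the example row contents are assumptions, not the actual schema or contents of sample_dataset.json.

```python
# Minimal sketch of a RAGAS evaluation run (assumes ragas 0.1.x and the
# HuggingFace datasets package; the repo's own evaluation code may differ).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_recall,
    context_precision,
)

# One illustrative row. In practice, "answer" and "contexts" would come from
# querying LightRAG after indexing the files in sample_documents/, and
# "question"/"ground_truth" would come from the test dataset.
rows = {
    "question": ["What query modes does LightRAG support?"],
    "answer": ["LightRAG supports local, global, hybrid, naive, and mix query modes."],
    "contexts": [["LightRAG exposes several query modes: local, global, hybrid, naive, and mix."]],
    "ground_truth": ["Local, global, hybrid, naive, and mix."],
}

# evaluate() uses an LLM judge; by default ragas expects OPENAI_API_KEY to be set.
result = evaluate(
    Dataset.from_dict(rows),
    metrics=[faithfulness, answer_relevancy, context_recall, context_precision],
)
print(result)  # per-metric scores in [0, 1], comparable to the percentages above
```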