Add 5 markdown documents that users can index to reproduce evaluation results.

Changes:
- Add sample_documents/ folder with 5 markdown files covering LightRAG features
- Update sample_dataset.json with 3 improved, specific test questions
- Shorten and correct the evaluation README (removed outdated info about mock responses)
- Add a sample_documents reference with an expected ~95% RAGAS score

Test results with the sample documents:
- Average RAGAS Score: 95.28%
- Faithfulness: 100%, Answer Relevance: 96.67%
- Context Recall: 88.89%, Context Precision: 95.56%
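For context, the reported average matches the unweighted mean of the four per-metric scores: (100.00 + 96.67 + 88.89 + 95.56) / 4 = 95.28.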
# Sample Documents for Evaluation
These markdown files correspond to the test questions in `../sample_dataset.json`.
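To see which questions those are, you can print the dataset entries. The `question` key below is an assumption about the JSON schema, so inspect the file for the actual field names:

```python
# Print the evaluation questions from sample_dataset.json.
# NOTE: the "question" key is assumed; check the file for the real schema.
import json
from pathlib import Path

data = json.loads(Path("lightrag/evaluation/sample_dataset.json").read_text(encoding="utf-8"))
entries = data if isinstance(data, list) else data.get("questions", [])
for i, entry in enumerate(entries, 1):
    print(i, entry.get("question", entry) if isinstance(entry, dict) else entry)
```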
## Usage
- Index the documents into LightRAG (via the WebUI, the API, or Python; a Python sketch follows this list)
- Run the evaluation: `python lightrag/evaluation/eval_rag_quality.py`
- Expected results: ~91-100% RAGAS score per question
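A minimal indexing sketch, assuming the OpenAI helper functions shown in the LightRAG README (`gpt_4o_mini_complete`, `openai_embed`) and that the documents sit in `lightrag/evaluation/sample_documents/`. Initialization steps vary between LightRAG versions, so compare against the examples shipped with your installed version:

```python
# Sketch: index the five sample documents with the LightRAG Python API.
# Assumes OPENAI_API_KEY is set; adjust imports/initialization to your version.
import asyncio
from pathlib import Path

from lightrag import LightRAG
from lightrag.llm.openai import gpt_4o_mini_complete, openai_embed
from lightrag.kg.shared_storage import initialize_pipeline_status

async def main():
    rag = LightRAG(
        working_dir="./rag_storage",          # graph and vector data are stored here
        llm_model_func=gpt_4o_mini_complete,  # LLM used for entity extraction
        embedding_func=openai_embed,          # embedding function for the vector store
    )
    await rag.initialize_storages()           # required before inserting documents
    await initialize_pipeline_status()

    # Insert only the numbered sample documents (skips this README).
    for doc in sorted(Path("lightrag/evaluation/sample_documents").glob("0*.md")):
        await rag.ainsert(doc.read_text(encoding="utf-8"))

asyncio.run(main())
```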
## Files
- `01_lightrag_overview.md` - LightRAG framework and hallucination problem
- `02_rag_architecture.md` - RAG system components
- `03_lightrag_improvements.md` - LightRAG vs. traditional RAG
- `04_supported_databases.md` - Vector database support
- `05_evaluation_and_deployment.md` - Metrics and deployment
## Note
Documents use clear entity-relationship patterns that match LightRAG's default entity extraction prompts. For better results with your own data, customize `lightrag/prompt.py`.
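One lightweight alternative to editing the file, shown here only as a sketch, is to append domain hints to the prompt template at runtime. This assumes `lightrag/prompt.py` exposes a `PROMPTS` dict with an `entity_extraction` entry (true in current versions, but verify yours) and that the patch runs before any documents are inserted:

```python
# Sketch: bias entity extraction toward a specific domain without editing prompt.py.
# Assumes PROMPTS["entity_extraction"] exists in your LightRAG version; verify first,
# and apply this before indexing so the modified prompt is actually used.
from lightrag import prompt

prompt.PROMPTS["entity_extraction"] += (
    "\nWhen extracting entities, pay particular attention to frameworks, "
    "databases, and evaluation metrics."
)
```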