Add 5 markdown documents that users can index to reproduce evaluation results.

Changes:
- Add sample_documents/ folder with 5 markdown files covering LightRAG features
- Update sample_dataset.json with 3 improved, specific test questions
- Shorten and correct evaluation README (removed outdated info about mock responses)
- Add sample_documents reference with expected ~95% RAGAS score

Test results with sample documents:
- Average RAGAS Score: 95.28%
- Faithfulness: 100%, Answer Relevance: 96.67%
- Context Recall: 88.89%, Context Precision: 95.56%
# Sample Documents for Evaluation
These markdown files provide the source content for the test questions in `../sample_dataset.json`; indexing them lets you reproduce the evaluation results.
## Usage
1. **Index documents** into LightRAG (via WebUI, API, or Python; see the sketch after this list)
2. **Run evaluation**: `python lightrag/evaluation/eval_rag_quality.py`
3. **Expected results**: ~91-100% RAGAS score per question
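
For step 1, a minimal Python sketch of indexing these files, assuming the OpenAI-backed helpers from the upstream LightRAG README. Exact import paths (e.g. `lightrag.llm.openai`) and required initialization steps vary between LightRAG versions, so check your installed release; the `DOCS_DIR` path below assumes you run from the repo root.

```python
import os

from lightrag import LightRAG
from lightrag.llm.openai import gpt_4o_mini_complete, openai_embed  # paths vary by version

DOCS_DIR = "lightrag/evaluation/sample_documents"  # this folder, relative to repo root
WORKING_DIR = "./rag_storage"  # where LightRAG persists its KV / vector / graph stores
os.makedirs(WORKING_DIR, exist_ok=True)

# Requires OPENAI_API_KEY in the environment for these model functions.
rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=gpt_4o_mini_complete,
    embedding_func=openai_embed,
)

# insert() accepts raw text; newer releases may require initializing
# storages first (via the async API) -- see the upstream docs.
for name in sorted(os.listdir(DOCS_DIR)):
    if name.endswith(".md"):
        with open(os.path.join(DOCS_DIR, name), encoding="utf-8") as f:
            rag.insert(f.read())
```
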
## Files
- `01_lightrag_overview.md` - LightRAG framework and hallucination problem
- `02_rag_architecture.md` - RAG system components
- `03_lightrag_improvements.md` - LightRAG vs traditional RAG
- `04_supported_databases.md` - Vector database support
- `05_evaluation_and_deployment.md` - Metrics and deployment
## Note
Documents use clear entity-relationship patterns that work well with LightRAG's default entity extraction prompts. For better results on your own data, customize `lightrag/prompt.py`.
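
If you prefer not to edit the file directly, the prompt definitions can also be overridden at runtime. A minimal sketch, assuming `lightrag/prompt.py` exposes a module-level `PROMPTS` dict (true in recent releases, but key names differ between versions, so inspect your installed copy first):

```python
from lightrag import prompt

# Hypothetical domain-specific entity types, for illustration only; verify
# that the "DEFAULT_ENTITY_TYPES" key exists in your LightRAG version
# before relying on this override.
prompt.PROMPTS["DEFAULT_ENTITY_TYPES"] = [
    "framework",
    "database",
    "metric",
    "deployment_option",
]
```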