cherry-pick 7abc6877
parent 2debc9288d
commit 9a11d689a2
4 changed files with 386 additions and 19 deletions

23 env.example
@@ -385,3 +385,26 @@ MEMGRAPH_USERNAME=
MEMGRAPH_PASSWORD=
MEMGRAPH_DATABASE=memgraph
# MEMGRAPH_WORKSPACE=forced_workspace_name

############################
### Evaluation Configuration
############################
### RAGAS evaluation models (used for RAG quality assessment)
### Default uses OpenAI models for evaluation
# EVAL_LLM_MODEL=gpt-4.1
# EVAL_EMBEDDING_MODEL=text-embedding-3-large
### API key for evaluation (fallback to OPENAI_API_KEY if not set)
# EVAL_LLM_BINDING_API_KEY=your_api_key
### Custom endpoint for evaluation models (optional, for OpenAI-compatible services)
# EVAL_LLM_BINDING_HOST=https://api.openai.com/v1

### Evaluation concurrency and rate limiting
### Number of concurrent test case evaluations (default: 1 for serial evaluation)
### Lower values reduce API rate limit issues but increase evaluation time
# EVAL_MAX_CONCURRENT=3
### TOP_K query parameter of LightRAG (default: 10)
### Number of entities or relations retrieved from KG
# EVAL_QUERY_TOP_K=10
### LLM request retry and timeout settings for evaluation
# EVAL_LLM_MAX_RETRIES=5
# EVAL_LLM_TIMEOUT=120
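
For reference, the evaluation script consumes these variables with plain `os.getenv` lookups (the `int(os.getenv(...))` pattern is visible in `eval_rag_quality.py`). The snippet below is only a sketch of that pattern; the `EvalSettings` dataclass and its grouping are illustrative and not part of the real code.

```python
# Illustrative only: how the evaluation settings above can be read.
# Defaults mirror the documented values; the dataclass is not in the real script.
import os
from dataclasses import dataclass

@dataclass
class EvalSettings:
    llm_model: str = os.getenv("EVAL_LLM_MODEL", "gpt-4o-mini")
    embedding_model: str = os.getenv("EVAL_EMBEDDING_MODEL", "text-embedding-3-small")
    api_key: str | None = os.getenv("EVAL_LLM_BINDING_API_KEY") or os.getenv("OPENAI_API_KEY")
    base_url: str | None = os.getenv("EVAL_LLM_BINDING_HOST")
    max_concurrent: int = int(os.getenv("EVAL_MAX_CONCURRENT", "1"))
    query_top_k: int = int(os.getenv("EVAL_QUERY_TOP_K", "10"))
    max_retries: int = int(os.getenv("EVAL_LLM_MAX_RETRIES", "5"))
    timeout: int = int(os.getenv("EVAL_LLM_TIMEOUT", "120"))

print(EvalSettings())
```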

323 lightrag/evaluation/README.md (new file)
@@ -0,0 +1,323 @@
# 📊 LightRAG Evaluation Framework

RAGAS-based offline evaluation of your LightRAG system.

## What is RAGAS?

**RAGAS** (Retrieval Augmented Generation Assessment) is a framework for reference-free evaluation of RAG systems using LLMs.

Instead of requiring human-annotated ground truth, RAGAS scores each answer with LLM-based evaluation metrics:

### Core Metrics

| Metric | What It Measures | Good Score |
|--------|-----------------|-----------|
| **Faithfulness** | Is the answer factually accurate based on retrieved context? | > 0.80 |
| **Answer Relevance** | Is the answer relevant to the user's question? | > 0.80 |
| **Context Recall** | Was all relevant information retrieved from documents? | > 0.80 |
| **Context Precision** | Is retrieved context clean without irrelevant noise? | > 0.80 |
| **RAGAS Score** | Overall quality metric (average of above) | > 0.80 |
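
For orientation, the snippet below shows how these metrics are typically computed with RAGAS, following the same `evaluate()` call pattern and metric imports that appear in `eval_rag_quality.py`. The sample data, column names, and the implicit use of OpenAI defaults are illustrative assumptions; check the RAGAS docs for the schema expected by your installed version.

```python
# Minimal sketch, not the project's exact code: score one RAG sample with
# the four core metrics. Column names and sample data are assumptions.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    answer_relevancy,
    context_precision,
    context_recall,
    faithfulness,
)

samples = {
    "question": ["What is LightRAG?"],
    "answer": ["LightRAG is a retrieval-augmented generation framework."],
    "contexts": [["LightRAG combines LLMs with external knowledge retrieval."]],
    "ground_truth": ["LightRAG is a RAG framework that grounds LLM answers in documents."],
}

# Uses OPENAI_API_KEY by default; per-metric scores land in [0, 1].
result = evaluate(
    dataset=Dataset.from_dict(samples),
    metrics=[faithfulness, answer_relevancy, context_recall, context_precision],
)
print(result)
```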

---

## 📁 Structure

```
lightrag/evaluation/
├── eval_rag_quality.py                # Main evaluation script
├── sample_dataset.json                # 3 test questions about LightRAG
├── sample_documents/                  # Matching markdown files for testing
│   ├── 01_lightrag_overview.md
│   ├── 02_rag_architecture.md
│   ├── 03_lightrag_improvements.md
│   ├── 04_supported_databases.md
│   ├── 05_evaluation_and_deployment.md
│   └── README.md
├── __init__.py                        # Package init
├── results/                           # Output directory
│   ├── results_YYYYMMDD_HHMMSS.json   # Raw metrics in JSON
│   └── results_YYYYMMDD_HHMMSS.csv    # Metrics in CSV format
└── README.md                          # This file
```

**Quick Test:** Index the files from `sample_documents/` into LightRAG, then run the evaluator to reproduce the reference results (~89-100% RAGAS score per question).

---

## 🚀 Quick Start

### 1. Install Dependencies

```bash
pip install ragas datasets langfuse
```

Or rely on the project's own dependencies (already included in pyproject.toml):

```bash
pip install -e ".[offline-llm]"
```

### 2. Run Evaluation

```bash
cd /path/to/LightRAG
python -m lightrag.evaluation.eval_rag_quality
```

Or directly:

```bash
python lightrag/evaluation/eval_rag_quality.py
```

### 3. View Results

Results are saved automatically in `lightrag/evaluation/results/`:

```
results/
├── results_20241023_143022.json   ← Raw metrics in JSON format
└── results_20241023_143022.csv    ← Metrics in CSV format (for spreadsheets)
```

**Results include:**
- ✅ Overall RAGAS score
- 📊 Per-metric averages (Faithfulness, Answer Relevance, Context Recall, Context Precision)
- 📋 Individual test case results
- 📈 Performance breakdown by question
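
Once a run finishes, the saved files can also be inspected programmatically. This is only a sketch: it assumes you run it from the project root, and the JSON payload should be checked against a real results file, since its exact schema is not documented here.

```python
# Illustrative only: load the newest results file and preview its contents.
import json
from pathlib import Path

results_dir = Path("lightrag/evaluation/results")
latest = max(results_dir.glob("results_*.json"), key=lambda p: p.name)

with latest.open(encoding="utf-8") as f:
    data = json.load(f)

print(f"Loaded {latest.name}")
print(json.dumps(data, indent=2, ensure_ascii=False)[:2000])  # preview the metrics payload
```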

---

## ⚙️ Configuration

### Environment Variables

The evaluation framework supports customization through environment variables:

| Variable | Default | Description |
|----------|---------|-------------|
| `EVAL_LLM_MODEL` | `gpt-4o-mini` | LLM model used for RAGAS evaluation |
| `EVAL_EMBEDDING_MODEL` | `text-embedding-3-small` | Embedding model for evaluation |
| `EVAL_LLM_BINDING_API_KEY` | (falls back to `OPENAI_API_KEY`) | API key for evaluation models |
| `EVAL_LLM_BINDING_HOST` | (optional) | Custom endpoint URL for OpenAI-compatible services |
| `EVAL_MAX_CONCURRENT` | `1` | Number of concurrent test case evaluations (1 = serial) |
| `EVAL_QUERY_TOP_K` | `10` | `top_k` query parameter of LightRAG (number of entities/relations retrieved per query) |
| `EVAL_LLM_MAX_RETRIES` | `5` | Maximum LLM request retries |
| `EVAL_LLM_TIMEOUT` | `120` | LLM request timeout in seconds |

### Usage Examples

**Default Configuration (OpenAI):**
```bash
export OPENAI_API_KEY=sk-xxx
python lightrag/evaluation/eval_rag_quality.py
```

**Custom Model:**
```bash
export OPENAI_API_KEY=sk-xxx
export EVAL_LLM_MODEL=gpt-4.1
export EVAL_EMBEDDING_MODEL=text-embedding-3-large
python lightrag/evaluation/eval_rag_quality.py
```

**OpenAI-Compatible Endpoint:**
```bash
export EVAL_LLM_BINDING_API_KEY=your-custom-key
export EVAL_LLM_BINDING_HOST=https://api.openai.com/v1
export EVAL_LLM_MODEL=qwen-plus
python lightrag/evaluation/eval_rag_quality.py
```

### Concurrency Control & Rate Limiting

The evaluation framework includes built-in concurrency control to prevent API rate limiting issues:

**Why Concurrency Control Matters:**
- RAGAS internally makes many concurrent LLM calls for each test case
- The Context Precision metric calls the LLM once per retrieved document
- Without control, this can easily exceed API rate limits (see the throttling sketch below)
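
The evaluator itself enforces this limit with an `asyncio.Semaphore` sized by `EVAL_MAX_CONCURRENT` (visible in `eval_rag_quality.py`). The snippet below is a simplified sketch of that pattern; the helper names are illustrative, not the script's real ones.

```python
# Simplified sketch of the throttling pattern used by the evaluator:
# at most EVAL_MAX_CONCURRENT test cases are evaluated at once.
# evaluate_one_case() is a placeholder for the real per-case evaluation.
import asyncio
import os

async def evaluate_one_case(case: dict) -> dict:
    await asyncio.sleep(0.1)          # stand-in for RAGAS + LLM calls
    return {"question": case["question"], "ragas_score": 1.0}

async def evaluate_all(test_cases: list[dict]) -> list[dict]:
    max_concurrent = int(os.getenv("EVAL_MAX_CONCURRENT", "1"))
    semaphore = asyncio.Semaphore(max_concurrent)

    async def bounded(case: dict) -> dict:
        async with semaphore:         # serial when EVAL_MAX_CONCURRENT=1
            return await evaluate_one_case(case)

    return await asyncio.gather(*(bounded(c) for c in test_cases))

if __name__ == "__main__":
    cases = [{"question": "What is LightRAG?"}]
    print(asyncio.run(evaluate_all(cases)))
```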

**Default Configuration (Conservative):**
```bash
EVAL_MAX_CONCURRENT=1    # Serial evaluation (one test at a time)
EVAL_QUERY_TOP_K=10      # TOP_K query parameter of LightRAG
EVAL_LLM_MAX_RETRIES=5   # Retry failed requests 5 times
EVAL_LLM_TIMEOUT=120     # 2-minute timeout per request
```

**If You Have Higher API Quotas:**
```bash
EVAL_MAX_CONCURRENT=2    # Evaluate 2 tests in parallel
EVAL_QUERY_TOP_K=20      # TOP_K query parameter of LightRAG
```

**Common Issues and Solutions:**

| Issue | Solution |
|-------|----------|
| **Warning: "LM returned 1 generations instead of 3"** | Reduce `EVAL_MAX_CONCURRENT` to 1 or decrease `EVAL_QUERY_TOP_K` |
| **Context Precision returns NaN** | Lower `EVAL_QUERY_TOP_K` to reduce LLM calls per test case |
| **Rate limit errors (429)** | Increase `EVAL_LLM_MAX_RETRIES` and decrease `EVAL_MAX_CONCURRENT` |
| **Request timeouts** | Increase `EVAL_LLM_TIMEOUT` to 180 or higher |

---

## 📝 Test Dataset

`sample_dataset.json` contains 3 generic questions about LightRAG. Replace them with questions matching YOUR indexed documents.

**Custom Test Cases:**

```json
{
  "test_cases": [
    {
      "question": "Your question here",
      "ground_truth": "Expected answer from your data",
      "context": "topic"
    }
  ]
}
```
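
Before running the evaluator against a custom dataset, a quick sanity check like the one below can catch missing fields. It is only a sketch: it assumes the file lives at the default path and follows the structure shown above.

```python
# Sanity-check sample_dataset.json before running the evaluator.
# Field names follow the example above; extra fields are ignored.
import json
from pathlib import Path

dataset_path = Path("lightrag/evaluation/sample_dataset.json")
data = json.loads(dataset_path.read_text(encoding="utf-8"))

required = {"question", "ground_truth"}
for i, case in enumerate(data["test_cases"], start=1):
    missing = required - case.keys()
    if missing:
        raise ValueError(f"test case {i} is missing fields: {sorted(missing)}")
    print(f"{i}. {case['question']}")
```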

---

## 📊 Interpreting Results

### Score Ranges

- **0.80-1.00**: ✅ Excellent (Production-ready)
- **0.60-0.80**: ⚠️ Good (Room for improvement)
- **0.40-0.60**: ❌ Poor (Needs optimization)
- **0.00-0.40**: 🔴 Critical (Major issues)
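
If you post-process the saved scores yourself, a tiny helper that mirrors these buckets can be handy (purely illustrative, not part of the framework):

```python
# Map a RAGAS score in [0, 1] to the rating buckets above (illustrative).
def score_label(score: float) -> str:
    if score >= 0.80:
        return "Excellent (Production-ready)"
    if score >= 0.60:
        return "Good (Room for improvement)"
    if score >= 0.40:
        return "Poor (Needs optimization)"
    return "Critical (Major issues)"

print(score_label(0.87))  # Excellent (Production-ready)
```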

### What Low Scores Mean

| Metric | Low Score Indicates |
|--------|-------------------|
| **Faithfulness** | Responses contain hallucinations or incorrect information |
| **Answer Relevance** | Answers don't match what users asked |
| **Context Recall** | Important information is missing from retrieval |
| **Context Precision** | Retrieved documents contain irrelevant noise |

### Optimization Tips

1. **Low Faithfulness**:
   - Improve entity extraction quality
   - Improve document chunking
   - Tune the generation temperature

2. **Low Answer Relevance**:
   - Improve prompt engineering
   - Improve query understanding
   - Check the semantic similarity threshold

3. **Low Context Recall**:
   - Increase the retrieval `top_k`
   - Use a stronger embedding model
   - Improve document preprocessing

4. **Low Context Precision**:
   - Use smaller, more focused chunks
   - Apply better filtering
   - Improve the chunking strategy

---

## 📚 Resources

- [RAGAS Documentation](https://docs.ragas.io/)
- [RAGAS GitHub](https://github.com/explodinggradients/ragas)

---

## 🐛 Troubleshooting

### "ModuleNotFoundError: No module named 'ragas'"

```bash
pip install ragas datasets
```

### "Warning: LM returned 1 generations instead of requested 3" or Context Precision NaN

**Cause**: This warning indicates API rate limiting or concurrent request overload:
- RAGAS makes multiple LLM calls per test case (faithfulness, relevancy, recall, precision)
- Context Precision calls the LLM once per retrieved document (with `EVAL_QUERY_TOP_K=10`, that's 10 calls)
- Concurrent evaluation multiplies these calls: `EVAL_MAX_CONCURRENT × LLM calls per test`

**Solutions** (in order of effectiveness):

1. **Serial Evaluation** (Default):
   ```bash
   export EVAL_MAX_CONCURRENT=1
   python lightrag/evaluation/eval_rag_quality.py
   ```

2. **Reduce Retrieved Documents**:
   ```bash
   export EVAL_QUERY_TOP_K=5   # Halves the Context Precision LLM calls
   python lightrag/evaluation/eval_rag_quality.py
   ```

3. **Increase Retries & Timeout**:
   ```bash
   export EVAL_LLM_MAX_RETRIES=10
   export EVAL_LLM_TIMEOUT=180
   python lightrag/evaluation/eval_rag_quality.py
   ```

4. **Use a Higher-Quota API** (if available):
   - Upgrade to OpenAI Tier 2+ for higher RPM limits
   - Use a self-hosted OpenAI-compatible service with no rate limits

### "AttributeError: 'InstructorLLM' object has no attribute 'agenerate_prompt'" or NaN results

This error occurs with RAGAS 0.3.x when the LLM and Embeddings are not explicitly configured. The evaluation framework now handles this automatically by:
- Using environment variables to configure the evaluation models
- Creating proper LLM and Embeddings instances for RAGAS

**Solution**: Ensure you have set one of the following:
- `OPENAI_API_KEY` environment variable (default)
- `EVAL_LLM_BINDING_API_KEY` for a custom API key

The framework will then configure the evaluation models automatically, roughly as sketched below.
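
The sketch follows the imports used by `eval_rag_quality.py` (`LangchainLLMWrapper`, `ChatOpenAI`, `OpenAIEmbeddings`); the model names, env-var fallback logic, and whether the embeddings also need wrapping depend on your setup and RAGAS version.

```python
# Sketch of explicit model configuration for RAGAS, mirroring the imports
# in eval_rag_quality.py. Model names and env-var handling are illustrative.
import os

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from ragas.llms import LangchainLLMWrapper

api_key = os.getenv("EVAL_LLM_BINDING_API_KEY") or os.getenv("OPENAI_API_KEY")
base_url = os.getenv("EVAL_LLM_BINDING_HOST")  # optional OpenAI-compatible host

eval_llm = LangchainLLMWrapper(
    ChatOpenAI(
        model=os.getenv("EVAL_LLM_MODEL", "gpt-4o-mini"),
        api_key=api_key,
        base_url=base_url,
    )
)
eval_embeddings = OpenAIEmbeddings(
    model=os.getenv("EVAL_EMBEDDING_MODEL", "text-embedding-3-small"),
    api_key=api_key,
    base_url=base_url,
)

# These are then passed to ragas.evaluate(..., llm=eval_llm, embeddings=eval_embeddings).
```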

### "No sample_dataset.json found"

Make sure you're running from the project root:

```bash
cd /path/to/LightRAG
python lightrag/evaluation/eval_rag_quality.py
```

### "LLM API errors during evaluation"

The evaluation uses your configured LLM (OpenAI by default). Ensure that:
- API keys are set in `.env`
- You have sufficient API quota
- Your network connection is stable

### Evaluation requires a running LightRAG API

The evaluator queries a running LightRAG API server at `http://localhost:9621`. Make sure that:
1. The LightRAG API server is running (`python lightrag/api/lightrag_server.py`)
2. Documents are indexed in your LightRAG instance
3. The API is accessible at the configured URL (a quick connectivity check is sketched below)
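
A minimal connectivity check, assuming the server exposes `/health` and `/query` routes at the default port; verify the exact paths and payload against your LightRAG API documentation:

```python
# Quick connectivity check before running the evaluator.
# The /health and /query routes are assumptions about the LightRAG API
# server; adjust the paths and payload to match your deployment.
import requests

base_url = "http://localhost:9621"

health = requests.get(f"{base_url}/health", timeout=10)
print("health:", health.status_code)

resp = requests.post(
    f"{base_url}/query",
    json={"query": "What is LightRAG?", "mode": "hybrid"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```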

---

## 📝 Next Steps

1. Index documents into LightRAG (WebUI or API)
2. Start the LightRAG API server
3. Run `python lightrag/evaluation/eval_rag_quality.py`
4. Review the results (JSON/CSV) in the `results/` folder
5. Adjust entity extraction prompts or retrieval settings based on the scores

---

**Happy Evaluating! 🚀**

lightrag/evaluation/eval_rag_quality.py
@@ -52,10 +52,10 @@ try:
    from datasets import Dataset
    from ragas import evaluate
    from ragas.metrics import (
        AnswerRelevancy,
        ContextPrecision,
        ContextRecall,
        Faithfulness,
        answer_relevancy,
        context_precision,
        context_recall,
        faithfulness,
    )
    from ragas.llms import LangchainLLMWrapper
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings
@@ -185,9 +185,13 @@ class RAGEvaluator:

    def _display_configuration(self):
        """Display all evaluation configuration settings"""
        logger.info("EVALUATION CONFIGURATION")
        logger.info("")
        logger.info("%s", "=" * 70)
        logger.info("🔧 EVALUATION CONFIGURATION")
        logger.info("%s", "=" * 70)

        logger.info(" Evaluation Models:")
        logger.info("")
        logger.info("Evaluation Models:")
        logger.info(" • LLM Model: %s", self.eval_model)
        logger.info(" • Embedding Model: %s", self.eval_embedding_model)
        if self.eval_base_url:
@@ -196,18 +200,29 @@ class RAGEvaluator:
        else:
            logger.info(" • Endpoint: OpenAI Official API")

        logger.info(" Concurrency & Rate Limiting:")
        logger.info("")
        logger.info("Concurrency & Rate Limiting:")
        max_concurrent = int(os.getenv("EVAL_MAX_CONCURRENT", "1"))
        query_top_k = int(os.getenv("EVAL_QUERY_TOP_K", "10"))
        logger.info(
            " • Max Concurrent: %s %s",
            max_concurrent,
            "(serial evaluation)" if max_concurrent == 1 else "parallel evaluations",
        )
        logger.info(" • Query Top-K: %s Entities/Relations", query_top_k)
        logger.info(" • LLM Max Retries: %s", self.eval_max_retries)
        logger.info(" • LLM Timeout: %s seconds", self.eval_timeout)

        logger.info(" Test Configuration:")
        logger.info("")
        logger.info("Test Configuration:")
        logger.info(" • Total Test Cases: %s", len(self.test_cases))
        logger.info(" • Test Dataset: %s", self.test_dataset_path.name)
        logger.info(" • LightRAG API: %s", self.rag_api_url)
        logger.info(" • Results Directory: %s", self.results_dir.name)

        logger.info("%s", "=" * 70)
        logger.info("")

    def _load_test_dataset(self) -> List[Dict[str, str]]:
        """Load test cases from JSON file"""
        if not self.test_dataset_path.exists():
@@ -380,16 +395,14 @@ class RAGEvaluator:
        )

        # Run RAGAS evaluation
        # IMPORTANT: Create fresh metric instances for each evaluation to avoid
        # concurrent state conflicts when multiple tasks run in parallel
        try:
            eval_results = evaluate(
                dataset=eval_dataset,
                metrics=[
                    Faithfulness(),
                    AnswerRelevancy(),
                    ContextRecall(),
                    ContextPrecision(),
                    faithfulness,
                    answer_relevancy,
                    context_recall,
                    context_precision,
                ],
                llm=self.eval_llm,
                embeddings=self.eval_embeddings,
@@ -465,6 +478,7 @@ class RAGEvaluator:
        logger.info("🚀 Starting RAGAS Evaluation of Portfolio RAG System")
        logger.info("🔧 Concurrent evaluations: %s", max_async)
        logger.info("%s", "=" * 70)
        logger.info("")

        # Create semaphore to limit concurrent evaluations
        semaphore = asyncio.Semaphore(max_async)
@@ -756,11 +770,12 @@ class RAGEvaluator:

        # Add a small delay to ensure all buffered output is completely written
        await asyncio.sleep(0.2)

        # Flush all output buffers to ensure RAGAS progress bars are fully displayed
        # before showing our results table
        sys.stdout.flush()
        sys.stderr.flush()

        await asyncio.sleep(0.2)
        # Make sure the progress bar line ends before logging summary output
        sys.stderr.write("\n")
        sys.stderr.flush()
@@ -852,9 +867,15 @@ async def main():
    if len(sys.argv) > 1:
        rag_api_url = sys.argv[1]

    logger.info("")
    logger.info("%s", "=" * 70)
    logger.info("🔍 RAGAS Evaluation - Using Real LightRAG API")
    logger.info("%s", "=" * 70)
    if rag_api_url:
        logger.info("📡 RAG API URL: %s", rag_api_url)
    else:
        logger.info("📡 RAG API URL: http://localhost:9621 (default)")
    logger.info("%s", "=" * 70)

    evaluator = RAGEvaluator(rag_api_url=rag_api_url)
    await evaluator.run()

lightrag/evaluation/sample_dataset.json
@@ -3,17 +3,17 @@
    {
      "question": "How does LightRAG solve the hallucination problem in large language models?",
      "ground_truth": "LightRAG solves the hallucination problem by combining large language models with external knowledge retrieval. The framework ensures accurate responses by grounding LLM outputs in actual documents. LightRAG provides contextual responses that reduce hallucinations significantly.",
      "context": "lightrag_overview"
      "project": "lightrag_overview"
    },
    {
      "question": "What are the three main components required in a RAG system?",
      "ground_truth": "A RAG system requires three main components: a retrieval system (vector database or search engine) to find relevant documents, an embedding model to convert text into vector representations for similarity search, and a large language model (LLM) to generate responses based on retrieved context.",
      "context": "rag_architecture"
      "project": "rag_architecture"
    },
    {
      "question": "How does LightRAG's retrieval performance compare to traditional RAG approaches?",
      "ground_truth": "LightRAG delivers faster retrieval performance than traditional RAG approaches. The framework optimizes document retrieval operations for speed, while traditional RAG systems often suffer from slow query response times. LightRAG achieves high quality results with improved performance.",
      "context": "lightrag_improvements"
      "project": "lightrag_improvements"
    }
  ]
}