Add comprehensive configuration and compatibility fixes for RAGAS
- Fix RAGAS LLM wrapper compatibility
- Add concurrency control for rate limits
- Add eval env vars for model config
- Improve error handling and logging
- Update documentation with examples
parent 72db042667
commit 7abc687742

4 changed files with 438 additions and 70 deletions

env.example (23 changed lines)
@@ -394,3 +394,26 @@ MEMGRAPH_USERNAME=
MEMGRAPH_PASSWORD=
MEMGRAPH_DATABASE=memgraph
# MEMGRAPH_WORKSPACE=forced_workspace_name

############################
### Evaluation Configuration
############################
### RAGAS evaluation models (used for RAG quality assessment)
### Default uses OpenAI models for evaluation
# EVAL_LLM_MODEL=gpt-4.1
# EVAL_EMBEDDING_MODEL=text-embedding-3-large
### API key for evaluation (falls back to OPENAI_API_KEY if not set)
# EVAL_LLM_BINDING_API_KEY=your_api_key
### Custom endpoint for evaluation models (optional, for OpenAI-compatible services)
# EVAL_LLM_BINDING_HOST=https://api.openai.com/v1

### Evaluation concurrency and rate limiting
### Number of concurrent test case evaluations (default: 1 for serial evaluation)
### Lower values reduce API rate limit issues but increase evaluation time
# EVAL_MAX_CONCURRENT=3
### TOP_K query parameter of LightRAG (default: 10)
### Number of entities or relations retrieved from the KG
# EVAL_QUERY_TOP_K=10
### LLM request retry and timeout settings for evaluation
# EVAL_LLM_MAX_RETRIES=5
# EVAL_LLM_TIMEOUT=120

@@ -89,6 +89,81 @@ results/

---

## ⚙️ Configuration

### Environment Variables

The evaluation framework supports customization through environment variables:

| Variable | Default | Description |
|----------|---------|-------------|
| `EVAL_LLM_MODEL` | `gpt-4.1` | LLM model used for RAGAS evaluation |
| `EVAL_EMBEDDING_MODEL` | `text-embedding-3-large` | Embedding model for evaluation |
| `EVAL_LLM_BINDING_API_KEY` | (falls back to `OPENAI_API_KEY`) | API key for evaluation models |
| `EVAL_LLM_BINDING_HOST` | (optional) | Custom endpoint URL for OpenAI-compatible services |
| `EVAL_MAX_CONCURRENT` | `1` | Number of concurrent test case evaluations (1 = serial) |
| `EVAL_QUERY_TOP_K` | `10` | Number of entities/relations retrieved per query (LightRAG `top_k`) |
| `EVAL_LLM_MAX_RETRIES` | `5` | Maximum LLM request retries |
| `EVAL_LLM_TIMEOUT` | `180` | LLM request timeout in seconds |
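
For orientation, a minimal sketch of how `eval_rag_quality.py` resolves these settings (condensed from the script; treat the defaults as indicative of this version):

```python
import os

# Resolve evaluation settings with the documented fallbacks (sketch).
eval_api_key = os.getenv("EVAL_LLM_BINDING_API_KEY") or os.getenv("OPENAI_API_KEY")
if not eval_api_key:
    raise EnvironmentError("Set EVAL_LLM_BINDING_API_KEY or OPENAI_API_KEY")

eval_model = os.getenv("EVAL_LLM_MODEL", "gpt-4.1")
eval_embedding_model = os.getenv("EVAL_EMBEDDING_MODEL", "text-embedding-3-large")
eval_base_url = os.getenv("EVAL_LLM_BINDING_HOST")  # optional custom endpoint
max_concurrent = int(os.getenv("EVAL_MAX_CONCURRENT", "1"))
```
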
### Usage Examples

**Default Configuration (OpenAI):**
```bash
export OPENAI_API_KEY=sk-xxx
python lightrag/evaluation/eval_rag_quality.py
```

**Custom Model:**
```bash
export OPENAI_API_KEY=sk-xxx
export EVAL_LLM_MODEL=gpt-4.1
export EVAL_EMBEDDING_MODEL=text-embedding-3-large
python lightrag/evaluation/eval_rag_quality.py
```

**OpenAI-Compatible Endpoint:**
```bash
export EVAL_LLM_BINDING_API_KEY=your-custom-key
export EVAL_LLM_BINDING_HOST=https://api.openai.com/v1
export EVAL_LLM_MODEL=qwen-plus
python lightrag/evaluation/eval_rag_quality.py
```

### Concurrency Control & Rate Limiting

The evaluation framework includes built-in concurrency control to prevent API rate-limit issues (a minimal sketch of the mechanism follows this list):

**Why Concurrency Control Matters:**
- RAGAS internally makes many concurrent LLM calls for each test case
- The Context Precision metric calls the LLM once per retrieved document
- Without control, this can easily exceed API rate limits
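
The control itself is a semaphore gate around each test-case evaluation, roughly as sketched below (simplified from `eval_rag_quality.py`; `evaluate_case` and `run_all` are illustrative stand-ins for the real methods):

```python
import asyncio
import os

async def evaluate_case(idx: int, case: dict, semaphore: asyncio.Semaphore) -> dict:
    # Only EVAL_MAX_CONCURRENT evaluations may hold the semaphore at once;
    # everything inside this block (LightRAG query + RAGAS metrics) is gated.
    async with semaphore:
        ...  # query LightRAG, then score the answer with RAGAS
        return {"test_number": idx}

async def run_all(test_cases: list[dict]) -> list[dict]:
    max_concurrent = int(os.getenv("EVAL_MAX_CONCURRENT", "1"))
    semaphore = asyncio.Semaphore(max_concurrent)
    tasks = [
        evaluate_case(idx, case, semaphore)
        for idx, case in enumerate(test_cases, 1)
    ]
    return await asyncio.gather(*tasks)
```
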
**Default Configuration (Conservative):**
```bash
EVAL_MAX_CONCURRENT=1    # Serial evaluation (one test at a time)
EVAL_QUERY_TOP_K=10      # TOP_K query parameter of LightRAG
EVAL_LLM_MAX_RETRIES=5   # Retry failed requests 5 times
EVAL_LLM_TIMEOUT=180     # 3-minute timeout per request
```

**If You Have Higher API Quotas:**
```bash
EVAL_MAX_CONCURRENT=2    # Evaluate 2 tests in parallel
EVAL_QUERY_TOP_K=20      # TOP_K query parameter of LightRAG
```

**Common Issues and Solutions:**

| Issue | Solution |
|-------|----------|
| **Warning: "LM returned 1 generations instead of 3"** | Reduce `EVAL_MAX_CONCURRENT` to 1 or decrease `EVAL_QUERY_TOP_K` |
| **Context Precision returns NaN** | Lower `EVAL_QUERY_TOP_K` to reduce LLM calls per test case |
| **Rate limit errors (429)** | Increase `EVAL_LLM_MAX_RETRIES` and decrease `EVAL_MAX_CONCURRENT` |
| **Request timeouts** | Increase `EVAL_LLM_TIMEOUT` beyond 180 seconds |

---

## 📝 Test Dataset

`sample_dataset.json` contains 3 generic questions about LightRAG. Replace them with questions that match YOUR indexed documents.
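
Each entry only needs the three keys used by the bundled file (`question`, `ground_truth`, `project`); a hypothetical replacement entry, written here as the Python dict that would be serialized into the JSON array:

```python
# Hypothetical test case; "project" is a free-form label copied into the results output.
test_case = {
    "question": "How does our ingestion pipeline deduplicate documents?",
    "ground_truth": "An answer you consider correct, grounded in YOUR indexed documents.",
    "project": "my_project_docs",
}
```
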
@@ -166,6 +241,50 @@ results/
pip install ragas datasets
```

### "Warning: LM returned 1 generations instead of requested 3" or Context Precision NaN
|
||||
|
||||
**Cause**: This warning indicates API rate limiting or concurrent request overload:
|
||||
- RAGAS makes multiple LLM calls per test case (faithfulness, relevancy, recall, precision)
|
||||
- Context Precision calls LLM once per retrieved document (with `EVAL_QUERY_TOP_K=10`, that's 10 calls)
|
||||
- Concurrent evaluation multiplies these calls: `EVAL_MAX_CONCURRENT × LLM calls per test`
|
||||
|
||||
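
For example, with `EVAL_MAX_CONCURRENT=3` and `EVAL_QUERY_TOP_K=10`, Context Precision alone can have roughly 3 × 10 = 30 LLM requests in flight at once, before counting the other metrics.
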
**Solutions** (in order of effectiveness):

1. **Serial Evaluation** (Default):
   ```bash
   export EVAL_MAX_CONCURRENT=1
   python lightrag/evaluation/eval_rag_quality.py
   ```

2. **Reduce Retrieved Documents**:
   ```bash
   export EVAL_QUERY_TOP_K=5  # Halves Context Precision LLM calls
   python lightrag/evaluation/eval_rag_quality.py
   ```

3. **Increase Retry & Timeout**:
   ```bash
   export EVAL_LLM_MAX_RETRIES=10
   export EVAL_LLM_TIMEOUT=180
   python lightrag/evaluation/eval_rag_quality.py
   ```

4. **Use Higher Quota API** (if available):
   - Upgrade to OpenAI Tier 2+ for higher RPM limits
   - Use a self-hosted OpenAI-compatible service with no rate limits

### "AttributeError: 'InstructorLLM' object has no attribute 'agenerate_prompt'" or NaN results
|
||||
|
||||
This error occurs with RAGAS 0.3.x when LLM and Embeddings are not explicitly configured. The evaluation framework now handles this automatically by:
|
||||
- Using environment variables to configure evaluation models
|
||||
- Creating proper LLM and Embeddings instances for RAGAS
|
||||
|
||||
**Solution**: Ensure you have set one of the following:
|
||||
- `OPENAI_API_KEY` environment variable (default)
|
||||
- `EVAL_LLM_BINDING_API_KEY` for custom API key
|
||||
|
||||
The framework will automatically configure the evaluation models.
|
||||
|
||||
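
For context, that automatic configuration amounts to roughly the following sketch (condensed from `eval_rag_quality.py`; the single-row dataset and its column names are illustrative placeholders):

```python
import os

from datasets import Dataset
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from ragas import evaluate
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import answer_relevancy, context_precision, context_recall, faithfulness

api_key = os.getenv("EVAL_LLM_BINDING_API_KEY") or os.getenv("OPENAI_API_KEY")

# Explicit LLM/embeddings instances keep RAGAS 0.3.x from falling back to its internal InstructorLLM.
base_llm = ChatOpenAI(model="gpt-4.1", api_key=api_key)
eval_llm = LangchainLLMWrapper(langchain_llm=base_llm, bypass_n=True)  # wrapped in try/except in the real script
eval_embeddings = OpenAIEmbeddings(model="text-embedding-3-large", api_key=api_key)

# Illustrative single-row dataset using the conventional RAGAS column names.
eval_dataset = Dataset.from_dict(
    {
        "question": ["What does LightRAG retrieve for a query?"],
        "answer": ["Entities and relations from the knowledge graph."],
        "contexts": [["...retrieved chunk text..."]],
        "ground_truth": ["Entities and relations from the knowledge graph."],
    }
)

scores = evaluate(
    eval_dataset,
    metrics=[faithfulness, answer_relevancy, context_recall, context_precision],
    llm=eval_llm,
    embeddings=eval_embeddings,
)
```
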
### "No sample_dataset.json found"
|
||||
|
||||
Make sure you're running from the project root:
|
||||
|
|
|
|||
|
|
@@ -16,6 +16,12 @@ Usage:
Results are saved to: lightrag/evaluation/results/
- results_YYYYMMDD_HHMMSS.csv (CSV export for analysis)
- results_YYYYMMDD_HHMMSS.json (Full results with details)

Note on Custom OpenAI-Compatible Endpoints:
This script uses bypass_n=True mode for answer_relevancy metric to ensure
compatibility with custom endpoints that may not support OpenAI's 'n' parameter
for multiple completions. This generates multiple outputs through repeated prompts
instead, maintaining evaluation quality while supporting broader endpoint compatibility.
"""

import asyncio

@@ -51,12 +57,16 @@ try:
        context_recall,
        faithfulness,
    )
    from ragas.llms import LangchainLLMWrapper
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings

    RAGAS_AVAILABLE = True

except ImportError:
    RAGAS_AVAILABLE = False
    Dataset = None
    evaluate = None
    LangchainLLMWrapper = None


CONNECT_TIMEOUT_SECONDS = 180.0

@@ -81,10 +91,15 @@ class RAGEvaluator:
            rag_api_url: Base URL of LightRAG API (e.g., http://localhost:9621)
                If None, will try to read from environment or use default

        Environment Variables:
            EVAL_LLM_MODEL: LLM model for evaluation (default: gpt-4.1)
            EVAL_EMBEDDING_MODEL: Embedding model for evaluation (default: text-embedding-3-large)
            EVAL_LLM_BINDING_API_KEY: API key for evaluation models (fallback to OPENAI_API_KEY)
            EVAL_LLM_BINDING_HOST: Custom endpoint URL for evaluation models (optional)

        Raises:
            ImportError: If ragas or datasets packages are not installed
            ValueError: If LLM_BINDING is not set to 'openai'
            EnvironmentError: If LLM_BINDING_API_KEY is not set
            EnvironmentError: If EVAL_LLM_BINDING_API_KEY and OPENAI_API_KEY are both not set
        """
        # Validate RAGAS dependencies are installed
        if not RAGAS_AVAILABLE:

@@ -93,25 +108,56 @@ class RAGEvaluator:
                "Install with: pip install ragas datasets"
            )

        # Validate LLM_BINDING is set to openai (required for RAGAS)
        llm_binding = os.getenv("LLM_BINDING", "").lower()
        if llm_binding != "openai":
            raise ValueError(
                f"LLM_BINDING must be set to 'openai' for RAGAS evaluation. "
                f"Current value: '{llm_binding or '(not set)'}'"
            )

        # Validate LLM_BINDING_API_KEY exists
        llm_binding_key = os.getenv("LLM_BINDING_API_KEY")
        if not llm_binding_key:
        # Configure evaluation models (for RAGAS scoring)
        eval_api_key = os.getenv("EVAL_LLM_BINDING_API_KEY") or os.getenv(
            "OPENAI_API_KEY"
        )
        if not eval_api_key:
            raise EnvironmentError(
                "LLM_BINDING_API_KEY environment variable is not set. "
                "This is required for RAGAS evaluation."
                "EVAL_LLM_BINDING_API_KEY or OPENAI_API_KEY is required for evaluation. "
                "Set EVAL_LLM_BINDING_API_KEY to use a custom API key, "
                "or ensure OPENAI_API_KEY is set."
            )

        # Set OPENAI_API_KEY from LLM_BINDING_API_KEY for RAGAS
        os.environ["OPENAI_API_KEY"] = llm_binding_key
        logger.info("✅ LLM_BINDING: openai")
        eval_model = os.getenv("EVAL_LLM_MODEL", "gpt-4.1")
        eval_embedding_model = os.getenv(
            "EVAL_EMBEDDING_MODEL", "text-embedding-3-large"
        )
        eval_base_url = os.getenv("EVAL_LLM_BINDING_HOST")

        # Create LLM and Embeddings instances for RAGAS
        llm_kwargs = {
            "model": eval_model,
            "api_key": eval_api_key,
            "max_retries": int(os.getenv("EVAL_LLM_MAX_RETRIES", "5")),
            "request_timeout": int(os.getenv("EVAL_LLM_TIMEOUT", "180")),
        }
        embedding_kwargs = {"model": eval_embedding_model, "api_key": eval_api_key}

        if eval_base_url:
            llm_kwargs["base_url"] = eval_base_url
            embedding_kwargs["base_url"] = eval_base_url

        # Create base LangChain LLM
        base_llm = ChatOpenAI(**llm_kwargs)
        self.eval_embeddings = OpenAIEmbeddings(**embedding_kwargs)

        # Wrap LLM with LangchainLLMWrapper and enable bypass_n mode for custom endpoints
        # This ensures compatibility with endpoints that don't support the 'n' parameter
        # by generating multiple outputs through repeated prompts instead of using 'n' parameter
        try:
            self.eval_llm = LangchainLLMWrapper(
                langchain_llm=base_llm,
                bypass_n=True,  # Enable bypass_n to avoid passing 'n' to OpenAI API
            )
            logger.debug("Successfully configured bypass_n mode for LLM wrapper")
        except Exception as e:
            logger.warning(
                "Could not configure LangchainLLMWrapper with bypass_n: %s. "
                "Using base LLM directly, which may cause warnings with custom endpoints.",
                e,
            )
            self.eval_llm = base_llm

        if test_dataset_path is None:
            test_dataset_path = Path(__file__).parent / "sample_dataset.json"

@@ -127,6 +173,56 @@ class RAGEvaluator:
        # Load test dataset
        self.test_cases = self._load_test_dataset()

        # Store configuration values for display
        self.eval_model = eval_model
        self.eval_embedding_model = eval_embedding_model
        self.eval_base_url = eval_base_url
        self.eval_max_retries = llm_kwargs["max_retries"]
        self.eval_timeout = llm_kwargs["request_timeout"]

        # Display configuration
        self._display_configuration()

    def _display_configuration(self):
        """Display all evaluation configuration settings"""
        logger.info("")
        logger.info("%s", "=" * 70)
        logger.info("🔧 EVALUATION CONFIGURATION")
        logger.info("%s", "=" * 70)

        logger.info("")
        logger.info("Evaluation Models:")
        logger.info(" • LLM Model: %s", self.eval_model)
        logger.info(" • Embedding Model: %s", self.eval_embedding_model)
        if self.eval_base_url:
            logger.info(" • Custom Endpoint: %s", self.eval_base_url)
            logger.info(" • Bypass N-Parameter: Enabled (for compatibility)")
        else:
            logger.info(" • Endpoint: OpenAI Official API")

        logger.info("")
        logger.info("Concurrency & Rate Limiting:")
        max_concurrent = int(os.getenv("EVAL_MAX_CONCURRENT", "1"))
        query_top_k = int(os.getenv("EVAL_QUERY_TOP_K", "10"))
        logger.info(
            " • Max Concurrent: %s %s",
            max_concurrent,
            "(serial evaluation)" if max_concurrent == 1 else "parallel evaluations",
        )
        logger.info(" • Query Top-K: %s Entities/Relations", query_top_k)
        logger.info(" • LLM Max Retries: %s", self.eval_max_retries)
        logger.info(" • LLM Timeout: %s seconds", self.eval_timeout)

        logger.info("")
        logger.info("Test Configuration:")
        logger.info(" • Total Test Cases: %s", len(self.test_cases))
        logger.info(" • Test Dataset: %s", self.test_dataset_path.name)
        logger.info(" • LightRAG API: %s", self.rag_api_url)
        logger.info(" • Results Directory: %s", self.results_dir.name)

        logger.info("%s", "=" * 70)
        logger.info("")

    def _load_test_dataset(self) -> List[Dict[str, str]]:
        """Load test cases from JSON file"""
        if not self.test_dataset_path.exists():

@@ -163,12 +259,12 @@ class RAGEvaluator:
            "include_references": True,
            "include_chunk_content": True,  # NEW: Request chunk content in references
            "response_type": "Multiple Paragraphs",
            "top_k": 10,
            "top_k": int(os.getenv("EVAL_QUERY_TOP_K", "10")),
        }

        # Get API key from environment for authentication
        api_key = os.getenv("LIGHTRAG_API_KEY")

        # Prepare headers with optional authentication
        headers = {}
        if api_key:

@@ -244,6 +340,7 @@ class RAGEvaluator:
        test_case: Dict[str, str],
        semaphore: asyncio.Semaphore,
        client: httpx.AsyncClient,
        progress_counter: Dict[str, int],
    ) -> Dict[str, Any]:
        """
        Evaluate a single test case with concurrency control

@@ -253,34 +350,39 @@ class RAGEvaluator:
            test_case: Test case dictionary with question and ground_truth
            semaphore: Semaphore to control concurrency
            client: Shared httpx AsyncClient for connection pooling
            progress_counter: Shared dictionary for progress tracking

        Returns:
            Evaluation result dictionary
        """
        total_cases = len(self.test_cases)

        async with semaphore:
            question = test_case["question"]
            ground_truth = test_case["ground_truth"]

            logger.info("[%s/%s] Evaluating: %s...", idx, total_cases, question[:60])

            # Generate RAG response by calling actual LightRAG API
            rag_response = await self.generate_rag_response(
                question=question, client=client
            )
            try:
                rag_response = await self.generate_rag_response(
                    question=question, client=client
                )
            except Exception as e:
                logger.error("Error generating response for test %s: %s", idx, str(e))
                progress_counter["completed"] += 1
                return {
                    "test_number": idx,
                    "question": question,
                    "error": str(e),
                    "metrics": {},
                    "ragas_score": 0,
                    "timestamp": datetime.now().isoformat(),
                }

            # *** CRITICAL FIX: Use actual retrieved contexts, NOT ground_truth ***
            retrieved_contexts = rag_response["contexts"]

            # DEBUG: Print what was actually retrieved
            logger.debug("📝 Retrieved %s contexts", len(retrieved_contexts))
            if retrieved_contexts:
                logger.debug(
                    "📄 First context preview: %s...", retrieved_contexts[0][:100]
                )
            else:
                logger.warning("⚠️ No contexts retrieved!")
            # DEBUG: Print what was actually retrieved (only in debug mode)
            logger.debug(
                "📝 Test %s: Retrieved %s contexts", idx, len(retrieved_contexts)
            )

            # Prepare dataset for RAGAS evaluation with CORRECT contexts
            eval_dataset = Dataset.from_dict(

@@ -302,6 +404,8 @@ class RAGEvaluator:
                        context_recall,
                        context_precision,
                    ],
                    llm=self.eval_llm,
                    embeddings=self.eval_embeddings,
                )

                # Convert to DataFrame (RAGAS v0.3+ API)

@@ -312,6 +416,7 @@ class RAGEvaluator:

                # Extract scores (RAGAS v0.3+ uses .to_pandas())
                result = {
                    "test_number": idx,
                    "question": question,
                    "answer": rag_response["answer"][:200] + "..."
                    if len(rag_response["answer"]) > 200

@@ -319,7 +424,7 @@ class RAGEvaluator:
                    "ground_truth": ground_truth[:200] + "..."
                    if len(ground_truth) > 200
                    else ground_truth,
                    "project": test_case.get("project_context", "unknown"),
                    "project": test_case.get("project", "unknown"),
                    "metrics": {
                        "faithfulness": float(scores_row.get("faithfulness", 0)),
                        "answer_relevance": float(

@@ -333,22 +438,24 @@ class RAGEvaluator:
                    "timestamp": datetime.now().isoformat(),
                }

                # Calculate RAGAS score (average of all metrics)
                # Calculate RAGAS score (average of all metrics, excluding NaN values)
                metrics = result["metrics"]
                ragas_score = sum(metrics.values()) / len(metrics) if metrics else 0
                valid_metrics = [v for v in metrics.values() if not _is_nan(v)]
                ragas_score = (
                    sum(valid_metrics) / len(valid_metrics) if valid_metrics else 0
                )
                result["ragas_score"] = round(ragas_score, 4)

                logger.info("✅ Faithfulness: %.4f", metrics["faithfulness"])
                logger.info("✅ Answer Relevance: %.4f", metrics["answer_relevance"])
                logger.info("✅ Context Recall: %.4f", metrics["context_recall"])
                logger.info("✅ Context Precision: %.4f", metrics["context_precision"])
                logger.info("📊 RAGAS Score: %.4f", result["ragas_score"])
                # Update progress counter
                progress_counter["completed"] += 1

                return result

            except Exception as e:
                logger.exception("❌ Error evaluating: %s", e)
                logger.error("Error evaluating test %s: %s", idx, str(e))
                progress_counter["completed"] += 1
                return {
                    "test_number": idx,
                    "question": question,
                    "error": str(e),
                    "metrics": {},

@@ -363,18 +470,22 @@ class RAGEvaluator:
        Returns:
            List of evaluation results with metrics
        """
        # Get MAX_ASYNC from environment (default to 4 if not set)
        max_async = int(os.getenv("MAX_ASYNC", "4"))
        # Get evaluation concurrency from environment (default to 1 for serial evaluation)
        max_async = int(os.getenv("EVAL_MAX_CONCURRENT", "3"))

        logger.info("")
        logger.info("%s", "=" * 70)
        logger.info("🚀 Starting RAGAS Evaluation of Portfolio RAG System")
        logger.info("🔧 Parallel evaluations: %s", max_async)
        logger.info("🔧 Concurrent evaluations: %s", max_async)
        logger.info("%s", "=" * 70)
        logger.info("")

        # Create semaphore to limit concurrent evaluations
        semaphore = asyncio.Semaphore(max_async)

        # Create progress counter (shared across all tasks)
        progress_counter = {"completed": 0}

        # Create shared HTTP client with connection pooling and proper timeouts
        # Timeout: 3 minutes for connect, 5 minutes for read (LLM can be slow)
        timeout = httpx.Timeout(

@@ -390,7 +501,9 @@ class RAGEvaluator:
        async with httpx.AsyncClient(timeout=timeout, limits=limits) as client:
            # Create tasks for all test cases
            tasks = [
                self.evaluate_single_case(idx, test_case, semaphore, client)
                self.evaluate_single_case(
                    idx, test_case, semaphore, client, progress_counter
                )
                for idx, test_case in enumerate(self.test_cases, 1)
            ]

@@ -459,6 +572,95 @@ class RAGEvaluator:

        return csv_path

    def _format_metric(self, value: float, width: int = 6) -> str:
        """
        Format a metric value for display, handling NaN gracefully

        Args:
            value: The metric value to format
            width: The width of the formatted string

        Returns:
            Formatted string (e.g., "0.8523" or " N/A ")
        """
        if _is_nan(value):
            return "N/A".center(width)
        return f"{value:.4f}".rjust(width)

    def _display_results_table(self, results: List[Dict[str, Any]]):
        """
        Display evaluation results in a formatted table

        Args:
            results: List of evaluation results
        """
        logger.info("")
        logger.info("%s", "=" * 115)
        logger.info("📊 EVALUATION RESULTS SUMMARY")
        logger.info("%s", "=" * 115)

        # Table header
        logger.info(
            "%-4s | %-50s | %6s | %7s | %6s | %7s | %6s | %6s",
            "#",
            "Question",
            "Faith",
            "AnswRel",
            "CtxRec",
            "CtxPrec",
            "RAGAS",
            "Status",
        )
        logger.info("%s", "-" * 115)

        # Table rows
        for result in results:
            test_num = result.get("test_number", 0)
            question = result.get("question", "")
            # Truncate question to 50 chars
            question_display = (
                (question[:47] + "...") if len(question) > 50 else question
            )

            metrics = result.get("metrics", {})
            if metrics:
                # Success case - format each metric, handling NaN values
                faith = metrics.get("faithfulness", 0)
                ans_rel = metrics.get("answer_relevance", 0)
                ctx_rec = metrics.get("context_recall", 0)
                ctx_prec = metrics.get("context_precision", 0)
                ragas = result.get("ragas_score", 0)
                status = "✓"

                logger.info(
                    "%-4d | %-50s | %s | %s | %s | %s | %s | %6s",
                    test_num,
                    question_display,
                    self._format_metric(faith, 6),
                    self._format_metric(ans_rel, 7),
                    self._format_metric(ctx_rec, 6),
                    self._format_metric(ctx_prec, 7),
                    self._format_metric(ragas, 6),
                    status,
                )
            else:
                # Error case
                error = result.get("error", "Unknown error")
                error_display = (error[:20] + "...") if len(error) > 23 else error
                logger.info(
                    "%-4d | %-50s | %6s | %7s | %6s | %7s | %6s | ✗ %s",
                    test_num,
                    question_display,
                    "N/A",
                    "N/A",
                    "N/A",
                    "N/A",
                    "N/A",
                    error_display,
                )

        logger.info("%s", "=" * 115)

    def _calculate_benchmark_stats(
        self, results: List[Dict[str, Any]]
    ) -> Dict[str, Any]:

@@ -485,45 +687,55 @@ class RAGEvaluator:
                "success_rate": 0.0,
            }

        # Calculate averages for each metric (handling NaN values)
        metrics_sum = {
            "faithfulness": 0.0,
            "answer_relevance": 0.0,
            "context_recall": 0.0,
            "context_precision": 0.0,
            "ragas_score": 0.0,
        # Calculate averages for each metric (handling NaN values correctly)
        # Track both sum and count for each metric to handle NaN values properly
        metrics_data = {
            "faithfulness": {"sum": 0.0, "count": 0},
            "answer_relevance": {"sum": 0.0, "count": 0},
            "context_recall": {"sum": 0.0, "count": 0},
            "context_precision": {"sum": 0.0, "count": 0},
            "ragas_score": {"sum": 0.0, "count": 0},
        }

        for result in valid_results:
            metrics = result.get("metrics", {})
            # Skip NaN values when summing

            # For each metric, sum non-NaN values and count them
            faithfulness = metrics.get("faithfulness", 0)
            if not _is_nan(faithfulness):
                metrics_sum["faithfulness"] += faithfulness
                metrics_data["faithfulness"]["sum"] += faithfulness
                metrics_data["faithfulness"]["count"] += 1

            answer_relevance = metrics.get("answer_relevance", 0)
            if not _is_nan(answer_relevance):
                metrics_sum["answer_relevance"] += answer_relevance
                metrics_data["answer_relevance"]["sum"] += answer_relevance
                metrics_data["answer_relevance"]["count"] += 1

            context_recall = metrics.get("context_recall", 0)
            if not _is_nan(context_recall):
                metrics_sum["context_recall"] += context_recall
                metrics_data["context_recall"]["sum"] += context_recall
                metrics_data["context_recall"]["count"] += 1

            context_precision = metrics.get("context_precision", 0)
            if not _is_nan(context_precision):
                metrics_sum["context_precision"] += context_precision
                metrics_data["context_precision"]["sum"] += context_precision
                metrics_data["context_precision"]["count"] += 1

            ragas_score = result.get("ragas_score", 0)
            if not _is_nan(ragas_score):
                metrics_sum["ragas_score"] += ragas_score
                metrics_data["ragas_score"]["sum"] += ragas_score
                metrics_data["ragas_score"]["count"] += 1

        # Calculate averages
        n = len(valid_results)
        # Calculate averages using actual counts for each metric
        avg_metrics = {}
        for k, v in metrics_sum.items():
            avg_val = v / n if n > 0 else 0
            # Handle NaN in average
            avg_metrics[k] = round(avg_val, 4) if not _is_nan(avg_val) else 0.0
        for metric_name, data in metrics_data.items():
            if data["count"] > 0:
                avg_val = data["sum"] / data["count"]
                avg_metrics[metric_name] = (
                    round(avg_val, 4) if not _is_nan(avg_val) else 0.0
                )
            else:
                avg_metrics[metric_name] = 0.0

        # Find min and max RAGAS scores (filter out NaN)
        ragas_scores = []

@@ -556,6 +768,20 @@ class RAGEvaluator:

        elapsed_time = time.time() - start_time

        # Add a small delay to ensure all buffered output is completely written
        await asyncio.sleep(0.2)

        # Flush all output buffers to ensure RAGAS progress bars are fully displayed
        # before showing our results table
        sys.stdout.flush()
        sys.stderr.flush()
        # Make sure the progress bar line ends before logging summary output
        sys.stderr.write("\n")
        sys.stderr.flush()

        # Display results table
        self._display_results_table(results)

        # Calculate benchmark statistics
        benchmark_stats = self._calculate_benchmark_stats(results)

@@ -3,17 +3,17 @@
        {
            "question": "How does LightRAG solve the hallucination problem in large language models?",
            "ground_truth": "LightRAG solves the hallucination problem by combining large language models with external knowledge retrieval. The framework ensures accurate responses by grounding LLM outputs in actual documents. LightRAG provides contextual responses that reduce hallucinations significantly.",
            "context": "lightrag_overview"
            "project": "lightrag_overview"
        },
        {
            "question": "What are the three main components required in a RAG system?",
            "ground_truth": "A RAG system requires three main components: a retrieval system (vector database or search engine) to find relevant documents, an embedding model to convert text into vector representations for similarity search, and a large language model (LLM) to generate responses based on retrieved context.",
            "context": "rag_architecture"
            "project": "rag_architecture"
        },
        {
            "question": "How does LightRAG's retrieval performance compare to traditional RAG approaches?",
            "ground_truth": "LightRAG delivers faster retrieval performance than traditional RAG approaches. The framework optimizes document retrieval operations for speed, while traditional RAG systems often suffer from slow query response times. LightRAG achieves high quality results with improved performance.",
            "context": "lightrag_improvements"
            "project": "lightrag_improvements"
        }
    ]
}