# RAG Evaluation Metrics and Deployment

## Key RAG Evaluation Metrics

RAG system quality is measured through four key metrics: faithfulness, answer relevance, context recall, and context precision.

### Faithfulness Metric

Faithfulness measures whether answers are factually grounded in the retrieved context. By checking each claim in a generated response against the source documents, the metric detects hallucinations: high faithfulness scores indicate answers based on actual document content rather than invented facts.
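One common formulation, used by evaluation frameworks such as RAGAS, scores faithfulness as the fraction of claims in the answer that the retrieved context supports:

$$
\text{Faithfulness} = \frac{|\text{claims in the answer supported by the retrieved context}|}{|\text{claims in the answer}|}
$$

A score of 1.0 means every statement in the answer can be traced back to the context.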

### Answer Relevance Metric

Answer Relevance measures how directly an answer addresses the user's question. The metric assesses the connection between the question and the generated response: high answer relevance scores indicate responses that answer the query without digression or padding.
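In RAGAS-style evaluation, answer relevance is typically estimated by generating $N$ synthetic questions from the answer and averaging their embedding similarity to the original question, where $E_{g_i}$ is the embedding of the $i$-th generated question and $E_o$ that of the original:

$$
\text{Answer Relevance} = \frac{1}{N} \sum_{i=1}^{N} \cos\left(E_{g_i}, E_o\right)
$$

An answer that wanders off-topic yields generated questions unlike the original, pulling the score down.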

### Context Recall Metric

Context Recall measures the completeness of retrieval: whether all of the information needed to answer the question was actually pulled from the documents. High context recall scores indicate that the retrieval system surfaced every relevant passage rather than missing key material.
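A common way to score this (again following RAGAS) is against a ground-truth answer: each ground-truth statement is checked for support in the retrieved context:

$$
\text{Context Recall} = \frac{|\text{ground-truth statements attributable to the retrieved context}|}{|\text{ground-truth statements}|}
$$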

### Context Precision Metric

Context Precision measures the quality of what was retrieved: how much of the returned context is actually relevant to the question. High context precision scores indicate clean, selective retrieval without irrelevant chunks diluting the context window.
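RAGAS-style context precision rewards ranking relevant chunks near the top of the retrieved list; with $v_k \in \{0, 1\}$ marking whether the chunk at rank $k$ is relevant:

$$
\text{Context Precision@}K = \frac{\sum_{k=1}^{K} \left(\text{Precision@}k \times v_k\right)}{|\text{relevant chunks in the top } K|}
$$

Putting the four metrics together, the sketch below shows how an evaluation run might look with the ragas library. The imports and column names follow the 0.1.x-era API and the sample row is invented for illustration; check them against the version you have installed.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    answer_relevancy,
    context_precision,
    context_recall,
    faithfulness,
)

# Hypothetical sample row: in practice, `answer` and `contexts` come from
# your RAG pipeline and `ground_truth` from a hand-written reference.
data = {
    "question": ["What does faithfulness measure?"],
    "answer": ["Faithfulness measures whether answers are grounded in context."],
    "contexts": [[
        "Faithfulness measures whether answers are factually grounded "
        "in retrieved context."
    ]],
    "ground_truth": ["Faithfulness measures factual grounding in context."],
}

result = evaluate(
    Dataset.from_dict(data),
    metrics=[faithfulness, answer_relevancy, context_recall, context_precision],
)
print(result)  # per-metric scores between 0 and 1
```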

## LightRAG Deployment Options

LightRAG can be deployed to production through three main approaches: Docker containers, a REST API server, and direct Python integration.

### Docker Container Deployment

Docker containers provide an isolated runtime environment, enabling consistent LightRAG deployment across machines. Containerizing the framework simplifies dependency management and makes scaling straightforward.
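As a concrete illustration, a minimal image for the LightRAG API server could look like the sketch below. The package extra (`lightrag-hku[api]`), entry point (`lightrag-server`), and port are taken from typical LightRAG setups and are assumptions to verify against the release you deploy.

```dockerfile
FROM python:3.11-slim
WORKDIR /app
# Install the server extra; pin an exact version in real deployments.
RUN pip install --no-cache-dir "lightrag-hku[api]"
# 9621 is the server's usual default port (assumption).
EXPOSE 9621
CMD ["lightrag-server"]
```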

### REST API Server with FastAPI

FastAPI serves as the REST API framework for LightRAG's server deployment. The FastAPI server exposes LightRAG functionality through HTTP endpoints, enabling a client-server architecture in which any HTTP-capable application can query the RAG system.
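For example, a client might query a locally running server as sketched below. The endpoint path and payload fields are assumptions; a FastAPI server publishes its actual schema at `/docs`.

```python
import requests

# Hypothetical query against a local LightRAG server; adjust the port,
# path, and payload to match your server's OpenAPI schema at /docs.
response = requests.post(
    "http://localhost:9621/query",
    json={"query": "What does context recall measure?", "mode": "hybrid"},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```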

### Direct Python Integration

Direct Python integration embeds LightRAG inside a Python application, giving programmatic access to indexing and querying. This is the natural fit for custom workflows and pipelines that need fine-grained control over the RAG lifecycle.
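A minimal embedding sketch, based on the README-style synchronous API, is shown below; newer releases move setup to async initialization, so treat the exact calls as illustrative.

```python
from lightrag import LightRAG, QueryParam
from lightrag.llm.openai import gpt_4o_mini_complete  # import path varies by version

# The working directory holds the persisted graph and vector indexes.
rag = LightRAG(
    working_dir="./rag_storage",
    llm_model_func=gpt_4o_mini_complete,
)

# Index a document, then query it through one of the retrieval modes.
rag.insert("LightRAG combines knowledge graphs with vector retrieval.")
answer = rag.query(
    "How does LightRAG retrieve context?",
    param=QueryParam(mode="hybrid"),  # also: "local", "global", "naive"
)
print(answer)
```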

### Deployment Features

LightRAG supports environment-based configuration, so the same codebase can target development, staging, and production scenarios. The framework integrates with multiple LLM providers for flexibility and supports horizontal scaling for production workloads.
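As an illustration, environment-based configuration usually reduces to a `.env` file along these lines; the variable names here are invented for the sketch, so consult the project's own env.example for the canonical keys.

```bash
# Illustrative .env sketch -- variable names are assumptions.
OPENAI_API_KEY=sk-...
LLM_BINDING=openai              # which provider backs generation
LLM_MODEL=gpt-4o-mini
EMBEDDING_BINDING=openai        # which provider backs embeddings
EMBEDDING_MODEL=text-embedding-3-large
```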