Add Langfuse observability integration documentation

This commit is contained in:
yangdx 2025-11-06 10:24:15 +08:00
parent bd62bb3024
commit b0d44d283b
2 changed files with 90 additions and 2 deletions

View file

@@ -53,7 +53,7 @@
## 🎉 News
-- [x] [2025.11.05]🎯📢Added a **RAGAS-based** evaluation framework for LightRAG.
+- [x] [2025.11.05]🎯📢Added a **RAGAS-based** evaluation framework and **Langfuse** observability support.
- [x] [2025.10.22]🎯📢Eliminated bottlenecks in processing **large-scale datasets**.
- [x] [2025.09.15]🎯📢Significantly improved knowledge graph extraction accuracy for **small LLMs** such as Qwen3-30B-A3B.
- [x] [2025.08.29]🎯📢**Reranker** is now supported, significantly boosting performance for mixed queries.
@@ -1463,6 +1463,50 @@ The LightRAG Server provides comprehensive knowledge graph visualization. It supports various
![iShot_2025-03-23_12.40.08](./README.assets/iShot_2025-03-23_12.40.08.png)
## Langfuse Observability Integration
Langfuse provides a drop-in replacement for the OpenAI client that automatically traces all LLM interactions, enabling developers to monitor, debug, and optimize their RAG system without modifying any code.
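A minimal sketch of what that drop-in swap looks like, assuming the Langfuse Python SDK is installed and the `LANGFUSE_*` and `OPENAI_API_KEY` variables are set; the model name and prompt are purely illustrative:
```
# Instead of `from openai import OpenAI`, import the Langfuse wrapper.
# It exposes the same interface but records every call as a trace.
from langfuse.openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY and LANGFUSE_* from the environment

# This call runs against OpenAI as usual and is logged to Langfuse.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize what LightRAG does."}],
)
print(response.choices[0].message.content)
```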
### Installing the Langfuse Optional Dependency
```
# Basic installation
pip install lightrag-hku
# Installation with Langfuse observability support
pip install lightrag-hku[observability]

# Or install from source in editable (development) mode
pip install -e .
pip install -e ".[observability]"
```
### Configuring the Langfuse Environment Variables
Modify the `.env` file:
```
## Langfuse Observability (Optional)
# LLM observability and tracing platform
# Install with: pip install lightrag-hku[observability]
# Sign up at: https://cloud.langfuse.com or self-host
LANGFUSE_SECRET_KEY=""
LANGFUSE_PUBLIC_KEY=""
LANGFUSE_HOST="https://cloud.langfuse.com" # or your self-hosted instance
LANGFUSE_ENABLE_TRACE=true
```
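One quick way to confirm that the credentials are picked up is to run the Langfuse client's auth check. This is a sketch that assumes `python-dotenv` is available to load the `.env` file:
```
from dotenv import load_dotenv   # assumes python-dotenv is installed
from langfuse import Langfuse

load_dotenv()                    # load the LANGFUSE_* keys from .env

# auth_check() returns True when the keys and host are valid.
client = Langfuse()
print("Langfuse credentials valid:", client.auth_check())
```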
### Using Langfuse
Once installed and configured, Langfuse automatically traces every OpenAI LLM call (a usage sketch follows the list below). The Langfuse dashboard provides:
- **Tracing**: View complete LLM call chains
- **Analytics**: Token usage, latency, and cost metrics
- **Debugging**: Inspect prompts and responses
- **Evaluation**: Compare model outputs
- **Monitoring**: Real-time alerting
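As a rough illustration, a standard LightRAG run such as the following (a sketch based on the project's quick-start pattern; the working directory, model helpers, and query mode are illustrative and may differ between versions) produces traces in the dashboard for every OpenAI call made during indexing and querying:
```
import asyncio
from lightrag import LightRAG, QueryParam
from lightrag.llm.openai import gpt_4o_mini_complete, openai_embed
from lightrag.kg.shared_storage import initialize_pipeline_status

async def main():
    rag = LightRAG(
        working_dir="./rag_storage",          # illustrative path
        llm_model_func=gpt_4o_mini_complete,  # OpenAI-compatible binding, traced by Langfuse
        embedding_func=openai_embed,
    )
    await rag.initialize_storages()
    await initialize_pipeline_status()

    # Both the insert and the query trigger OpenAI calls that appear in Langfuse.
    await rag.ainsert("LightRAG builds a knowledge graph from inserted documents.")
    print(await rag.aquery("What does LightRAG build?", param=QueryParam(mode="hybrid")))

    await rag.finalize_storages()

asyncio.run(main())
```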
### Important Note
**Note**: LightRAG currently integrates only OpenAI-compatible API calls with Langfuse. APIs such as Ollama, Azure, and AWS Bedrock cannot use Langfuse observability yet.
## RAGAS-based Evaluation
**RAGAS** (Retrieval Augmented Generation Assessment) is a framework for reference-free evaluation of RAG systems using LLMs. An evaluation script based on RAGAS is provided; for details, see the [RAGAS-based Evaluation Framework](lightrag/evaluation/README.md).

View file

@@ -51,7 +51,7 @@
---
## 🎉 News
-- [x] [2025.11.05]🎯📢Add **RAGAS-based** Evaluation Framework for LightRAG.
+- [x] [2025.11.05]🎯📢Add **RAGAS-based** Evaluation Framework and **Langfuse** observability for LightRAG.
- [x] [2025.10.22]🎯📢Eliminate bottlenecks in processing **large-scale datasets**.
- [x] [2025.09.15]🎯📢Significantly enhances KG extraction accuracy for **small LLMs** like Qwen3-30B-A3B.
- [x] [2025.08.29]🎯📢**Reranker** is supported now, significantly boosting performance for mixed queries.
@@ -1543,6 +1543,50 @@ The LightRAG Server offers a comprehensive knowledge graph visualization feature
![iShot_2025-03-23_12.40.08](./README.assets/iShot_2025-03-23_12.40.08.png)
## Langfuse Observability Integration
Langfuse provides a drop-in replacement for the OpenAI client that automatically tracks all LLM interactions, enabling developers to monitor, debug, and optimize their RAG systems without code changes.
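A minimal sketch of what that drop-in swap looks like, assuming the Langfuse Python SDK is installed and the `LANGFUSE_*` and `OPENAI_API_KEY` variables are set; the model name and prompt are purely illustrative:
```
# Instead of `from openai import OpenAI`, import the Langfuse wrapper.
# It exposes the same interface but records every call as a trace.
from langfuse.openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY and LANGFUSE_* from the environment

# This call runs against OpenAI as usual and is logged to Langfuse.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize what LightRAG does."}],
)
print(response.choices[0].message.content)
```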
### Installation with the Langfuse Option
```
# Basic installation
pip install lightrag-hku
# Installation with Langfuse observability support
pip install lightrag-hku[observability]

# Or install from source in editable (development) mode
pip install -e .
pip install -e ".[observability]"
```
### Configure Langfuse Environment Variables
Modify the `.env` file:
```
## Langfuse Observability (Optional)
# LLM observability and tracing platform
# Install with: pip install lightrag-hku[observability]
# Sign up at: https://cloud.langfuse.com or self-host
LANGFUSE_SECRET_KEY=""
LANGFUSE_PUBLIC_KEY=""
LANGFUSE_HOST="https://cloud.langfuse.com" # or your self-hosted instance
LANGFUSE_ENABLE_TRACE=true
```
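One quick way to confirm that the credentials are picked up is to run the Langfuse client's auth check. This is a sketch that assumes `python-dotenv` is available to load the `.env` file:
```
from dotenv import load_dotenv   # assumes python-dotenv is installed
from langfuse import Langfuse

load_dotenv()                    # load the LANGFUSE_* keys from .env

# auth_check() returns True when the keys and host are valid.
client = Langfuse()
print("Langfuse credentials valid:", client.auth_check())
```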
### Langfuse Usage
Once installed and configured, Langfuse automatically traces every OpenAI LLM call (a usage sketch follows the list below). The Langfuse dashboard features include:
- **Tracing**: View complete LLM call chains
- **Analytics**: Token usage, latency, cost metrics
- **Debugging**: Inspect prompts and responses
- **Evaluation**: Compare model outputs
- **Monitoring**: Real-time alerting
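As a rough illustration, a standard LightRAG run such as the following (a sketch based on the project's quick-start pattern; the working directory, model helpers, and query mode are illustrative and may differ between versions) produces traces in the dashboard for every OpenAI call made during indexing and querying:
```
import asyncio
from lightrag import LightRAG, QueryParam
from lightrag.llm.openai import gpt_4o_mini_complete, openai_embed
from lightrag.kg.shared_storage import initialize_pipeline_status

async def main():
    rag = LightRAG(
        working_dir="./rag_storage",          # illustrative path
        llm_model_func=gpt_4o_mini_complete,  # OpenAI-compatible binding, traced by Langfuse
        embedding_func=openai_embed,
    )
    await rag.initialize_storages()
    await initialize_pipeline_status()

    # Both the insert and the query trigger OpenAI calls that appear in Langfuse.
    await rag.ainsert("LightRAG builds a knowledge graph from inserted documents.")
    print(await rag.aquery("What does LightRAG build?", param=QueryParam(mode="hybrid")))

    await rag.finalize_storages()

asyncio.run(main())
```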
### Important Notice
**Note**: LightRAG currently only integrates OpenAI-compatible API calls with Langfuse. APIs such as Ollama, Azure, and AWS Bedrock are not yet supported for Langfuse observability.
## RAGAS-based Evaluation
**RAGAS** (Retrieval Augmented Generation Assessment) is a framework for reference-free evaluation of RAG systems using LLMs. There is an evaluation script based on RAGAS. For detailed information, please refer to the [RAGAS-based Evaluation Framework](lightrag/evaluation/README.md).