This contribution adds optional Langfuse support for LLM observability and tracing. Langfuse provides a drop-in replacement for the OpenAI client that automatically tracks all LLM interactions without requiring code changes.

Features:
- Optional Langfuse integration with graceful fallback
- Automatic LLM request/response tracing
- Token usage tracking
- Latency metrics
- Error tracking
- Zero code changes required for existing functionality

Implementation:
- Modified `lightrag/llm/openai.py` to conditionally use Langfuse's `AsyncOpenAI` (see the sketch below)
- Falls back to the standard OpenAI client if Langfuse is not installed
- Logs observability status on import

Configuration:

To enable Langfuse tracing, install the observability extras and set the environment variables:

```bash
pip install lightrag-hku[observability]
export LANGFUSE_PUBLIC_KEY="your_public_key"
export LANGFUSE_SECRET_KEY="your_secret_key"
export LANGFUSE_HOST="https://cloud.langfuse.com"  # or your self-hosted instance
```

If Langfuse is not installed or the environment variables are not set, LightRAG uses the standard OpenAI client without any functionality changes.

Changes:
- Modified `lightrag/llm/openai.py` (added optional Langfuse import)
- Updated `pyproject.toml` with optional `observability` dependencies

Dependencies (optional):
- langfuse>=3.8.1
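The conditional import works roughly as follows. This is a minimal sketch, not the exact patch: the flag name `LANGFUSE_ENABLED` and the log messages are illustrative, and it assumes Langfuse's documented drop-in `langfuse.openai.AsyncOpenAI`, which reads the `LANGFUSE_*` environment variables on its own.

```python
# Minimal sketch of the conditional import in lightrag/llm/openai.py.
# LANGFUSE_ENABLED and the log wording are illustrative, not the exact patch.
import logging

logger = logging.getLogger(__name__)

try:
    # Langfuse's drop-in client traces requests/responses, token usage,
    # latency, and errors automatically, picking up LANGFUSE_PUBLIC_KEY,
    # LANGFUSE_SECRET_KEY, and LANGFUSE_HOST from the environment.
    from langfuse.openai import AsyncOpenAI

    LANGFUSE_ENABLED = True
    logger.info("Langfuse observability enabled for the OpenAI client")
except ImportError:
    # Graceful fallback: without Langfuse, behavior is identical to the
    # stock OpenAI SDK, so existing functionality is unchanged.
    from openai import AsyncOpenAI

    LANGFUSE_ENABLED = False
    logger.info("Langfuse not installed; using the standard OpenAI client")

# Downstream code constructs the client exactly as before, e.g.:
#   client = AsyncOpenAI()
#   resp = await client.chat.completions.create(model=..., messages=...)
```

Because the try/except resolves at import time, callers never branch on Langfuse themselves; the same `AsyncOpenAI` name is used either way, which is what keeps the integration zero-change for existing code.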