* Add OpenTelemetry distributed tracing support
  - Add tracer abstraction with no-op and OpenTelemetry implementations
  - Instrument add_episode and add_episode_bulk with tracing spans
  - Instrument LLM client with cache-aware tracing
  - Add configurable span name prefix support
  - Refactor add_episode methods to improve code quality
  - Add OTEL_TRACING.md documentation

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* Fix linting errors in tracing implementation
  - Remove unused episodes_by_uuid variable
  - Fix tracer type annotations for context manager support
  - Replace isinstance tuple with union syntax
  - Use contextlib.suppress for exception handling
  - Fix import ordering and use AbstractContextManager

* Address PR review feedback on tracing implementation

  Critical fixes:
  - Remove flawed error span creation in graphiti.py that created orphaned spans
  - Restructure LLM client tracing to create the span once at the start, eliminating code duplication
  - Initialize the LLM client tracer to NoOpTracer by default to fix type checking

  Enhancements:
  - Add comprehensive span attributes to add_episode: reference_time, entity/edge type counts, previous episodes count, invalidated edge count, community count
  - Optimize isinstance check for better performance

* Add prompt name tracking to OpenTelemetry tracing spans

  Add a prompt_name parameter to all LLM client generate_response() methods and set it as a span attribute on the llm.generate span. This enables better observability by identifying which prompt template was used for each LLM call.

  Changes:
  - Add prompt_name parameter to the LLMClient.generate_response() base method
  - Add prompt_name parameter and tracing to OpenAIBaseClient, AnthropicClient, GeminiClient, and OpenAIGenericClient
  - Update all 14 LLM call sites across maintenance operations to include prompt_name:
    - edge_operations.py: 4 calls
    - node_operations.py: 6 calls (note: 7 listed but only 6 unique)
    - temporal_operations.py: 2 calls
    - community_operations.py: 2 calls

* Fix exception handling in add_episode to record errors in the OpenTelemetry span

  Moved the try-except block inside the OpenTelemetry span context and added proper error recording with span.set_status() and span.record_exception(). This ensures exceptions are captured in the distributed trace, matching the pattern used in add_episode_bulk.
# OpenTelemetry Tracing in Graphiti

## Overview

Graphiti supports OpenTelemetry distributed tracing through dependency injection. Tracing is optional: when no tracer is provided, all span operations are no-ops with zero overhead.
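The zero-overhead default comes from the tracer abstraction mentioned in the changelog above: when no tracer is injected, a no-op implementation stands in. Below is a minimal stdlib-only sketch of that pattern; the class and method names are illustrative, not Graphiti's actual API.

```python
from contextlib import AbstractContextManager, nullcontext
from typing import Any

class NoOpTracer:
    """Stand-in tracer: every span is a do-nothing context manager."""

    def start_span(self, name: str) -> AbstractContextManager[Any]:
        # nullcontext enters and exits without doing any work.
        return nullcontext()

# Illustrative call site: the traced block runs normally, nothing is recorded.
tracer = NoOpTracer()
with tracer.start_span("graphiti.add_episode"):
    result = 1 + 1  # real work would run here, with no tracing overhead

print(result)  # 2
```

Because the no-op span is just an empty context manager, instrumented code can always write `with tracer.start_span(...)` without checking whether tracing is enabled.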
## Installation

To use tracing, install the OpenTelemetry SDK:

```bash
pip install opentelemetry-sdk
```
## Basic Usage

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

from graphiti_core import Graphiti

# Set up OpenTelemetry
provider = TracerProvider()
processor = SimpleSpanProcessor(ConsoleSpanExporter())
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)

# Get a tracer
tracer = trace.get_tracer(__name__)

# Create Graphiti with tracing enabled
graphiti = Graphiti(
    uri="bolt://localhost:7687",
    user="neo4j",
    password="password",
    tracer=tracer,
    trace_span_prefix="myapp.graphiti",  # Optional, defaults to "graphiti"
)
```
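Per the changelog above, instrumented operations such as `add_episode` attach attributes to their span (entity/edge type counts, previous episodes count, and so on) and record exceptions inside the span context so failures show up in the trace. The stdlib-only sketch below illustrates that call-site shape; every class and attribute name here is invented for illustration, not Graphiti's internal code.

```python
from contextlib import contextmanager

class RecordingSpan:
    """Illustrative span that stores attributes and recorded exceptions."""

    def __init__(self, name: str):
        self.name = name
        self.attributes: dict = {}
        self.exceptions: list = []

    def set_attribute(self, key: str, value) -> None:
        self.attributes[key] = value

    def record_exception(self, exc: BaseException) -> None:
        self.exceptions.append(exc)

class RecordingTracer:
    """Illustrative tracer that keeps every span it starts."""

    def __init__(self):
        self.spans: list[RecordingSpan] = []

    @contextmanager
    def start_span(self, name: str):
        span = RecordingSpan(name)
        self.spans.append(span)
        yield span

tracer = RecordingTracer()
with tracer.start_span("graphiti.add_episode") as span:
    span.set_attribute("previous_episodes_count", 3)
    try:
        raise ValueError("simulated ingestion failure")
    except ValueError as exc:
        span.record_exception(exc)  # the error lands on the span, not outside it

print(span.name, span.attributes, len(span.exceptions))
```

Keeping the try/except inside the span context is what lets the exception be recorded on the span that was active when it was raised, which is the fix described in the changelog.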
## Configuration

### Span Name Prefix

You can configure the prefix applied to all span names:

```python
graphiti = Graphiti(
    uri="bolt://localhost:7687",
    user="neo4j",
    password="password",
    tracer=tracer,
    trace_span_prefix="myapp.kg",
)
```
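Assuming the prefix is dot-joined onto each operation's name (the operation names `add_episode` and `llm.generate` appear in the changelog; the joining scheme is an assumption), span names would compose like this:

```python
def span_name(prefix: str, operation: str) -> str:
    """Hypothetical helper: dot-join an optional prefix onto an operation name."""
    return f"{prefix}.{operation}" if prefix else operation

print(span_name("myapp.kg", "add_episode"))   # myapp.kg.add_episode
print(span_name("graphiti", "llm.generate"))  # graphiti.llm.generate
```

A stable prefix like this makes it easy to filter all Graphiti spans for one application in your tracing backend.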