* Add OpenTelemetry distributed tracing support
- Add tracer abstraction with no-op and OpenTelemetry implementations (sketched below)
- Instrument add_episode and add_episode_bulk with tracing spans
- Instrument LLM client with cache-aware tracing
- Add configurable span name prefix support
- Refactor add_episode methods to improve code quality
- Add OTEL_TRACING.md documentation
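A minimal sketch of what the tracer abstraction could look like, assuming illustrative names (`Tracer`, `NoOpTracer`, `OtelTracer`, `span_prefix`) rather than graphiti's exact identifiers:
```python
# A minimal sketch, assuming illustrative names; the real identifiers may
# differ. The no-op variant makes tracing zero-cost when disabled.
from abc import ABC, abstractmethod
from contextlib import AbstractContextManager, nullcontext
from typing import Any


class Tracer(ABC):
    @abstractmethod
    def start_span(self, name: str) -> AbstractContextManager[Any]: ...


class NoOpTracer(Tracer):
    def start_span(self, name: str) -> AbstractContextManager[Any]:
        return nullcontext()


class OtelTracer(Tracer):
    def __init__(self, otel_tracer: Any, span_prefix: str = 'graphiti') -> None:
        self._tracer = otel_tracer
        self._prefix = span_prefix

    def start_span(self, name: str) -> AbstractContextManager[Any]:
        # The configurable prefix yields span names like 'graphiti.add_episode'.
        return self._tracer.start_as_current_span(f'{self._prefix}.{name}')
```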
* Fix linting errors in tracing implementation
- Remove unused episodes_by_uuid variable
- Fix tracer type annotations for context manager support
- Replace isinstance tuple with union syntax
- Use contextlib.suppress for exception handling
- Fix import ordering and use AbstractContextManager
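Two of these fixes in miniature (values here are placeholders):
```python
from contextlib import suppress

value: int | float = 1.5

# Union syntax instead of an isinstance tuple (Python 3.10+):
if isinstance(value, int | float):
    print('numeric')

# contextlib.suppress instead of a try/except/pass block:
cache = {'stale': 1}
with suppress(KeyError):
    del cache['stale']
```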
* Address PR review feedback on tracing implementation
Critical fixes:
- Remove flawed error span creation in graphiti.py that created orphaned spans
- Restructure LLM client tracing to create span once at start, eliminating code duplication
- Initialize LLM client tracer to NoOpTracer by default to fix type checking
Enhancements:
- Add comprehensive span attributes to add_episode: reference_time, entity/edge type counts, previous episodes count, invalidated edge count, community count (sketched below)
- Optimize isinstance check for better performance
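A rough sketch of the resulting shape; the attribute keys are assumptions (the commit names the categories, not the exact keys), and the placeholder inputs stand in for add_episode's real state:
```python
from datetime import datetime, timezone

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Placeholder inputs standing in for add_episode's real state:
reference_time = datetime.now(timezone.utc)
entity_types, edge_types = {}, {}
previous_episodes, invalidated_edges, communities = [], [], []

with tracer.start_as_current_span('graphiti.add_episode') as span:
    # Span created once at the start; attributes added as results arrive.
    span.set_attribute('episode.reference_time', reference_time.isoformat())
    span.set_attribute('entity_types.count', len(entity_types))
    span.set_attribute('edge_types.count', len(edge_types))
    span.set_attribute('previous_episodes.count', len(previous_episodes))
    span.set_attribute('invalidated_edges.count', len(invalidated_edges))
    span.set_attribute('communities.count', len(communities))
```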
* Add prompt name tracking to OpenTelemetry tracing spans
Add prompt_name parameter to all LLM client generate_response() methods
and set it as a span attribute on the llm.generate span. This enables
better observability by identifying which prompt template was used for
each LLM call; a sketch of the resulting signature follows the change list.
Changes:
- Add prompt_name parameter to LLMClient.generate_response() base method
- Add prompt_name parameter and tracing to OpenAIBaseClient,
AnthropicClient, GeminiClient, and OpenAIGenericClient
- Update all 14 LLM call sites across maintenance operations to include
prompt_name:
- edge_operations.py: 4 calls
- node_operations.py: 6 calls (7 were listed, but only 6 are unique)
- temporal_operations.py: 2 calls
- community_operations.py: 2 calls
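Illustratively, the client-side change might look like this; the span name and attribute key follow the commit text, everything else is an assumed stand-in:
```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)


async def generate_response(
    messages: list[dict], prompt_name: str | None = None
) -> dict:
    # 'llm.generate' comes from the commit text; the attribute key and
    # signature details are illustrative.
    with tracer.start_as_current_span('llm.generate') as span:
        if prompt_name is not None:
            span.set_attribute('prompt.name', prompt_name)
        return {}  # the real client would call the provider here
```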
* Fix exception handling in add_episode to record errors in OpenTelemetry span
Moved try-except block inside the OpenTelemetry span context and added
proper error recording with span.set_status() and span.record_exception().
This ensures exceptions are captured in the distributed trace, matching
the pattern used in add_episode_bulk.
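A sketch of the pattern, with the try/except inside the span context (span name is illustrative):
```python
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer(__name__)


def add_episode_sketch() -> None:
    with tracer.start_as_current_span('graphiti.add_episode') as span:
        try:
            ...  # episode processing runs inside the span context
        except Exception as exc:
            # Record the failure on the span before re-raising so the
            # distributed trace captures the error, as in add_episode_bulk.
            span.set_status(Status(StatusCode.ERROR, str(exc)))
            span.record_exception(exc)
            raise
```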
---------
* Add group_id parameter to get_extraction_language_instruction
Enable consumers to provide group-specific language extraction
instructions by passing group_id through the call chain.
Changes:
- Add optional group_id parameter to get_extraction_language_instruction()
- Add group_id parameter to all LLMClient.generate_response() methods
- Pass group_id through to language instruction function
- Maintain backward compatibility via a default value of None
Users can now customize extraction per group:
```python
def custom_instruction(group_id: str | None = None) -> str:
    if group_id == 'spanish-users':
        return '\n\nExtract in Spanish.'
    return '\n\nExtract in original language.'

client.get_extraction_language_instruction = custom_instruction
```
* Pass group_id to generate_response in extraction operations
Thread group_id parameter through all extraction-related generate_response()
calls where it's naturally available (via episode.group_id or node.group_id).
This enables consumers to override get_extraction_language_instruction() with
group-specific language preferences; a call-site sketch follows the change list.
Changes:
- edge_operations.py: Pass group_id in extract_edges()
- node_operations.py: Pass episode.group_id in extract_nodes() and
node.group_id in extract_attributes_from_node()
- node_operations.py: Add group_id parameter to extract_nodes_reflexion()
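A hypothetical call site after the change; all identifiers here are illustrative, not graphiti's exact ones:
```python
from typing import Any


async def extract_nodes_sketch(llm_client: Any, episode: Any, messages: list) -> dict:
    # Hypothetical wrapper; names are illustrative.
    return await llm_client.generate_response(
        messages,
        group_id=episode.group_id,  # threaded through for group-specific
        # language instructions
        prompt_name='extract_nodes',
    )
```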
* Fix type inconsistency in extract_nodes_reflexion parameter
Change group_id parameter from str = '' to str | None = None to match
the pattern used throughout the codebase and align with the optional
nature of group_id in generate_response().
* Remove ensure_ascii parameter and uv.lock file
* Reset uv.lock to main branch version
---------
* Replace MULTILINGUAL_EXTRACTION_RESPONSES constant with configurable
get_extraction_language_instruction() function to improve determinism
and allow customization.
Changes:
- Replace constant with function in client.py
- Update all LLM client implementations to use new function
- Maintain backward compatibility with the same default behavior
- Enable users to override function for custom language requirements
Users can now customize extraction behavior by monkey-patching:
```python
import graphiti_core.llm_client.client as client
client.get_extraction_language_instruction = lambda: "Custom instruction"
```
* feat(gemini): embedding batch size & lite default
The new `gemini-embedding-001` model only allows one embedding input per batch
(instance), but otherwise has impressive characteristics:
https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api
The `DEFAULT_SMALL_MODEL` must not have the 'models/' prefix.
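Since the limit is one input per request, a minimal chunking sketch (names are assumptions):
```python
from collections.abc import Iterator

GEMINI_EMBEDDING_BATCH_SIZE = 1  # gemini-embedding-001: one input per request


def batched(texts: list[str], size: int) -> Iterator[list[str]]:
    for i in range(0, len(texts), size):
        yield texts[i : i + size]


for batch in batched(['first input', 'second input'], GEMINI_EMBEDDING_BATCH_SIZE):
    print(batch)  # each request carries a single input
```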
* Refactor: Improve Gemini Client Error Handling and Reliability
This commit introduces several improvements to the Gemini client to enhance its robustness and reliability.
- Implemented more specific error handling for various Gemini API responses, including rate limits and safety blocks.
- Added a JSON salvaging mechanism to gracefully handle incomplete or malformed JSON responses from the API (sketched below)
- Introduced detailed logging for failed LLM generations to simplify debugging and troubleshooting.
- Refined the Gemini embedder to better handle empty or invalid embedding responses.
- Updated and corrected tests to align with the improved error handling and reliability features.
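A minimal sketch of what a JSON salvaging helper could look like, assuming truncation is the common failure mode; the actual implementation may differ:
```python
import json


def salvage_json(raw: str) -> dict | None:
    """Best-effort recovery of a dict from possibly malformed model output."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Retry once after trimming to the last closing brace, which handles
    # responses truncated mid-stream.
    end = raw.rfind('}')
    if end == -1:
        return None
    try:
        return json.loads(raw[: end + 1])
    except json.JSONDecodeError:
        return None
```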
* fix: cleanup in _log_failed_generation()
* Fix ruff B904 error in gemini_client.py
* fix(gemini): correct retry logic and enhance error logging
Updated the retry mechanism in the GeminiClient to ensure it retries the maximum number of times specified. Improved error logging to provide clearer insights when all retries are exhausted, including detailed information about the last error encountered.
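In outline (names and the retry count are assumptions, not the client's exact code):
```python
import logging
from collections.abc import Awaitable, Callable

logger = logging.getLogger(__name__)
MAX_RETRIES = 4


async def generate_with_retries(call: Callable[[], Awaitable[str]]) -> str:
    last_error: Exception | None = None
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return await call()
        except Exception as exc:  # the real client filters for retryable errors
            last_error = exc
            logger.warning('attempt %d/%d failed: %s', attempt, MAX_RETRIES, exc)
    # All retries exhausted: surface detailed info about the last error.
    raise RuntimeError(f'all {MAX_RETRIES} attempts failed') from last_error
```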
* fix(gemini): enhance error handling for safety blocks and update tests
Refined error handling in the GeminiClient to improve detection of safety block conditions. Updated test cases to reflect changes in exception messages and ensure proper retry logic is enforced. Enhanced mock responses in tests to better simulate real-world scenarios, including handling of invalid JSON responses.
* revert default gemini to text-embedding-001
---------
Co-authored-by: Daniel Chalef <131175+danielchalef@users.noreply.github.com>
* Fix: use self.max_tokens when max_tokens isn't specified
* Fix: use self.max_tokens in OpenAI clients
* Fix: use self.max_tokens in Anthropic client
* Fix: use self.max_tokens in Gemini client
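The shared pattern across these fixes, roughly (parameter names assumed):
```python
def resolve_max_tokens(requested: int | None, client_default: int) -> int:
    # Fall back to the client-level default instead of silently dropping
    # the limit when no per-call value is supplied.
    return requested if requested is not None else client_default
```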
* Bump version from 0.9.0 to 0.9.1 in pyproject.toml and update google-genai dependency to >=0.1.0
* Bump version from 0.9.1 to 0.9.2 in pyproject.toml
* Update google-genai dependency version to >=0.8.0 in pyproject.toml
* Update lock file
* Update pyproject.toml to version 0.9.3, restructure dependencies, and modify author format. Remove outdated Google API key note from README.md.
* upgrade poetry and ruff
* implement structured output
* bug fixes and typing
* inject schema for non-openai clients
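A sketch of schema injection for clients without native structured-output support; the helper name is hypothetical, and pydantic is assumed for the response models:
```python
import json

from pydantic import BaseModel


def inject_schema(system_prompt: str, response_model: type[BaseModel]) -> str:
    # Hypothetical helper: appends the model's JSON schema to the prompt
    # so non-OpenAI providers return parseable structured output.
    schema = json.dumps(response_model.model_json_schema(), indent=2)
    return f'{system_prompt}\n\nRespond only with JSON matching this schema:\n{schema}'
```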
* correct datetime format
* remove List keyword
* Refactor node_operations.py to use updated prompt_library functions
* update example
* feat: Update project name and description
The project name and description in the `pyproject.toml` file have been updated to reflect the changes made to the project.
* chore: Update pyproject.toml to include core package
The `pyproject.toml` file has been updated to include the `core` package in the list of packages. This change ensures that the `core` package is included when building the project.
* fix imports
* fix imports