Replace `MULTILINGUAL_EXTRACTION_RESPONSES` constant with a configurable `get_extraction_language_instruction()` function to improve determinism and allow customization.

Changes:
- Replace the constant with a function in `client.py`
- Update all LLM client implementations to use the new function
- Maintain backward compatibility with the same default behavior
- Enable users to override the function for custom language requirements

Users can now customize extraction behavior by monkey-patching:

```python
import graphiti_core.llm_client.client as client

client.get_extraction_language_instruction = lambda: "Custom instruction"
```

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
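The pattern described above can be sketched in isolation. This is a hedged illustration, not the actual `graphiti_core` source: the function name matches the commit, but the default instruction wording and the `build_extraction_prompt` helper are assumptions for demonstration. The key design point is that clients call the function at prompt-build time, so reassigning the module-level name takes effect on the next call without re-importing.

```python
# Hypothetical sketch of the configurable-instruction pattern.
# The default wording and build_extraction_prompt are illustrative only.

def get_extraction_language_instruction() -> str:
    """Default: ask the model to reply in the source text's language."""
    return (
        'Any extracted information should be returned in the same '
        'language as it was written in.'
    )


def build_extraction_prompt(text: str) -> str:
    """Resolve the instruction at call time, so a monkey-patched
    replacement is picked up by every subsequent prompt build."""
    return f'{text}\n\n{get_extraction_language_instruction()}'


# Override for a deterministic, English-only pipeline:
get_extraction_language_instruction = lambda: 'Always respond in English.'

print(build_extraction_prompt('Bonjour le monde'))
# The prompt now ends with the overridden instruction.
```

Because the lookup happens inside `build_extraction_prompt` rather than at import time, the override works exactly like the monkey-patch shown in the commit message.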
Files in `graphiti_core/llm_client/`:

- `__init__.py`
- `anthropic_client.py`
- `azure_openai_client.py`
- `client.py`
- `config.py`
- `errors.py`
- `gemini_client.py`
- `groq_client.py`
- `openai_base_client.py`
- `openai_client.py`
- `openai_generic_client.py`
- `utils.py`