graphiti/graphiti_core/llm_client
supmo668 db7c179991 Fix: Remove unnecessary max_tokens cap in AnthropicClient
Removes the hacky min() workaround that was capping max_tokens at
DEFAULT_MAX_TOKENS (8192) in the AnthropicClient. The client now
respects the max_tokens value passed by callers, particularly for
edge extraction operations that may require higher token limits
(e.g., 16384).

The new implementation matches how the other LLM clients (OpenAI,
Gemini) handle max_tokens: use the provided value, or fall back to
the instance's max_tokens, with no arbitrary cap.

Resolves TODO in anthropic_client.py:207-208.
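
A minimal sketch of the resulting token-limit selection, assuming the removed
workaround wrapped the value in min(...); resolve_max_tokens is an illustrative
helper, not part of the library:

    from typing import Optional

    DEFAULT_MAX_TOKENS = 8192  # library default cited in the commit message


    def resolve_max_tokens(requested: Optional[int], instance_default: int) -> int:
        """Illustrative helper: use the caller's limit, else the client's default.

        The removed workaround looked roughly like
            min(requested or instance_default, DEFAULT_MAX_TOKENS)
        which silently capped larger requests such as 16384 for edge extraction.
        """
        return requested if requested is not None else instance_default


    # A caller asking for a larger budget is no longer clamped to 8192.
    assert resolve_max_tokens(16384, DEFAULT_MAX_TOKENS) == 16384
    assert resolve_max_tokens(None, DEFAULT_MAX_TOKENS) == 8192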
2025-11-02 21:26:48 -08:00
__init__.py Add support for falkordb (#575) 2025-06-13 12:06:57 -04:00
anthropic_client.py Fix: Remove unnecessary max_tokens cap in AnthropicClient 2025-11-02 21:26:48 -08:00
azure_openai_client.py Fix Azure structured completions (#1039) 2025-11-01 18:40:43 -07:00
client.py Add OpenTelemetry distributed tracing support (#982) 2025-10-05 12:26:14 -07:00
config.py Gpt 5 default (#849) 2025-08-21 12:10:57 -04:00
errors.py Anthropic client (#361) 2025-04-16 12:35:07 -07:00
gemini_client.py Add OpenTelemetry distributed tracing support (#982) 2025-10-05 12:26:14 -07:00
groq_client.py Refactor imports (#675) 2025-07-05 08:57:07 -07:00
openai_base_client.py feat: MCP Server v1.0.0 - Modular architecture with multi-provider support (#1024) 2025-10-30 22:59:01 -07:00
openai_client.py feat: MCP Server v1.0.0 - Modular architecture with multi-provider support (#1024) 2025-10-30 22:59:01 -07:00
openai_generic_client.py Remove JSON indentation from prompts to reduce token usage (#985) 2025-10-06 16:08:43 -07:00
utils.py update new names with input_data (#204) 2024-10-29 11:03:31 -04:00