Removes the hacky `min()` workaround that was capping `max_tokens` to `DEFAULT_MAX_TOKENS` (8192) in the `AnthropicClient`.

This fix allows the client to respect the `max_tokens` parameter passed by callers, particularly for edge extraction operations that may require higher token limits (e.g., 16384). The new implementation aligns with how the other LLM clients (OpenAI, Gemini) handle `max_tokens`: use the provided value or fall back to the instance `max_tokens`, without an arbitrary cap.

Resolves the TODO in anthropic_client.py:207-208.