Increase OpenAIGenericClient max_tokens to 16K and update docs
- Set default max_tokens to 16384 (16K) for OpenAIGenericClient to better support local models
- Add documentation note clarifying OpenAIGenericClient should be used for Ollama and LM Studio
- Previous default was 8192 (8K)
parent 552c9d9634
commit 29b04a08fe
2 changed files with 6 additions and 0 deletions
````diff
@@ -523,6 +523,8 @@ reranker, leveraging Gemini's log probabilities feature to rank passage relevanc
 Graphiti supports Ollama for running local LLMs and embedding models via Ollama's OpenAI-compatible API. This is ideal
 for privacy-focused applications or when you want to avoid API costs.
 
+**Note:** Use `OpenAIGenericClient` (not `OpenAIClient`) for Ollama and other OpenAI-compatible providers like LM Studio. The `OpenAIGenericClient` is optimized for local models with a higher default max token limit (16K vs 8K) and full support for structured outputs.
+
 Install the models:
 
 ```bash
````
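For context, here is a minimal sketch of pointing `OpenAIGenericClient` at a local Ollama server, the setup the new note describes. The import paths, the `LLMConfig` field names, and the model tag are assumptions based on the repository layout, not part of this diff:

```python
# Sketch: wiring OpenAIGenericClient to Ollama's OpenAI-compatible endpoint.
# Assumed (not shown in this commit): these module paths and the
# api_key/model/base_url fields on LLMConfig.
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.llm_client.openai_generic_client import OpenAIGenericClient

config = LLMConfig(
    api_key="ollama",                      # local servers ignore the key, but the field must be set
    model="llama3.1:8b",                   # any chat model pulled via `ollama pull` (example tag)
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
)

# max_tokens was not set explicitly, so the client bumps it from the
# 8192 default to 16384 — the behavior added by this commit.
llm_client = OpenAIGenericClient(config=config)
```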
```diff
@@ -77,6 +77,10 @@ class OpenAIGenericClient(LLMClient):
         if config is None:
             config = LLMConfig()
 
+        # Override max_tokens default to 16K for better compatibility with local models
+        if config.max_tokens == DEFAULT_MAX_TOKENS:
+            config.max_tokens = 16384
+
         super().__init__(config, cache)
 
         if client is None:
```
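The override only fires when the caller left `max_tokens` at the library default, so an explicitly chosen limit survives. A small sketch of the resulting behavior; the hunk above mutates the passed-in config object, which is what the assertions rely on (the `DEFAULT_MAX_TOKENS` import path and `LLMConfig` fields are assumptions):

```python
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.llm_client.openai_generic_client import OpenAIGenericClient

# Config left at the default: the constructor rewrites max_tokens from
# DEFAULT_MAX_TOKENS (8192) to 16384 on this same object.
cfg = LLMConfig(api_key="ollama", base_url="http://localhost:11434/v1")
OpenAIGenericClient(config=cfg)
assert cfg.max_tokens == 16384

# An explicit, non-default limit is left untouched.
cfg = LLMConfig(api_key="ollama", base_url="http://localhost:11434/v1", max_tokens=4096)
OpenAIGenericClient(config=cfg)
assert cfg.max_tokens == 4096
```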