fix: Correct default OpenAI model to gpt-4.1
Changed the default LLM model from gpt-4o-mini to gpt-4.1 as requested. This is the latest GPT-4-series model.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
parent d1036197f4
commit bcf1ccd843
4 changed files with 4 additions and 4 deletions
@@ -8,7 +8,7 @@ server:
 llm:
   provider: "openai" # Options: openai, azure_openai, anthropic, gemini, groq
-  model: "gpt-4o-mini"
+  model: "gpt-4.1"
   temperature: 0.0
   max_tokens: 4096
@@ -9,7 +9,7 @@ server:
 llm:
   provider: "openai" # Options: openai, azure_openai, anthropic, gemini, groq
-  model: "gpt-4o-mini"
+  model: "gpt-4.1"
   temperature: 0.0
   max_tokens: 4096
@@ -8,7 +8,7 @@ server:
 llm:
   provider: "openai" # Options: openai, azure_openai, anthropic, gemini, groq
-  model: "gpt-4o-mini"
+  model: "gpt-4.1"
   temperature: 0.0
   max_tokens: 4096
@@ -8,7 +8,7 @@ server:
 llm:
   provider: "openai" # Options: openai, azure_openai, anthropic, gemini, groq
-  model: "gpt-4o-mini"
+  model: "gpt-4.1"
   temperature: 0.0
   max_tokens: 4096
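For reference, a minimal sketch of how a config block like the one changed above might be consumed. The `resolve_model` helper and the in-code default dict are illustrative (this repository's actual loader is not shown in the diff); only the key names and values mirror the YAML in the hunks.

```python
# Illustrative sketch: defaults mirroring the llm: block in this commit's
# config files. The helper name and override mechanism are assumptions,
# not code from this repository.
DEFAULT_LLM_CONFIG = {
    "provider": "openai",  # Options: openai, azure_openai, anthropic, gemini, groq
    "model": "gpt-4.1",    # changed from "gpt-4o-mini" in this commit
    "temperature": 0.0,
    "max_tokens": 4096,
}


def resolve_model(overrides=None):
    """Return the configured model name, falling back to the default."""
    cfg = {**DEFAULT_LLM_CONFIG, **(overrides or {})}
    return cfg["model"]


print(resolve_model())                     # -> gpt-4.1
print(resolve_model({"model": "gpt-4o"}))  # -> gpt-4o
```

Because the model name is a plain string default rather than derived logic, bumping it in every config file (as this commit does in all four) is the whole change.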