Add temperature guidance for Qwen3 models in env example
parent d7e0701b63
commit d39afcb831
1 changed file with 1 addition and 0 deletions
@@ -175,6 +175,7 @@ LLM_BINDING_API_KEY=your_api_key
 # LLM_BINDING=openai

 ### OpenAI Specific Parameters
+### To mitigate endless output loops and prevent greedy decoding for Qwen3, set the temperature parameter to a value between 0.8 and 1.0
 # OPENAI_LLM_TEMPERATURE=1.0
 # OPENAI_LLM_REASONING_EFFORT=low
 ### For models like Qwen3 with fewer than 32B param, it is recommended to set the presence penalty to 1.5
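For reference, a minimal sketch of how these settings could be applied when calling an OpenAI-compatible endpoint serving a Qwen3 model. OPENAI_LLM_TEMPERATURE and LLM_BINDING_API_KEY come from the example file above; the base-URL variable LLM_BINDING_HOST, the model name, and the prompt are illustrative assumptions, not part of this change.

# Sketch only: assumes the `openai` and `python-dotenv` packages are installed
# and that the .env file shown in the diff has been populated.
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # read the .env file

client = OpenAI(
    base_url=os.getenv("LLM_BINDING_HOST", "http://localhost:8000/v1"),  # assumed variable/default
    api_key=os.getenv("LLM_BINDING_API_KEY", ""),
)

response = client.chat.completions.create(
    model="Qwen3-14B",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize the benefits of graph-based RAG."}],
    # A temperature between 0.8 and 1.0 helps avoid greedy decoding and endless output loops with Qwen3.
    temperature=float(os.getenv("OPENAI_LLM_TEMPERATURE", "1.0")),
    # For Qwen3 models with fewer than 32B parameters, a presence penalty of 1.5 is recommended.
    presence_penalty=1.5,
)
print(response.choices[0].message.content)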