Add temperature guidance for Qwen3 models in env example

yangdx 2025-08-29 15:13:52 +08:00
parent d7e0701b63
commit d39afcb831


@@ -175,6 +175,7 @@ LLM_BINDING_API_KEY=your_api_key
# LLM_BINDING=openai
### OpenAI Specific Parameters
### To mitigate endless output loops and prevent greedy decoding for Qwen3, set the temperature parameter to a value between 0.8 and 1.0
# OPENAI_LLM_TEMPERATURE=1.0
# OPENAI_LLM_REASONING_EFFORT=low
### For models like Qwen3 with fewer than 32B parameters, it is recommended to set the presence penalty to 1.5
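
For readers wiring these settings up by hand, the sketch below shows one way the sampling-related variables in this hunk could be read from the environment and forwarded to an OpenAI-compatible endpoint serving Qwen3. It is an illustrative sketch, not LightRAG's actual binding code; `OPENAI_LLM_PRESENCE_PENALTY`, `LLM_BINDING_HOST`, `LLM_MODEL`, and the default model name are assumed names for the example.

```python
import os

from openai import OpenAI

# Minimal sketch (not the project's real loader): forward the sampling-related
# env vars from the example above to an OpenAI-compatible chat endpoint.
# LLM_BINDING_HOST, OPENAI_LLM_PRESENCE_PENALTY, LLM_MODEL and the fallback
# model name are assumptions made for this illustration.
client = OpenAI(
    api_key=os.environ["LLM_BINDING_API_KEY"],
    base_url=os.getenv("LLM_BINDING_HOST"),  # assumed OpenAI-compatible host
)


def complete(prompt: str) -> str:
    kwargs = {}
    # Qwen3: a temperature of 0.8-1.0 keeps decoding non-greedy and reduces
    # the chance of endless output loops.
    if os.getenv("OPENAI_LLM_TEMPERATURE"):
        kwargs["temperature"] = float(os.environ["OPENAI_LLM_TEMPERATURE"])
    # Qwen3 models below 32B parameters: a presence penalty around 1.5
    # curbs repetition.
    if os.getenv("OPENAI_LLM_PRESENCE_PENALTY"):
        kwargs["presence_penalty"] = float(os.environ["OPENAI_LLM_PRESENCE_PENALTY"])
    response = client.chat.completions.create(
        model=os.getenv("LLM_MODEL", "qwen3-32b"),  # assumed default model name
        messages=[{"role": "user", "content": prompt}],
        **kwargs,
    )
    return response.choices[0].message.content
```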