Add OpenAI reasoning effort and max completion tokens config options

This commit is contained in:
yangdx 2025-08-21 11:04:06 +08:00
parent 0e67ead8fa
commit 0dd245e847


@@ -147,8 +147,10 @@ LLM_BINDING_API_KEY=your_api_key
# LLM_BINDING=openai
### OpenAI Specific Parameters
### Apply frequency penalty to prevent the LLM from generating repetitive or looping outputs
# OPENAI_LLM_TEMPERATURE=1.0
# OPENAI_LLM_REASONING_EFFORT=low
### Set the maximum number of completion tokens if your LLM generates repetitive or unconstrained output
# OPENAI_LLM_MAX_COMPLETION_TOKENS=16384
### Use the following command to see all supported options for openai and azure_openai
### lightrag-server --llm-binding openai --help
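The two new variables mirror parameters in the OpenAI chat-completions API (`reasoning_effort` and `max_completion_tokens`). A minimal sketch of how they might be read from the environment and turned into API keyword arguments — the helper name and mapping are assumptions for illustration, not LightRAG's actual code:

```python
import os

def openai_llm_kwargs(env=None):
    """Collect optional OpenAI-specific parameters from the environment.

    Sketch only: the variable names match this commit's env template;
    the mapping to API keyword arguments is a hypothetical example.
    """
    if env is None:
        env = os.environ
    kwargs = {}
    effort = env.get("OPENAI_LLM_REASONING_EFFORT")
    if effort:
        # OpenAI reasoning models accept "low", "medium", or "high"
        kwargs["reasoning_effort"] = effort
    max_tokens = env.get("OPENAI_LLM_MAX_COMPLETION_TOKENS")
    if max_tokens:
        # Caps generated tokens, useful against runaway/repetitive output
        kwargs["max_completion_tokens"] = int(max_tokens)
    return kwargs
```

Unset variables simply drop out of the dict, so the API call falls back to the provider's defaults.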