Add OpenAI reasoning effort and max completion tokens config options
parent 0e67ead8fa
commit 0dd245e847
1 changed file with 3 additions and 1 deletion
@@ -147,8 +147,10 @@ LLM_BINDING_API_KEY=your_api_key
 # LLM_BINDING=openai
 
 ### OpenAI Specific Parameters
-### Apply frequency penalty to prevent the LLM from generating repetitive or looping outputs
 # OPENAI_LLM_TEMPERATURE=1.0
+# OPENAI_LLM_REASONING_EFFORT=low
+### Set the maximum number of completion tokens if your LLM generates repetitive or unconstrained output
+# OPENAI_LLM_MAX_COMPLETION_TOKENS=16384
 ### use the following command to see all support options for openai and azure_openai
 ### lightrag-server --llm-binding openai --help
 
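The new variables are commented-out defaults: uncommenting them opts in to the corresponding OpenAI request parameters. A minimal sketch of how such env vars could be translated into per-request keyword arguments — the helper name and the exact way LightRAG consumes these variables are assumptions here, though `reasoning_effort` and `max_completion_tokens` are real parameters of OpenAI's Chat Completions API:

```python
import os

def build_openai_llm_kwargs() -> dict:
    """Collect optional OpenAI parameters from the environment.

    Hypothetical helper for illustration: the variable names mirror the
    diff above, but LightRAG's own plumbing may differ. Unset variables
    are simply omitted so the API falls back to its defaults.
    """
    kwargs = {}

    temperature = os.environ.get("OPENAI_LLM_TEMPERATURE")
    if temperature is not None:
        kwargs["temperature"] = float(temperature)

    effort = os.environ.get("OPENAI_LLM_REASONING_EFFORT")
    if effort is not None:
        # Valid values for reasoning models: "low", "medium", "high"
        kwargs["reasoning_effort"] = effort

    max_tokens = os.environ.get("OPENAI_LLM_MAX_COMPLETION_TOKENS")
    if max_tokens is not None:
        # Caps generated tokens, useful against runaway/repetitive output
        kwargs["max_completion_tokens"] = int(max_tokens)

    return kwargs

# Simulate the two options added by this commit being uncommented:
os.environ["OPENAI_LLM_REASONING_EFFORT"] = "low"
os.environ["OPENAI_LLM_MAX_COMPLETION_TOKENS"] = "16384"
print(build_openai_llm_kwargs())
```

The resulting dict would typically be splatted into the client call, e.g. `client.chat.completions.create(model=..., messages=..., **kwargs)`.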