Add OpenAI frequency penalty sample env params
This commit is contained in:
parent 5b0e26d9da
commit 9a62101e9d
1 changed file with 7 additions and 1 deletion
@@ -134,7 +134,13 @@ LLM_BINDING_API_KEY=your_api_key
 # LLM_BINDING_API_KEY=your_api_key
 # LLM_BINDING=openai
 
-### Most Commont Parameters for Ollama Server
+### OpenAI Specific Parameters
+### Apply frequency penalty to prevent the LLM from generating repetitive or looping outputs
+# OPENAI_LLM_FREQUENCY_PENALTY=1.1
+### use the following command to see all support options for openai and azure_openai
+### lightrag-server --llm-binding openai --help
+
+### Ollama Server Specific Parameters
 ### Time out in seconds, None for infinite timeout
 TIMEOUT=240
 ### OLLAMA_LLM_NUM_CTX must be larger than MAX_TOTAL_TOKENS + 2000
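A minimal sketch of how a sample env param like the one this commit adds might be consumed: read the variable, and only forward it to the OpenAI client when it is set (the commented-out `#` prefix in the env file means it is off by default). The helper name `openai_llm_kwargs` and the exact wiring are assumptions for illustration; they are not LightRAG's actual implementation.

```python
import os

def openai_llm_kwargs() -> dict:
    """Collect optional OpenAI sampling parameters from the environment.

    Mirrors the OPENAI_LLM_FREQUENCY_PENALTY sample from the diff above;
    how LightRAG actually maps env vars to client kwargs is an assumption here.
    """
    kwargs = {}
    penalty = os.environ.get("OPENAI_LLM_FREQUENCY_PENALTY")
    if penalty is not None:
        # OpenAI's frequency_penalty penalizes tokens in proportion to how
        # often they have already appeared, discouraging repetitive output.
        kwargs["frequency_penalty"] = float(penalty)
    return kwargs

# Simulate uncommenting the sample line in the env file.
os.environ["OPENAI_LLM_FREQUENCY_PENALTY"] = "1.1"
print(openai_llm_kwargs())  # → {'frequency_penalty': 1.1}
```

When the variable is unset, the dict stays empty and the client falls back to its default, which matches the opt-in style of the commented-out sample params.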