Add OpenAI frequency penalty sample env params
This commit is contained in:
parent bac09118d5
commit 2a46667ac9
1 changed file with 7 additions and 1 deletion
@@ -136,7 +136,13 @@ LLM_BINDING_API_KEY=your_api_key
 # LLM_BINDING_API_KEY=your_api_key
 # LLM_BINDING=openai
 
-### Most Commont Parameters for Ollama Server
+### OpenAI Specific Parameters
+### Apply frequency penalty to prevent the LLM from generating repetitive or looping outputs
+# OPENAI_LLM_FREQUENCY_PENALTY=1.1
+### use the following command to see all support options for openai and azure_openai
+### lightrag-server --llm-binding openai --help
+
+### Ollama Server Specific Parameters
 ### Time out in seconds, None for infinite timeout
 TIMEOUT=240
 ### OLLAMA_LLM_NUM_CTX must be larger than MAX_TOTAL_TOKENS + 2000
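As a minimal sketch of how a server might consume the commented-out `OPENAI_LLM_FREQUENCY_PENALTY` sample above: read the variable if set, convert it to a float, and pass it through as the `frequency_penalty` keyword of an OpenAI chat-completion call. The helper name `build_llm_kwargs` is illustrative, not LightRAG's actual implementation.

```python
import os

def build_llm_kwargs() -> dict:
    """Collect optional OpenAI sampling parameters from the environment.

    Hypothetical helper: mirrors the env-var convention shown in the diff,
    not the project's real code path.
    """
    kwargs = {}
    penalty = os.environ.get("OPENAI_LLM_FREQUENCY_PENALTY")
    if penalty is not None:
        # In the OpenAI API, frequency_penalty is a float in [-2.0, 2.0];
        # positive values penalize tokens by how often they have already
        # appeared, discouraging repetitive or looping output.
        kwargs["frequency_penalty"] = float(penalty)
    return kwargs
```

The resulting dict can then be splatted into a chat-completion request (e.g. `client.chat.completions.create(..., **build_llm_kwargs())`), so leaving the variable unset simply falls back to the API default of 0.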