Add OpenAI frequency penalty sample env params

yangdx 2025-08-14 01:50:27 +08:00
parent bac09118d5
commit 2a46667ac9


@@ -136,7 +136,13 @@ LLM_BINDING_API_KEY=your_api_key
 # LLM_BINDING_API_KEY=your_api_key
 # LLM_BINDING=openai
 
-### Most Commont Parameters for Ollama Server
+### OpenAI Specific Parameters
+### Apply frequency penalty to prevent the LLM from generating repetitive or looping outputs
+# OPENAI_LLM_FREQUENCY_PENALTY=1.1
+### use the following command to see all support options for openai and azure_openai
+### lightrag-server --llm-binding openai --help
+
+### Ollama Server Specific Parameters
 ### Time out in seconds, None for infinite timeout
 TIMEOUT=240
 ### OLLAMA_LLM_NUM_CTX must be larger than MAX_TOTAL_TOKENS + 2000
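
For reference, OPENAI_LLM_FREQUENCY_PENALTY maps onto the frequency_penalty parameter of an OpenAI chat completions request. The sketch below shows how such an env var could be read and forwarded, assuming the openai>=1.x Python client and a placeholder model name; it illustrates the parameter only and is not LightRAG's actual code path.

import os
from openai import OpenAI

# Illustrative sketch, not LightRAG internals.
# Read the sample env var; fall back to 0.0 (OpenAI's default) when unset.
frequency_penalty = float(os.environ.get("OPENAI_LLM_FREQUENCY_PENALTY", "0.0"))

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize what a frequency penalty does."}],
    frequency_penalty=frequency_penalty,  # OpenAI accepts values from -2.0 to 2.0
)
print(response.choices[0].message.content)

OpenAI documents frequency_penalty with a range of -2.0 to 2.0 and a default of 0, so the sample value of 1.1 applies a fairly strong penalty to repeated tokens. The line ships commented out, so it only takes effect once uncommented.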