Update env.example
This commit is contained in:
parent
77569ddea2
commit
4a97e9f469
1 changed file with 1 addition and 1 deletion
diff --git a/env.example b/env.example
@@ -175,6 +175,7 @@ LLM_BINDING_API_KEY=your_api_key
 # LLM_BINDING=openai
 
 ### OpenAI Compatible API Specific Parameters
+OPENAI_LLM_TEMPERATURE=0.8
 ### Set the max_tokens to mitigate endless output of some LLMs (less than LLM_TIMEOUT * llm_output_tokens/second, i.e. 9000 = 180s * 50 tokens/s)
 ### Typically, max_tokens does not include prompt content, though some models, such as Gemini Models, are exceptions
 ### For vLLM/SGLang deployed models, or most OpenAI compatible API providers
@@ -183,7 +184,6 @@ LLM_BINDING_API_KEY=your_api_key
 OPENAI_LLM_MAX_COMPLETION_TOKENS=9000
 
 #### OpenAI's new API utilizes max_completion_tokens instead of max_tokens
-# OPENAI_LLM_MAX_TOKENS=9000
 # OPENAI_LLM_MAX_COMPLETION_TOKENS=9000
 
 ### OpenRouter Specific Parameters
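Since OpenAI's newer API uses max_completion_tokens in place of max_tokens, a client reading these settings may want to honor both names, preferring the new one. A minimal sketch of that fallback logic, assuming the variable names from env.example (the helper function itself is hypothetical, not part of the repository):

```python
import os


def resolve_max_completion_tokens(env=None, default=9000):
    """Pick the output-token cap from environment-style settings.

    The newer OPENAI_LLM_MAX_COMPLETION_TOKENS takes precedence; the
    legacy OPENAI_LLM_MAX_TOKENS is used only as a fallback. The default
    of 9000 mirrors the value set in env.example.
    """
    if env is None:
        env = os.environ
    for key in ("OPENAI_LLM_MAX_COMPLETION_TOKENS", "OPENAI_LLM_MAX_TOKENS"):
        value = env.get(key)
        if value:
            return int(value)
    return default
```

For example, with only the legacy variable set, `resolve_max_completion_tokens({"OPENAI_LLM_MAX_TOKENS": "4000"})` returns 4000, while setting both makes the new name win.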