| File | Latest commit | Date |
| --- | --- | --- |
| __init__.py | Separated llms from the main llm.py file and fixed some deprecation bugs | 2025-01-25 00:11:00 +01:00 |
| anthropic.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| azure_openai.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| bedrock.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| binding_options.py | Update Gemini LLM options: add seed and thinking config, remove MIME type | 2025-11-07 14:32:42 +08:00 |
| gemini.py | Add timeout support to Gemini LLM and improve parameter handling | 2025-11-07 15:50:14 +08:00 |
| hf.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| jina.py | refactor: simplify jina embedding dimension handling | 2025-11-07 22:09:57 +08:00 |
| llama_index_impl.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| lmdeploy.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| lollms.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| nvidia_openai.py | refactor: Remove deprecated max_token_size from embedding configuration | 2025-07-29 10:49:35 +08:00 |
| ollama.py | Modernize type hints and remove Python 3.8 compatibility code | 2025-10-02 23:15:42 +08:00 |
| openai.py | Merge branch 'main' into apply-dim-to-embedding-call | 2025-11-07 20:48:22 +08:00 |
| siliconcloud.py | refactor: Remove deprecated max_token_size from embedding configuration | 2025-07-29 10:49:35 +08:00 |
| zhipu.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |