| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | Separated llms from the main llm.py file and fixed some deprecation bugs | 2025-01-25 00:11:00 +01:00 |
| anthropic.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| azure_openai.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| bedrock.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| binding_options.py | extended to use gemini, switched to use gemini-flash-latest | 2025-10-20 13:17:16 +03:00 |
| gemini.py | extended to use gemini, switched to use gemini-flash-latest | 2025-10-20 13:17:16 +03:00 |
| hf.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| jina.py | feat: improve Jina API error handling to show clean messages instead of HTML | 2025-08-05 11:46:02 +08:00 |
| llama_index_impl.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| lmdeploy.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| lollms.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| nvidia_openai.py | refactor: Remove deprecated max_token_size from embedding configuration | 2025-07-29 10:49:35 +08:00 |
| ollama.py | Modernize type hints and remove Python 3.8 compatibility code | 2025-10-02 23:15:42 +08:00 |
| openai.py | fix: Remove trailing whitespace for pre-commit linting | 2025-10-09 15:01:53 +11:00 |
| siliconcloud.py | refactor: Remove deprecated max_token_size from embedding configuration | 2025-07-29 10:49:35 +08:00 |
| zhipu.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |