|
| Name | Latest commit | Date |
| --- | --- | --- |
| deprecated | Add max_token_size parameter to embedding function decorators | 2025-11-14 18:41:43 +08:00 |
| __init__.py | Separated llms from the main llm.py file and fixed some deprecation bugs | 2025-01-25 00:11:00 +01:00 |
| anthropic.py | chore/support-voyageai-embed-directly: chore: resolve comments | 2025-12-12 06:38:08 -08:00 |
| azure_openai.py | Consolidate Azure OpenAI implementation into main OpenAI module | 2025-11-21 17:12:33 +08:00 |
| bedrock.py | Improve Bedrock error handling with retry logic and custom exceptions | 2025-11-14 18:51:41 +08:00 |
| binding_options.py | Add Gemini embedding support | 2025-11-08 03:34:30 +08:00 |
| gemini.py | Add max_token_size parameter to embedding function decorators | 2025-11-14 18:41:43 +08:00 |
| hf.py | Add max_token_size parameter to embedding function decorators | 2025-11-14 18:41:43 +08:00 |
| jina.py | Add configurable model parameter to jina_embed function | 2025-11-28 15:38:29 +08:00 |
| llama_index_impl.py | Add max_token_size parameter to embedding function decorators | 2025-11-14 18:41:43 +08:00 |
| lmdeploy.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |
| lollms.py | Add max_token_size parameter to embedding function decorators | 2025-11-14 18:41:43 +08:00 |
| nvidia_openai.py | Add max_token_size parameter to embedding function decorators | 2025-11-14 18:41:43 +08:00 |
| ollama.py | Allow embedding models to use provider defaults when unspecified | 2025-11-28 16:57:33 +08:00 |
| openai.py | Add max_token_size parameter to azure_openai_embed wrapper | 2025-11-28 13:41:01 +08:00 |
| voyageai.py | chore/support-voyageai-embed-directly: feat: voyageai embed support | 2025-12-04 16:18:52 -08:00 |
| zhipu.py | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00 |