LightRAG/lightrag/llm
Latest commit: 46ce6d9a13 by yangdx, 2025-11-20 18:20:22 +08:00
Fix Azure OpenAI embedding model parameter fallback
- Use model param if provided
- Fall back to deployment name
- Fix embedding API call
- Improve parameter handling
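A minimal sketch of the fallback this commit describes, assuming the standard `openai` Python SDK (`AsyncAzureOpenAI`) and illustrative environment variable names; it is not the actual azure_openai.py implementation.

```python
# Sketch only: "use model param if provided, else fall back to the deployment
# name". The environment variable names are assumptions, not LightRAG's.
import os

import numpy as np
from openai import AsyncAzureOpenAI


async def azure_openai_embed(texts: list[str], model: str | None = None) -> np.ndarray:
    deployment = os.getenv("AZURE_EMBEDDING_DEPLOYMENT")
    # Prefer an explicitly passed model; otherwise fall back to the deployment name.
    model_name = model or deployment

    client = AsyncAzureOpenAI(
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
        api_version=os.getenv("AZURE_OPENAI_API_VERSION", "2024-02-01"),
        azure_deployment=deployment,
    )
    response = await client.embeddings.create(model=model_name, input=texts)
    return np.array([item.embedding for item in response.data])
```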
deprecated Add max_token_size parameter to embedding function decorators 2025-11-14 18:41:43 +08:00
__init__.py Separated llms from the main llm.py file and fixed some deprecation bugs 2025-01-25 00:11:00 +01:00
anthropic.py Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers 2025-09-09 22:34:36 +08:00
azure_openai.py Fix Azure OpenAI embedding model parameter fallback 2025-11-20 18:20:22 +08:00
bedrock.py Improve Bedrock error handling with retry logic and custom exceptions 2025-11-14 18:51:41 +08:00
binding_options.py Add Gemini embedding support 2025-11-08 03:34:30 +08:00
gemini.py Add max_token_size parameter to embedding function decorators 2025-11-14 18:41:43 +08:00
hf.py Add max_token_size parameter to embedding function decorators 2025-11-14 18:41:43 +08:00
jina.py Add max_token_size parameter to embedding function decorators 2025-11-14 18:41:43 +08:00
llama_index_impl.py Add max_token_size parameter to embedding function decorators 2025-11-14 18:41:43 +08:00
lmdeploy.py Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers 2025-09-09 22:34:36 +08:00
lollms.py Add max_token_size parameter to embedding function decorators 2025-11-14 18:41:43 +08:00
nvidia_openai.py Add max_token_size parameter to embedding function decorators 2025-11-14 18:41:43 +08:00
ollama.py Add max_token_size parameter to embedding function decorators 2025-11-14 18:41:43 +08:00
openai.py Add max_token_size parameter to embedding function decorators 2025-11-14 18:41:43 +08:00
zhipu.py Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers 2025-09-09 22:34:36 +08:00
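Most of the entries above share the "Add max_token_size parameter to embedding function decorators" commit. Below is a simplified sketch of what such a decorator can look like; the names follow LightRAG's `wrap_embedding_func_with_attrs` / `EmbeddingFunc` helpers, but the bodies are illustrative rather than the repository's code.

```python
# Simplified sketch: a decorator that records embedding_dim and max_token_size
# as metadata on an async embedding function.
from dataclasses import dataclass
from typing import Awaitable, Callable

import numpy as np


@dataclass
class EmbeddingFunc:
    embedding_dim: int
    max_token_size: int
    func: Callable[..., Awaitable[np.ndarray]]

    async def __call__(self, *args, **kwargs) -> np.ndarray:
        return await self.func(*args, **kwargs)


def wrap_embedding_func_with_attrs(**attrs):
    """Wrap an async embedding function with dimension / token-limit metadata."""

    def decorator(func):
        return EmbeddingFunc(func=func, **attrs)

    return decorator


@wrap_embedding_func_with_attrs(embedding_dim=1536, max_token_size=8192)
async def example_embed(texts: list[str]) -> np.ndarray:
    # Placeholder: a real provider call (OpenAI, Ollama, Jina, ...) goes here.
    return np.zeros((len(texts), 1536))
```

Callers can then read `example_embed.max_token_size` to decide how to batch or truncate texts before embedding.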
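anthropic.py, lmdeploy.py, and zhipu.py were last touched by the "Deepseek Style Chain of Thought (CoT) Support" commit. The core idea for OpenAI-compatible streaming responses is separating the `reasoning_content` trace from the final answer; here is a minimal sketch assuming the `openai` SDK and a provider that emits that field.

```python
# Sketch: split DeepSeek-style chain-of-thought (reasoning_content) from the
# answer while streaming an OpenAI-compatible chat completion.
from openai import AsyncOpenAI


async def stream_with_cot(client: AsyncOpenAI, model: str, prompt: str) -> tuple[str, str]:
    stream = await client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    reasoning_parts: list[str] = []
    answer_parts: list[str] = []
    async for chunk in stream:
        if not chunk.choices:
            continue
        delta = chunk.choices[0].delta
        # DeepSeek-style providers expose the thinking trace as a separate field.
        thought = getattr(delta, "reasoning_content", None)
        if thought:
            reasoning_parts.append(thought)
        if delta.content:
            answer_parts.append(delta.content)
    return "".join(reasoning_parts), "".join(answer_parts)
```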
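For bedrock.py's "Improve Bedrock error handling with retry logic and custom exceptions", a hedged sketch using `tenacity`; the exception names, error classification, and retry settings are assumptions, not the file's actual contents.

```python
# Sketch only: custom exception hierarchy plus exponential-backoff retries
# that re-attempt only throttling failures.
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential


class BedrockError(Exception):
    """Generic, non-retryable Bedrock failure."""


class BedrockThrottleError(BedrockError):
    """Throttling / rate-limit failure; worth retrying with backoff."""


def classify_bedrock_error(exc: Exception) -> BedrockError:
    """Map a raw SDK error onto the custom exception hierarchy (heuristic)."""
    text = type(exc).__name__ + str(exc)
    if "Throttling" in text or "TooManyRequests" in text:
        return BedrockThrottleError(str(exc))
    return BedrockError(str(exc))


@retry(
    stop=stop_after_attempt(5),
    wait=wait_exponential(multiplier=1, max=60),
    retry=retry_if_exception_type(BedrockThrottleError),
)
async def bedrock_complete(client, request: dict) -> dict:
    try:
        # A real implementation would call the bedrock-runtime Converse or
        # InvokeModel API here (e.g. via an aioboto3 client) and parse the response.
        return await client.converse(**request)
    except BedrockError:
        raise
    except Exception as exc:
        raise classify_bedrock_error(exc) from exc
```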