LightRAG/lightrag/llm

Latest commit: f5b48587ed by yangdx, 2025-11-17 12:54:32 +08:00
Improve Bedrock error handling with retry logic and custom exceptions
• Add specific exception types
• Implement proper retry mechanism
• Better error classification
• Enhanced logging and validation
• Enable embedding retry decorator
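The commit bullets above describe retry logic built on custom exception types that separate transient failures (retry) from permanent ones (fail fast). A minimal sketch of that pattern follows; the names `BedrockError`, `BedrockThrottleError`, `BedrockAuthError`, and `with_retries` are illustrative assumptions, not the actual classes or functions in `bedrock.py`.

```python
import logging
import random
import time

logger = logging.getLogger("bedrock-sketch")


# Hypothetical exception hierarchy used for error classification.
class BedrockError(Exception):
    """Base class for Bedrock call failures."""


class BedrockThrottleError(BedrockError):
    """Transient throttling failure; safe to retry."""


class BedrockAuthError(BedrockError):
    """Permanent credential failure; retrying will not help."""


def with_retries(fn, max_attempts=3, base_delay=0.01):
    """Call `fn`, retrying transient errors with jittered exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except BedrockThrottleError:
            if attempt == max_attempts:
                raise  # exhausted the retry budget
            delay = base_delay * (2 ** (attempt - 1)) * (1 + random.random())
            logger.warning("throttled, retry in %.3fs (attempt %d)", delay, attempt)
            time.sleep(delay)
        # Permanent errors such as BedrockAuthError propagate immediately,
        # since they are not caught here.
```

The key design point is that classification lives in the exception types, so the retry loop stays generic: it only needs to know which base class means "transient".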
File                | Last commit                                                                           | Date
deprecated          | Add max_token_size parameter to embedding function decorators                         | 2025-11-17 12:54:32 +08:00
__init__.py         | Separated llms from the main llm.py file and fixed some deprication bugs              | 2025-01-25 00:11:00 +01:00
anthropic.py        | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00
azure_openai.py     | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00
bedrock.py          | Improve Bedrock error handling with retry logic and custom exceptions                 | 2025-11-17 12:54:32 +08:00
binding_options.py  | Add Gemini embedding support                                                          | 2025-11-08 03:34:30 +08:00
gemini.py           | Add max_token_size parameter to embedding function decorators                         | 2025-11-17 12:54:32 +08:00
hf.py               | Add max_token_size parameter to embedding function decorators                         | 2025-11-17 12:54:32 +08:00
jina.py             | Add max_token_size parameter to embedding function decorators                         | 2025-11-17 12:54:32 +08:00
llama_index_impl.py | Add max_token_size parameter to embedding function decorators                         | 2025-11-17 12:54:32 +08:00
lmdeploy.py         | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00
lollms.py           | Add max_token_size parameter to embedding function decorators                         | 2025-11-17 12:54:32 +08:00
nvidia_openai.py    | Add max_token_size parameter to embedding function decorators                         | 2025-11-17 12:54:32 +08:00
ollama.py           | Add max_token_size parameter to embedding function decorators                         | 2025-11-17 12:54:32 +08:00
openai.py           | Add max_token_size parameter to embedding function decorators                         | 2025-11-17 12:54:32 +08:00
zhipu.py            | Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers | 2025-09-09 22:34:36 +08:00
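Most provider modules in this listing share the "Add max_token_size parameter to embedding function decorators" change. The idea of such a decorator, attaching capacity metadata (vector dimension, maximum input length in tokens) to a provider's embedding callable, can be sketched as below; `EmbeddingFunc`, `embedding_func`, and `toy_embed` are hypothetical names for illustration and may differ from LightRAG's actual decorator signature.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class EmbeddingFunc:
    """An embedding callable bundled with its capacity metadata."""
    func: Callable[..., List[List[float]]]
    embedding_dim: int   # size of each returned vector
    max_token_size: int  # longest input, in tokens, the backend accepts


def embedding_func(embedding_dim: int, max_token_size: int = 8192):
    """Decorator attaching dimension and token-limit metadata to an embedder.

    Callers (e.g. a chunking pipeline) can then read `max_token_size` to
    decide how to split text before sending it to the provider.
    """
    def decorator(func):
        return EmbeddingFunc(
            func=func,
            embedding_dim=embedding_dim,
            max_token_size=max_token_size,
        )
    return decorator


@embedding_func(embedding_dim=3, max_token_size=512)
def toy_embed(texts):
    # Stand-in for a real provider call; returns fixed-size zero vectors.
    return [[0.0] * 3 for _ in texts]
```

Carrying the limit on the wrapper rather than hard-coding it per call site is what lets each provider module (`openai.py`, `ollama.py`, `jina.py`, …) declare its own backend limit in one place.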