LightRAG/lightrag/llm
yangdx 24effb127d Improve error handling and response consistency in streaming endpoints
• Add error message forwarding to client
• Handle stream cancellations gracefully
• Add logging for stream errors
• Ensure clean stream termination
• Add try-catch in OpenAI streaming
2025-02-05 10:44:48 +08:00
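The bullets above describe a common pattern for robust LLM streaming: wrap the token stream so that exceptions are logged and forwarded to the client as a final message, and client cancellations end the stream quietly. The actual implementation lives in `openai.py`; the sketch below is only an illustration of that pattern, and the wrapper name `safe_stream` is hypothetical, not from the repository.

```python
import asyncio
import logging

logger = logging.getLogger(__name__)


async def safe_stream(inner):
    """Wrap an async token stream (hypothetical helper, for illustration).

    - forwards a readable error message to the client instead of dying silently
    - handles stream cancellation (client disconnect) gracefully
    - logs stream errors
    - always terminates the stream cleanly
    """
    try:
        async for chunk in inner:
            yield chunk
    except asyncio.CancelledError:
        # Client went away: stop quietly rather than propagate mid-response.
        logger.info("stream cancelled by client")
    except Exception as exc:
        # Log the failure and forward it as the final chunk so the
        # client sees why the response stopped.
        logger.error("stream error: %s", exc)
        yield f"\n\n[stream error] {exc}"
```

A FastAPI endpoint could pass `safe_stream(llm_token_iterator)` to a `StreamingResponse`, so partial output already sent to the client is followed by an error notice instead of an abrupt disconnect.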
__init__.py Separated LLMs from the main llm.py file and fixed some deprecation bugs 2025-01-25 00:11:00 +01:00
azure_openai.py Fixed missing imports bug and fixed linting 2025-01-25 00:55:07 +01:00
bedrock.py Fixed missing imports bug and fixed linting 2025-01-25 00:55:07 +01:00
hf.py Fixed missing imports bug and fixed linting 2025-01-25 00:55:07 +01:00
jina.py Fixed missing imports bug and fixed linting 2025-01-25 00:55:07 +01:00
lmdeploy.py Fixed missing imports bug and fixed linting 2025-01-25 00:55:07 +01:00
lollms.py Fixed missing imports bug and fixed linting 2025-01-25 00:55:07 +01:00
nvidia_openai.py Fixed missing imports bug and fixed linting 2025-01-25 00:55:07 +01:00
ollama.py Fixed missing imports bug and fixed linting 2025-01-25 00:55:07 +01:00
openai.py Improve error handling and response consistency in streaming endpoints 2025-02-05 10:44:48 +08:00
siliconcloud.py Fixed missing imports bug and fixed linting 2025-01-25 00:55:07 +01:00
zhipu.py Fixed missing imports bug and fixed linting 2025-01-25 00:55:07 +01:00