ragflow/rag/llm
Yongteng Lei b6c4722687
Refa: make RAGFlow more asynchronous (#11601)
### What problem does this PR solve?

This change makes RAGFlow more asynchronous. It has been verified in chat and
agent scenarios, where it reduces blocking behavior. Related: #11551, #11579.
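The PR does not include its diff here, but the general technique for reducing blocking behavior in an async Python service is to move synchronous SDK calls off the event loop. The sketch below is a minimal illustration, not RAGFlow's actual code: `blocking_chat` is a hypothetical stand-in for a synchronous LLM client call, and `asyncio.to_thread` hands it to a worker thread so concurrent chat/agent requests can overlap.

```python
import asyncio
import time

# Hypothetical stand-in for a blocking LLM SDK call; RAGFlow's real
# chat_model classes are not reproduced here.
def blocking_chat(prompt: str) -> str:
    time.sleep(0.1)  # simulate network latency of a synchronous call
    return f"echo: {prompt}"

async def async_chat(prompt: str) -> str:
    # Offload the blocking call to a worker thread so the event loop
    # stays free to serve other requests in the meantime.
    return await asyncio.to_thread(blocking_chat, prompt)

async def main() -> list[str]:
    # Two requests now run concurrently instead of back to back.
    replies = await asyncio.gather(async_chat("hi"), async_chat("there"))
    print(replies)
    return replies

if __name__ == "__main__":
    asyncio.run(main())
```

With the synchronous version, two 0.1 s calls take about 0.2 s in sequence; wrapped this way they complete in roughly 0.1 s total, which is the kind of blocking reduction the PR description refers to.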

However, the impact of these changes still requires further
investigation to ensure everything works as expected.

### Type of change

- [x] Refactoring
2025-12-01 14:24:06 +08:00
__init__.py Refa: make RAGFlow more asynchronous (#11601) 2025-12-01 14:24:06 +08:00
chat_model.py Refa: make RAGFlow more asynchronous (#11601) 2025-12-01 14:24:06 +08:00
cv_model.py Fix: uv lock updates (#11511) 2025-11-25 16:01:12 +08:00
embedding_model.py feat: add new LLM provider Jiekou.AI (#11300) 2025-11-17 19:47:46 +08:00
rerank_model.py fix cohere rerank base_url default (#11353) 2025-11-20 09:46:39 +08:00
sequence2txt_model.py Move token related functions to common (#10942) 2025-11-03 08:50:05 +08:00
tts_model.py Move token related functions to common (#10942) 2025-11-03 08:50:05 +08:00