ragflow/rag/llm
Stephen Hu ca320a8c30
Refactor: for total_token_count method use if to check first. (#9707)
### What problem does this PR solve?

For the total_token_count method, check with an if statement first instead of relying on exception handling, which improves performance in the cases that would otherwise raise.
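
Below is a minimal sketch of the idea, not the actual ragflow implementation: look up the token-count field with `if` checks before giving up, rather than catching an exception on every malformed or unexpected response. The attribute/key layout (`usage.total_tokens`) is an assumed OpenAI-style shape for illustration.

```python
def total_token_count(resp) -> int:
    """Return the total token count from a model response, or 0 if it cannot be found.

    Sketch only: assumes an OpenAI-style response where usage.total_tokens
    (attribute or dict key) holds the count.
    """
    # Prefer explicit checks over try/except so the common "field missing"
    # case does not pay the cost of raising and catching an exception.
    usage = getattr(resp, "usage", None)  # hypothetical attribute layout
    if usage is not None and getattr(usage, "total_tokens", None) is not None:
        return usage.total_tokens

    if isinstance(resp, dict):
        usage = resp.get("usage") or {}
        if isinstance(usage, dict) and isinstance(usage.get("total_tokens"), int):
            return usage["total_tokens"]

    # Unknown shape: report zero rather than raising.
    return 0
```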

### Type of change

- [x] Refactoring
2025-08-26 10:47:20 +08:00
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | Refa: replace Chat Ollama implementation with LiteLLM (#9693) | 2025-08-25 17:56:31 +08:00 |
| chat_model.py | Refa: replace Chat Ollama implementation with LiteLLM (#9693) | 2025-08-25 17:56:31 +08:00 |
| cv_model.py | Fix: Gemini parameters error (#9520) | 2025-08-18 14:51:10 +08:00 |
| embedding_model.py | Add **kwargs to model base class constructors (#9252) | 2025-08-07 09:45:37 +08:00 |
| rerank_model.py | Refactor: for total_token_count method use if to check first. (#9707) | 2025-08-26 10:47:20 +08:00 |
| sequence2txt_model.py | Refa: OpenAI whisper-1 (#9552) | 2025-08-19 16:41:18 +08:00 |
| tts_model.py | Add **kwargs to model base class constructors (#9252) | 2025-08-07 09:45:37 +08:00 |