ragflow/rag/llm
Günter Lukas fee757eb41
Fix: Disable reasoning on Gemini 2.5 Flash by default (#10477)
### What problem does this PR solve?

Gemini 2.5 Flash models use reasoning by default, and there is currently no
way to disable this behaviour. This leads to very long response times (>
1 min). Reasoning should be disabled by default and configurable.

issue #10474 
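
The fix boils down to defaulting the model's "thinking" budget to zero unless the caller opts back in. A minimal sketch of that idea, assuming the google-genai SDK's `ThinkingConfig` shape (`thinking_budget=0` disables reasoning tokens on Gemini 2.5 Flash); the helper name `build_gen_config` and the `gen_conf` dict are illustrative, not the actual RAGFlow code:

```python
def build_gen_config(gen_conf: dict) -> dict:
    """Merge caller options into a generation config where reasoning is
    disabled by default but still configurable.

    If `gen_conf` carries an explicit `thinking_budget`, it is honoured;
    otherwise the budget defaults to 0, which turns reasoning off.
    """
    gen_conf = dict(gen_conf)  # don't mutate the caller's dict
    budget = gen_conf.pop("thinking_budget", 0)  # default: reasoning off
    return {
        **gen_conf,
        "thinking_config": {"thinking_budget": budget},
    }


# Default behaviour: reasoning disabled, other options preserved
cfg = build_gen_config({"temperature": 0.7})
print(cfg["thinking_config"]["thinking_budget"])  # 0

# Caller explicitly re-enables reasoning with a token budget
cfg = build_gen_config({"thinking_budget": 1024})
print(cfg["thinking_config"]["thinking_budget"])  # 1024
```

This keeps the configuration surface unchanged for existing callers while removing the >1 min default latency.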

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 10:22:51 +08:00
__init__.py Feat: add support for Anthropic third-party API (#10173) 2025-09-19 19:06:14 +08:00
chat_model.py Fix: Disable reasoning on Gemini 2.5 Flash by default (#10477) 2025-10-11 10:22:51 +08:00
cv_model.py Refactor: improve how NvidiaCV calculate res total token counts (#10455) 2025-10-10 11:03:40 +08:00
embedding_model.py Feat: Use data pipeline to visualize the parsing configuration of the knowledge base (#10423) 2025-10-09 12:36:19 +08:00
rerank_model.py Refactor: use the same implement for total token count from res (#10197) 2025-09-22 17:17:06 +08:00
sequence2txt_model.py Feat: add DeerAPI support (#10303) 2025-10-09 11:14:49 +08:00
tts_model.py Feat: add DeerAPI support (#10303) 2025-10-09 11:14:49 +08:00