ragflow/rag
N0bodycan 9863862348
fix: prevent redundant retries in async_chat_streamly upon success (#11832)
## What changes were proposed in this pull request?
Added a `return` statement after the successful completion of the `async for` loop in `async_chat_streamly`.

## Why are the changes needed?
Previously, the `try` block had no `break`/`return` after a successful stream, so the retry loop (`for attempt in range...`) kept executing even after the LLM response had been fully generated and yielded, issuing duplicate requests (up to `max_retries` times). A minimal sketch of the pattern is shown below.
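
The sketch below is illustrative only: the helper `_stream_once`, the signature, and the backoff are assumptions for demonstration, not the exact RAGFlow implementation. The `return` after the `async for` loop is the added line that exits the retry loop once the stream finishes.

```python
import asyncio

# Hypothetical stand-in for the underlying streaming LLM call (assumption,
# not the real rag/llm method).
async def _stream_once(prompt: str):
    for token in ("Hello", ", ", "world"):
        await asyncio.sleep(0)  # pretend network latency
        yield token

async def async_chat_streamly(prompt: str, max_retries: int = 3):
    """Yield streamed tokens, retrying only when the stream raises."""
    for attempt in range(max_retries):
        try:
            async for token in _stream_once(prompt):
                yield token
            # The added line: once the stream completes successfully,
            # leave the retry loop instead of re-issuing the request.
            return
        except Exception:
            if attempt == max_retries - 1:
                raise
            await asyncio.sleep(2 ** attempt)  # simple backoff between retries

if __name__ == "__main__":
    async def main():
        async for token in async_chat_streamly("hi"):
            print(token, end="", flush=True)
        print()

    asyncio.run(main())
```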

## Does this PR introduce any user-facing change?
No (it fixes an internal logic bug).
2025-12-09 17:14:30 +08:00
| Name | Last commit message | Last updated |
| --- | --- | --- |
| `app` | Refa: migrate CV model chat to Async (#11828) | 2025-12-09 13:08:37 +08:00 |
| `flow` | Refa: migrate CV model chat to Async (#11828) | 2025-12-09 13:08:37 +08:00 |
| `llm` | fix: prevent redundant retries in async_chat_streamly upon success (#11832) | 2025-12-09 17:14:30 +08:00 |
| `nlp` | Fix: parent-child chunking method (#11810) | 2025-12-09 09:34:01 +08:00 |
| `prompts` | Fix: [ERROR][Exception]: list index out of range (#11826) | 2025-12-09 09:58:34 +08:00 |
| `res` | Remove huqie.txt from RAGFlow and bump infinity to 0.6.10 (#11661) | 2025-12-04 14:53:57 +08:00 |
| `svr` | Fix: parent-child chunking method (#11810) | 2025-12-09 09:34:01 +08:00 |
| `utils` | Bump infinity to v0.6.11. Requires python>=3.11 (#11814) | 2025-12-09 16:23:37 +08:00 |
| `__init__.py` | Fix: incorrect async chat streamly output (#11679) | 2025-12-03 11:15:45 +08:00 |
| `benchmark.py` | Move api.settings to common.settings (#11036) | 2025-11-06 09:36:38 +08:00 |
| `raptor.py` | Feat: add fault-tolerant mechanism to RAPTOR (#11206) | 2025-11-13 18:48:07 +08:00 |
| `settings.py` | Move api.settings to common.settings (#11036) | 2025-11-06 09:36:38 +08:00 |