Commit graph

788 commits

Author SHA1 Message Date
Stephen Hu
92cfbcb382
Fix: support extracting local images when parsing markdown (#8906)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8902
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-18 17:06:58 +08:00
Kevin Hu
ecdb1701df
Perf: test LLM before RAPTOR. (#8897)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2025-07-17 16:48:50 +08:00
湛露先生
fd97ce3e5a
Fix S3 init config. (#8886)
### What problem does this PR solve?

When both `if 'signature_version' in self.s3_config:` and `if 'addressing_style' in self.s3_config:` are true, the config is initialized incorrectly: the first setting is overwritten by the last one (see the sketch below).

This PR fixes that case.
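
A minimal sketch of the bug pattern and the fix, assuming a botocore-style client config (variable and function names are illustrative, not the exact RAGFlow code):

```python
from botocore.client import Config

def build_config_buggy(s3_config):
    config = None
    if 'signature_version' in s3_config:
        config = Config(signature_version=s3_config['signature_version'])
    if 'addressing_style' in s3_config:
        # Overwrites the Config created above, losing signature_version.
        config = Config(s3={'addressing_style': s3_config['addressing_style']})
    return config

def build_config_fixed(s3_config):
    # Collect every option first, then build a single Config once.
    options = {}
    if 'signature_version' in s3_config:
        options['signature_version'] = s3_config['signature_version']
    if 'addressing_style' in s3_config:
        options['s3'] = {'addressing_style': s3_config['addressing_style']}
    return Config(**options) if options else None
```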

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-07-17 12:10:15 +08:00
Stephen Hu
38b34116dd
Refa: Remove useless convert and fix a bug for DefaultRerank (#8887)
### What problem does this PR solve?

1. Bug: when retrying, we need to reset `i` (see the sketch below).
2. Remove a useless convert.
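
An illustrative sketch of the reset-on-retry idea, assuming a batched scoring loop (the loop shape and names are guesses, not the DefaultRerank code):

```python
def rerank_with_retry(pairs, score_batch, batch_size=8, max_retries=3):
    for attempt in range(max_retries):
        scores = []
        i = 0  # reset on every retry, otherwise the loop resumes mid-batch
        try:
            while i < len(pairs):
                scores.extend(score_batch(pairs[i:i + batch_size]))
                i += batch_size
            return scores
        except RuntimeError:
            if attempt == max_retries - 1:
                raise
```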

### Type of change

- [x] Refactoring
2025-07-17 12:09:50 +08:00
Kevin Hu
fbd115773b
Perf: set timeout of some steps in KG. (#8873)
### What problem does this PR solve?

### Type of change


- [x] Performance Improvement
2025-07-16 18:06:03 +08:00
Liu An
9e45fcfdb3 Fix: fix typo in OpenAI error logging message (#8865)
### What problem does this PR solve?

Correct the logging message from "OpenAI cat_with_tools" to "OpenAI
chat_with_tools" in the `_exceptions` method of the `Base` class to
accurately reflect the method name and improve error traceability.

### Type of change

- [x] Typo
2025-07-16 15:31:57 +08:00
Yongteng Lei
e9b14142a5
Fix: fixed invalid save() arguments for slide thumbnails (#8851)
### What problem does this PR solve?

Fixed invalid save() arguments for slide thumbnails.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-15 17:19:45 +08:00
Kevin Hu
aa4a725529
Perf: use Redis to check if canceled. (#8853)
### What problem does this PR solve?

### Type of change

- [x] Performance Improvement
2025-07-15 17:19:27 +08:00
Kevin Hu
24c41d2a61
Perf: make do_cancel quicker. (#8846)
### What problem does this PR solve?

### Type of change

- [x] Performance Improvement
2025-07-15 14:35:00 +08:00
Stephen Hu
5fa6f2f151
Update embedding_model.py (#8836)
### What problem does this PR solve?

Remove useless convert for bge encode_queries

### Type of change

- [x] Performance Improvement
2025-07-15 14:04:58 +08:00
Yongteng Lei
51a8604dcb
Fix: fixed context loss caused by separating markdown tables from original text (#8844)
### What problem does this PR solve?

Fix context loss caused by separating markdown tables from original
text. #6871, #8804.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-15 13:03:01 +08:00
Yongteng Lei
dbc2a8689a
Fix: no chunks parsed out for Law (#8842)
### What problem does this PR solve?

Fixes no chunks parsed out for Law. #5113 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-15 13:01:56 +08:00
Kevin Hu
c642dbefca
Perf: Enhance timeout handling. (#8826)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2025-07-15 09:36:45 +08:00
Stephen Hu
ce140f1393
Fix: Better Support Table Value Type (#8822)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8782

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-14 17:51:26 +08:00
Stephen Hu
5383e254c4
Perf: Remove Useless Convert When BGE Embedding (#8816)
### What problem does this PR solve?

FlagModel internally supports returning numpy arrays, so the extra conversion is unnecessary (see the sketch below).
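
A minimal illustration, assuming FlagEmbedding's default behavior (the exact return type can vary by version):

```python
from FlagEmbedding import FlagModel
import numpy as np

model = FlagModel("BAAI/bge-large-zh-v1.5")
emb = model.encode(["hello world"])  # already a numpy array by default
redundant = np.array(emb)            # the useless convert this PR removes
```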

### Type of change
- [x] Performance Improvement
2025-07-14 14:02:48 +08:00
Stephen Hu
f569401398
Fix: better handle different types (#8775)
### What problem does this PR solve?


https://github.com/infiniflow/ragflow/issues/8719#issuecomment-3055883271

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-11 18:21:39 +08:00
Stephen Hu
2b7adbd2d1
Fix: Improve Memory Usage For Presentation (#8792)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/8791


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-11 11:35:25 +08:00
Stephen Hu
07208e519b
Fix: wrong input type for Gemini (#8783)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8763#issuecomment-3055317110

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-11 11:34:04 +08:00
Yongteng Lei
1895667573
Feat: add xAI provider (#8781)
### What problem does this PR solve?

Add xAI provider (experimental feature, requires user feedback).

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-11 10:35:23 +08:00
Kevin Hu
8281ceb406
Refa: refine retry gap. (#8773)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
- [x] Performance Improvement
2025-07-10 14:28:57 +08:00
Stephen Hu
8d027813f5
Refactor: Improve How To Handle QWenEmbed (#8765)
### What problem does this PR solve?

Based on https://github.com/infiniflow/ragflow/issues/8740:
1. Handle "'NoneType' object is not subscriptable" more gracefully (see the sketch below).
2. Add some logs to capture the internal message.
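
A hedged sketch of the defensive handling, assuming a dashscope-style response object (the attribute names are assumptions, not the exact QWenEmbed code):

```python
import logging

def extract_embeddings(resp):
    # Guard before subscripting: under rate limits resp.output can be None,
    # which would otherwise surface as a bare TypeError.
    if resp is None or resp.output is None:
        logging.error("QWen embedding returned no output: %s", resp)
        raise ValueError("QWen embedding failed; see logs for the raw response")
    return [d["embedding"] for d in resp.output["embeddings"]]
```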

### Type of change

- [x] Refactoring
2025-07-10 10:30:18 +08:00
Stephen Hu
19419281c3
Fix: Change Ollama Embedding Keep Alive (#8734)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/8733

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-09 12:17:26 +08:00
Stephen Hu
00c954755e
Fix: use the same logic to handle pos in tokenize_chunks_with_images (#8732)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8719

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-09 09:31:40 +08:00
Stephen Hu
8af0d04ad0
Refactor: Improve the logic in search.py (#8716)
### What problem does this PR solve?

1. Remove the useless pop logic, since the condition is already checked in the `if` logic.
2. Merge the log logic.

### Type of change

- [x] Refactoring
2025-07-08 12:32:01 +08:00
Stephen Hu
e60ec0a31b
Fix: disallowed special token while embedding (#8692)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8567
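
For context, a hedged guess at the class of issue the title names, assuming a tiktoken-based tokenizer (not confirmed as this PR's exact change): tiktoken raises a ValueError by default when input contains special tokens such as `<|endoftext|>`, and passing `disallowed_special=()` encodes them as plain text instead.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
# The default call raises ValueError on special tokens found in user text;
# disallowed_special=() treats them as ordinary text.
tokens = enc.encode("chunk containing <|endoftext|> markers", disallowed_special=())
```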

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-07 14:13:37 +08:00
6607changchun
9580e99650
fix: retry embedding with Qwen family models when limits are temporarily reached. (#8690)
APIs of Qwen family models are rate limited. When the limit is reached, the "output" attribute of the "resp" is None, which in turn causes a TypeError when trying to retrieve "embeddings". Since these limits are almost always temporary, I added a simple retry mechanism to avoid it. Besides, once retry_max is reached, the error is raised early instead of being hidden behind a "TypeError".

### What problem does this PR solve?

Sometimes Qwen blocks calls due to rate limits, which makes the whole parsing procedure stop when creating a knowledge base. In this situation, resp["output"] is None, so resp["output"]["embeddings"] raises a TypeError. Since the limits are temporary, I apply a simple retry mechanism to solve it (see the sketch below).
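
A hedged sketch of the retry idea, assuming a dashscope-style call (retry count, delay, and names are illustrative, not the exact PR code):

```python
import time

def embed_with_retry(call_qwen, texts, retry_max=5, delay=1.0):
    for attempt in range(retry_max):
        resp = call_qwen(texts)
        if resp is not None and resp.output is not None:
            return resp.output["embeddings"]
        time.sleep(delay * (attempt + 1))  # back off; rate limits are temporary
    # Raise early and explicitly instead of hiding behind a TypeError.
    raise RuntimeError(f"Qwen embedding failed after {retry_max} retries (rate limited?)")
```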

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-07-07 12:15:52 +08:00
Kevin Hu
1e6bda735a
Fix: add ES re-connect once a request times out. (#8678)
### What problem does this PR solve?

#8669

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-07 09:22:25 +08:00
kwrobel.eth
8a3b5d1d76
Fix a small typo in count of used fragments (#8673)
### What problem does this PR solve?

Fix a small typo in count of used fragments.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-07-04 19:46:31 +08:00
Yongteng Lei
a306a6f158
Refa: refactor prompts into markdown-style structure using Jinja2 (#8667)
### What problem does this PR solve?

Refactor prompts into markdown-style structure using Jinja2.
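
A minimal illustration of the direction, assuming standard Jinja2 rendering (the template text is made up, not an actual RAGFlow prompt):

```python
from jinja2 import Template

PROMPT = Template(
    "## Role\n"
    "You are a {{ role }}.\n\n"
    "## Task\n"
    "{{ task }}\n"
)

print(PROMPT.render(role="helpful assistant", task="Summarize the document."))
```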

### Type of change

- [x] Refactoring
2025-07-04 15:59:41 +08:00
Stephen Hu
d5f6335f99
Fix: files uploaded to a dataset created via API call failed to parse. (#8657)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8656

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-04 12:41:28 +08:00
Yongteng Lei
f8a6987f1e
Refa: automatic LLMs registration (#8651)
### What problem does this PR solve?

Support automatic LLM registration (a sketch of one such pattern follows).
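
A hedged sketch of one generic automatic-registration pattern (not necessarily the mechanism this PR uses): collect provider classes by introspection so new providers no longer need manual registry edits.

```python
import inspect
import sys

def auto_register(base_cls):
    # Map class name -> class for every subclass of base_cls defined in
    # the same module as base_cls.
    module = sys.modules[base_cls.__module__]
    return {
        name: cls
        for name, cls in inspect.getmembers(module, inspect.isclass)
        if issubclass(cls, base_cls) and cls is not base_cls
    }
```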

### Type of change

- [x] Refactoring
2025-07-03 19:05:31 +08:00
Yongteng Lei
62b63acbb5
Refa: more robust mcp tool call (#8631)
### What problem does this PR solve?

More robust MCP tool call connection.

### Type of change

- [x] Refactoring
2025-07-02 18:37:54 +08:00
Kevin Hu
fffb7c0bba
Fix: anthropic llm issue. (#8633)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-02 18:37:34 +08:00
He Wang
898da23caa
Make dirs with 'exist_ok=True' (#8629)
### What problem does this PR solve?

The following error occurred during local testing; it is fixed by passing 'exist_ok=True'.

```log
set_progress(7461edc2535c11f0a2aa0242c0a82009), progress: -1, progress_msg: 21:41:41 Page(1~100000001): [ERROR][Errno 17] File exists: '/ragflow/tmp'
```
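
The fix itself is a one-liner; a minimal sketch (the path is taken from the log line above):

```python
import os

# Idempotent: no "[Errno 17] File exists" if the directory already exists,
# e.g. when two workers race to create it.
os.makedirs("/ragflow/tmp", exist_ok=True)
```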

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-02 18:35:16 +08:00
He Wang
695bfe34a2
fix opendal config 'oss_table' and 'max_allowed_packet' (#8611)
### What problem does this PR solve?

Fix the config option name for the opendal table and the setting of
'max_allowed_packet'.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: He Wang <wanghechn@qq.com>
2025-07-02 16:45:01 +08:00
Tuan Le
d343cb4deb
Add Google Cloud Vision API Integration (Image2Text) (#8608)
### What problem does this PR solve?

This PR introduces Google Cloud Vision API integration to enhance image
understanding capabilities in the application. It addresses the need for
advanced image description and chat functionalities by implementing a
new `GoogleCV` class to handle API interactions and updating relevant
configurations. This enables users to leverage Google Cloud Vision for
image-to-text tasks, improving the application's ability to process and
interpret visual data.
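
For orientation, a hedged sketch of an image-to-text call through the standard Google Cloud Vision client (the PR's `GoogleCV` class presumably wraps something similar; its actual interface is not shown here):

```python
from google.cloud import vision

def describe_image(image_bytes: bytes) -> str:
    client = vision.ImageAnnotatorClient()
    image = vision.Image(content=image_bytes)
    # Label detection returns a ranked list of descriptions for the image.
    response = client.label_detection(image=image)
    return ", ".join(label.description for label in response.label_annotations)
```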

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-02 10:02:01 +08:00
wenxuan.zhang
f586dd0a96
Fix: docx parse error. (#8600)
### What problem does this PR solve?

Some docx files parsed with the naive parser cause an error: `block.style.name` in the function `__get_nearest_title` can be None in some cases.

![image](https://github.com/user-attachments/assets/efbe6d1b-10c8-415e-b693-a86f73e1ffa6)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
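
A minimal sketch of the kind of guard that avoids the crash, assuming python-docx objects (the surrounding `__get_nearest_title` logic is not reproduced):

```python
def safe_style_name(block) -> str:
    # python-docx can yield blocks whose style (or style.name) is None;
    # fall back to an empty string so callers can match on it safely.
    style = getattr(block, "style", None)
    return style.name if style is not None and style.name else ""
```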

Co-authored-by: wenxuan.zhang <wenxuan.zhang@chinacreator.com>
2025-07-01 17:38:11 +08:00
Tuan Le
1c77b4ed9b
fix: Correctly format message parts in GoogleChat (#8596)
### What problem does this PR solve?

This PR addresses an incompatibility issue with the Google Chat API by
correcting the message content format in the `GoogleChat` class.
Previously, the content was directly assigned to the "parts" field,
which did not align with the API's expected format. This change ensures
that messages are properly formatted with a "text" key within a
dictionary, as required by the API.
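
An illustration of the payload change described above (message shapes only, not the exact `GoogleChat` code):

```python
# Before: the content string was assigned to "parts" directly,
# which the Google Chat API rejects.
bad = {"role": "user", "parts": "Hello"}

# After: each part is a dictionary with a "text" key, as the API expects.
good = {"role": "user", "parts": [{"text": "Hello"}]}
```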

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-01 14:06:07 +08:00
Kevin Hu
e3edcc3064
Trivial fixes. (#8597)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-01 14:05:18 +08:00
symvation
32f8b3ad77
Fix: the output log is incorrect (#8577)
### What problem does this PR solve?

Fix: the output log is incorrect

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liang <xiaofeng.liang@landstech.com.cn>
2025-07-01 10:49:43 +08:00
Yongteng Lei
8801de2772
Refa: change mcp_client module to rag/utils/conn (#8578)
### What problem does this PR solve?

Move the mcp_client module to rag/utils/conn.

### Type of change

- [x] Refactoring
2025-07-01 09:29:19 +08:00
Kevin Hu
d46c24045f
Feat: add GiteeAI as an LLM provider. (#8572)
### What problem does this PR solve?

#1853

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-06-30 11:22:11 +08:00
Kevin Hu
aafeffa292
Feat: add Gitee as an LLM provider. (#8545)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-06-30 09:22:31 +08:00
Kevin Hu
e441c17c2c
Refa: limit embedding concurrency and fix chat_with_tool (#8543)
### What problem does this PR solve?

#8538

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2025-06-27 19:28:41 +08:00
Kevin Hu
a10f05f4d7
Fix: chat with tools bug. (#8528)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-27 12:10:53 +08:00
Tuan Le
303c6dd1a8
Fix memory leaks in PIL image and BytesIO handling during chunk processing (#8522)
### What problem does this PR solve?
This PR addresses critical memory leaks in the task executor's image
processing pipeline. The current implementation fails to properly
dispose of PIL Image objects and BytesIO buffers during chunk
processing, leading to progressive memory accumulation that can cause
the task executor to consume excessive memory over time.

### Background context
- The `upload_to_minio` function processes images from document chunks
and converts them to JPEG format for storage.
- PIL Image objects hold significant memory resources that must be
explicitly closed to prevent memory leaks.
- BytesIO objects also consume memory and should be properly disposed of
after use.
- In high-throughput scenarios with many image-containing documents,
these memory leaks can lead to out-of-memory errors and degraded
performance.

### Specific issues fixed
- PIL Image objects were not being explicitly closed after processing.
- BytesIO buffers lacked proper cleanup in all code paths.
- Converted images (RGBA/P to RGB) were not disposing of the original
image object.
- Memory references to large image data were not being cleared promptly.

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Performance Improvement


### Changes made
- Added explicit `d["image"].close()` calls after image processing
operations.
- Implemented proper cleanup of converted images when changing formats
from RGBA/P to RGB.
- Enhanced BytesIO cleanup with `try/finally` blocks to ensure disposal
in all code paths.
- Added explicit `del d["image"]` to clear memory references after
processing.

This fix ensures stable memory usage during long-running document
processing tasks and prevents potential out-of-memory conditions in
production environments.
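
A condensed sketch of the cleanup pattern listed above, assuming a chunk dict with an `"image"` key as described (the upload call is a stand-in):

```python
from io import BytesIO

def store_chunk_image(d: dict, upload_bytes) -> None:
    img = d["image"]
    if img.mode in ("RGBA", "P"):
        converted = img.convert("RGB")
        img.close()          # dispose of the original after conversion
        img = converted
    buf = BytesIO()
    try:
        img.save(buf, format="JPEG")
        upload_bytes(buf.getvalue())
    finally:
        buf.close()          # ensure disposal on every code path
        img.close()
        del d["image"]       # clear the reference so memory is reclaimed promptly
```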
2025-06-27 10:23:21 +08:00
Stephen Hu
be712714af
Refactor: improve the logic to check cancel (#8524)
### What problem does this PR solve?

Improve the logic for checking whether a task has been canceled.

### Type of change

- [x] Refactoring

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-06-27 10:22:53 +08:00
Kevin Hu
6d256ff0f5
Perf: ignore concatenation between rows. (#8507)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2025-06-26 14:55:37 +08:00
Tuan Le
6b1221d2f6
Fix parser_config access for layout_recognize in presentation.py (#8492)
### What problem does this PR solve?
This PR addresses an issue in the presentation parser where the
`layout_recognize` configuration was incorrectly retrieved from
`kwargs.get("layout_recognize", "DeepDOC")`. Instead, it should be
sourced from the `parser_config` parameter, specifically
`parser_config.get("layout_recognize", "DeepDOC")`.

This mismatch could cause the parser to default to the "DeepDOC" layout
recognizer, ignoring any alternative recognition method specified in the
parser configuration. As a result, PDF document parsing might use an
incorrect recognition engine.

The fix ensures the presentation parser consistently uses the
`layout_recognize` setting from `parser_config`, aligning with the
configuration access patterns used elsewhere in the codebase.
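
A minimal before/after illustration of the change described above (the local variable name is assumed):

```python
# Before: read from kwargs, so a configured recognizer was ignored.
layout_recognizer = kwargs.get("layout_recognize", "DeepDOC")

# After: read from parser_config, matching the rest of the codebase.
layout_recognizer = parser_config.get("layout_recognize", "DeepDOC")
```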

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-26 11:54:43 +08:00
Rainman
340354b79c
fix the error 'Unknown field for GenerationConfig: max_tokens' when u… (#8473)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/8324

docker image version: v0.19.1

The `_clean_conf` function was not applied in the `_chat` and
`chat_streamly` methods of the `GeminiChat` class, causing the error
"Unknown field for GenerationConfig: max_tokens" when the default LLM
config includes the "max_tokens" parameter.

**Buggy code (ragflow/rag/llm/chat_model.py)**
```python
class GeminiChat(Base):
    def __init__(self, key, model_name, base_url=None, **kwargs):
        super().__init__(key, model_name, base_url=base_url, **kwargs)

        from google.generativeai import GenerativeModel, client

        client.configure(api_key=key)
        _client = client.get_default_generative_client()
        self.model_name = "models/" + model_name
        self.model = GenerativeModel(model_name=self.model_name)
        self.model._client = _client

    def _clean_conf(self, gen_conf):
        for k in list(gen_conf.keys()):
            if k not in ["temperature", "top_p"]:
                del gen_conf[k]
        return gen_conf

    def _chat(self, history, gen_conf):
        from google.generativeai.types import content_types

        system = history[0]["content"] if history and history[0]["role"] == "system" else ""
        hist = []
        for item in history:
            if item["role"] == "system":
                continue
            hist.append(deepcopy(item))
            item = hist[-1]
            if "role" in item and item["role"] == "assistant":
                item["role"] = "model"
            if "role" in item and item["role"] == "system":
                item["role"] = "user"
            if "content" in item:
                item["parts"] = item.pop("content")

        if system:
            self.model._system_instruction = content_types.to_content(system)
        response = self.model.generate_content(hist, generation_config=gen_conf)
        ans = response.text
        return ans, response.usage_metadata.total_token_count

    def chat_streamly(self, system, history, gen_conf):
        from google.generativeai.types import content_types

        if system:
            self.model._system_instruction = content_types.to_content(system)
        # _clean_conf was not applied here
        for k in list(gen_conf.keys()):
            if k not in ["temperature", "top_p", "max_tokens"]:
                del gen_conf[k]
        for item in history:
            if "role" in item and item["role"] == "assistant":
                item["role"] = "model"
            if "content" in item:
                item["parts"] = item.pop("content")
        ans = ""
        try:
            response = self.model.generate_content(history, generation_config=gen_conf, stream=True)
            for resp in response:
                ans = resp.text
                yield ans

            yield response._chunks[-1].usage_metadata.total_token_count
        except Exception as e:
            yield ans + "\n**ERROR**: " + str(e)

        yield 0
```
**Fixed code: apply the `_clean_conf` function**
```python
class GeminiChat(Base):
    def __init__(self, key, model_name, base_url=None, **kwargs):
        super().__init__(key, model_name, base_url=base_url, **kwargs)

        from google.generativeai import GenerativeModel, client

        client.configure(api_key=key)
        _client = client.get_default_generative_client()
        self.model_name = "models/" + model_name
        self.model = GenerativeModel(model_name=self.model_name)
        self.model._client = _client

    def _clean_conf(self, gen_conf):
        for k in list(gen_conf.keys()):
            if k not in ["temperature", "top_p"]:
                del gen_conf[k]
        return gen_conf

    def _chat(self, history, gen_conf):
        from google.generativeai.types import content_types
        # apply _clean_conf to drop unsupported parameters
        gen_conf = self._clean_conf(gen_conf)

        system = history[0]["content"] if history and history[0]["role"] == "system" else ""
        hist = []
        for item in history:
            if item["role"] == "system":
                continue
            hist.append(deepcopy(item))
            item = hist[-1]
            if "role" in item and item["role"] == "assistant":
                item["role"] = "model"
            if "role" in item and item["role"] == "system":
                item["role"] = "user"
            if "content" in item:
                item["parts"] = item.pop("content")

        if system:
            self.model._system_instruction = content_types.to_content(system)
        response = self.model.generate_content(hist, generation_config=gen_conf)
        ans = response.text
        return ans, response.usage_metadata.total_token_count

    def chat_streamly(self, system, history, gen_conf):
        from google.generativeai.types import content_types
        # apply _clean_conf to drop unsupported parameters
        gen_conf = self._clean_conf(gen_conf)

        if system:
            self.model._system_instruction = content_types.to_content(system)
        # The duplicated parameter-filtering loop was removed; _clean_conf handles it now.
        for item in history:
            if "role" in item and item["role"] == "assistant":
                item["role"] = "model"
            if "content" in item:
                item["parts"] = item.pop("content")
        ans = ""
        try:
            response = self.model.generate_content(history, generation_config=gen_conf, stream=True)
            for resp in response:
                ans = resp.text
                yield ans

            yield response._chunks[-1].usage_metadata.total_token_count
        except Exception as e:
            yield ans + "\n**ERROR**: " + str(e)

        yield 0
```

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-06-25 16:23:35 +08:00