Merge branch 'main' of github.com:infiniflow/ragflow into feature/1111
This commit is contained in:
commit
d148aad82b
36 changed files with 855 additions and 234 deletions
|
|
@ -86,6 +86,7 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
|
|||
|
||||
## 🔥 Latest Updates

- 2025-11-12 Supports data synchronization from Confluence, AWS S3, Discord, and Google Drive.
- 2025-10-23 Supports MinerU & Docling as document parsing methods.
- 2025-10-15 Supports an orchestrable ingestion pipeline.
- 2025-08-08 Supports OpenAI's latest GPT-5 series models.
|
|
@ -93,7 +94,6 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
|
|||
- 2025-05-23 Adds a Python/JavaScript code executor component to Agent.
- 2025-05-05 Supports cross-language query.
- 2025-03-19 Supports using a multi-modal model to make sense of images within PDF or DOCX files.
- 2025-02-28 Combined with Internet search (Tavily), supports reasoning like Deep Research for any LLM.
- 2024-12-18 Upgrades the Document Layout Analysis model in DeepDoc.
- 2024-08-22 Supports converting text to SQL statements through RAG.
|
||||
|
|
@ -193,6 +193,9 @@ releases! 🌟
|
|||
|
||||
```bash
$ cd ragflow/docker

# Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases), e.g.: git checkout v0.21.1

# Use CPU for embedding and DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
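# To use GPU to accelerate embedding and DeepDoc tasks (these lines follow in the full file, as shown in the other README hunks later in this diff):
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d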
|
||||
|
||||
|
|
|
|||
|
|
@ -86,6 +86,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
|
|||
|
||||
## 🔥 Pembaruan Terbaru
|
||||
|
||||
- 2025-11-12 Mendukung sinkronisasi data dari Confluence, AWS S3, Discord, Google Drive.
|
||||
- 2025-10-23 Mendukung MinerU & Docling sebagai metode penguraian dokumen.
|
||||
- 2025-10-15 Dukungan untuk jalur data yang terorkestrasi.
|
||||
- 2025-08-08 Mendukung model seri GPT-5 terbaru dari OpenAI.
|
||||
|
|
@ -93,7 +94,6 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
|
|||
- 2025-05-23 Menambahkan komponen pelaksana kode Python/JS ke Agen.
|
||||
- 2025-05-05 Mendukung kueri lintas bahasa.
|
||||
- 2025-03-19 Mendukung penggunaan model multi-modal untuk memahami gambar di dalam file PDF atau DOCX.
|
||||
- 2025-02-28 dikombinasikan dengan pencarian Internet (TAVILY), mendukung penelitian mendalam untuk LLM apa pun.
|
||||
- 2024-12-18 Meningkatkan model Analisis Tata Letak Dokumen di DeepDoc.
|
||||
- 2024-08-22 Dukungan untuk teks ke pernyataan SQL melalui RAG.
|
||||
|
||||
|
|
@ -191,6 +191,9 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
|
|||
|
||||
```bash
|
||||
$ cd ragflow/docker
|
||||
|
||||
# Opsional: gunakan tag stabil (lihat releases: https://github.com/infiniflow/ragflow/releases), contoh: git checkout v0.21.1
|
||||
|
||||
# Use CPU for embedding and DeepDoc tasks:
|
||||
$ docker compose -f docker-compose.yml up -d
|
||||
|
||||
|
|
|
|||
|
|
@ -66,6 +66,7 @@
|
|||
|
||||
## 🔥 最新情報
|
||||
|
||||
- 2025-11-12 Confluence、AWS S3、Discord、Google Drive からのデータ同期をサポートします。
|
||||
- 2025-10-23 ドキュメント解析方法として MinerU と Docling をサポートします。
|
||||
- 2025-10-15 オーケストレーションされたデータパイプラインのサポート。
|
||||
- 2025-08-08 OpenAI の最新 GPT-5 シリーズモデルをサポートします。
|
||||
|
|
@ -73,7 +74,6 @@
|
|||
- 2025-05-23 エージェントに Python/JS コードエグゼキュータコンポーネントを追加しました。
|
||||
- 2025-05-05 言語間クエリをサポートしました。
|
||||
- 2025-03-19 PDFまたはDOCXファイル内の画像を理解するために、多モーダルモデルを使用することをサポートします。
|
||||
- 2025-02-28 インターネット検索 (TAVILY) と組み合わせて、あらゆる LLM の詳細な調査をサポートします。
|
||||
- 2024-12-18 DeepDoc のドキュメント レイアウト分析モデルをアップグレードします。
|
||||
- 2024-08-22 RAG を介して SQL ステートメントへのテキストをサポートします。
|
||||
|
||||
|
|
@ -170,6 +170,9 @@
|
|||
|
||||
```bash
|
||||
$ cd ragflow/docker
|
||||
|
||||
# 任意: 安定版タグを利用 (一覧: https://github.com/infiniflow/ragflow/releases) 例: git checkout v0.21.1
|
||||
|
||||
# Use CPU for embedding and DeepDoc tasks:
|
||||
$ docker compose -f docker-compose.yml up -d
|
||||
|
||||
|
|
@ -177,6 +180,7 @@
|
|||
# sed -i '1i DEVICE=gpu' .env
|
||||
# docker compose -f docker-compose.yml up -d
|
||||
```
|
||||
|
||||
|
||||
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
|
||||
| ----------------- | --------------- | --------------------- | -------------------------- |
|
||||
|
|
|
|||
|
|
@ -67,6 +67,7 @@
|
|||
|
||||
## 🔥 업데이트
|
||||
|
||||
- 2025-11-12 Confluence, AWS S3, Discord, Google Drive에서 데이터 동기화를 지원합니다.
|
||||
- 2025-10-23 문서 파싱 방법으로 MinerU 및 Docling을 지원합니다.
|
||||
- 2025-10-15 조정된 데이터 파이프라인 지원.
|
||||
- 2025-08-08 OpenAI의 최신 GPT-5 시리즈 모델을 지원합니다.
|
||||
|
|
@ -74,7 +75,6 @@
|
|||
- 2025-05-23 Agent에 Python/JS 코드 실행기 구성 요소를 추가합니다.
|
||||
- 2025-05-05 언어 간 쿼리를 지원합니다.
|
||||
- 2025-03-19 PDF 또는 DOCX 파일 내의 이미지를 이해하기 위해 다중 모드 모델을 사용하는 것을 지원합니다.
|
||||
- 2025-02-28 인터넷 검색(TAVILY)과 결합되어 모든 LLM에 대한 심층 연구를 지원합니다.
|
||||
- 2024-12-18 DeepDoc의 문서 레이아웃 분석 모델 업그레이드.
|
||||
- 2024-08-22 RAG를 통해 SQL 문에 텍스트를 지원합니다.
|
||||
|
||||
|
|
@ -172,6 +172,9 @@
|
|||
|
||||
```bash
|
||||
$ cd ragflow/docker
|
||||
|
||||
# Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases), e.g.: git checkout v0.21.1
|
||||
|
||||
# Use CPU for embedding and DeepDoc tasks:
|
||||
$ docker compose -f docker-compose.yml up -d
|
||||
|
||||
|
|
|
|||
|
|
@ -86,6 +86,7 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
|
|||
|
||||
## 🔥 Últimas Atualizações
|
||||
|
||||
- 12-11-2025 Suporta a sincronização de dados do Confluence, AWS S3, Discord e Google Drive.
|
||||
- 23-10-2025 Suporta MinerU e Docling como métodos de análise de documentos.
|
||||
- 15-10-2025 Suporte para pipelines de dados orquestrados.
|
||||
- 08-08-2025 Suporta a mais recente série GPT-5 da OpenAI.
|
||||
|
|
@ -93,7 +94,6 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
|
|||
- 23-05-2025 Adicione o componente executor de código Python/JS ao Agente.
|
||||
- 05-05-2025 Suporte a consultas entre idiomas.
|
||||
- 19-03-2025 Suporta o uso de um modelo multi-modal para entender imagens dentro de arquivos PDF ou DOCX.
|
||||
- 28-02-2025 Combinado com a pesquisa na Internet (Tavily), suporta pesquisas profundas para qualquer LLM.
|
||||
- 18-12-2024 Atualiza o modelo de Análise de Layout de Documentos no DeepDoc.
|
||||
- 22-08-2024 Suporta conversão de texto para comandos SQL via RAG.
|
||||
|
||||
|
|
@ -190,6 +190,9 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
|
|||
|
||||
```bash
|
||||
$ cd ragflow/docker
|
||||
|
||||
# Opcional: use uma tag estável (veja releases: https://github.com/infiniflow/ragflow/releases), ex.: git checkout v0.21.1
|
||||
|
||||
# Use CPU for embedding and DeepDoc tasks:
|
||||
$ docker compose -f docker-compose.yml up -d
|
||||
|
||||
|
|
|
|||
|
|
@ -85,6 +85,7 @@
|
|||
|
||||
## 🔥 近期更新
|
||||
|
||||
- 2025-11-12 支援從 Confluence、AWS S3、Discord、Google Drive 進行資料同步。
|
||||
- 2025-10-23 支援 MinerU 和 Docling 作為文件解析方法。
|
||||
- 2025-10-15 支援可編排的資料管道。
|
||||
- 2025-08-08 支援 OpenAI 最新的 GPT-5 系列模型。
|
||||
|
|
@ -92,7 +93,6 @@
|
|||
- 2025-05-23 為 Agent 新增 Python/JS 程式碼執行器元件。
|
||||
- 2025-05-05 支援跨語言查詢。
|
||||
- 2025-03-19 PDF和DOCX中的圖支持用多模態大模型去解析得到描述.
|
||||
- 2025-02-28 結合網路搜尋(Tavily),對於任意大模型實現類似 Deep Research 的推理功能.
|
||||
- 2024-12-18 升級了 DeepDoc 的文檔佈局分析模型。
|
||||
- 2024-08-22 支援用 RAG 技術實現從自然語言到 SQL 語句的轉換。
|
||||
|
||||
|
|
@ -189,6 +189,9 @@
|
|||
|
||||
```bash
|
||||
$ cd ragflow/docker
|
||||
|
||||
# 可選:使用穩定版標籤(查看發佈:https://github.com/infiniflow/ragflow/releases),例:git checkout v0.21.1
|
||||
|
||||
# Use CPU for embedding and DeepDoc tasks:
|
||||
$ docker compose -f docker-compose.yml up -d
|
||||
|
||||
|
|
|
|||
22
README_zh.md
|
|
@ -22,7 +22,7 @@
|
|||
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
|
||||
</a>
|
||||
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
|
||||
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.1">
|
||||
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.2">
|
||||
</a>
|
||||
<a href="https://github.com/infiniflow/ragflow/releases/latest">
|
||||
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
|
||||
|
|
@ -85,6 +85,7 @@
|
|||
|
||||
## 🔥 近期更新
|
||||
|
||||
- 2025-11-12 支持从 Confluence、AWS S3、Discord、Google Drive 进行数据同步。
|
||||
- 2025-10-23 支持 MinerU 和 Docling 作为文档解析方法。
|
||||
- 2025-10-15 支持可编排的数据管道。
|
||||
- 2025-08-08 支持 OpenAI 最新的 GPT-5 系列模型。
|
||||
|
|
@ -92,7 +93,6 @@
|
|||
- 2025-05-23 Agent 新增 Python/JS 代码执行器组件。
|
||||
- 2025-05-05 支持跨语言查询。
|
||||
- 2025-03-19 PDF 和 DOCX 中的图支持用多模态大模型去解析得到描述.
|
||||
- 2025-02-28 结合互联网搜索(Tavily),对于任意大模型实现类似 Deep Research 的推理功能.
|
||||
- 2024-12-18 升级了 DeepDoc 的文档布局分析模型。
|
||||
- 2024-08-22 支持用 RAG 技术实现从自然语言到 SQL 语句的转换。
|
||||
|
||||
|
|
@ -186,25 +186,29 @@
|
|||
> 请注意,目前官方提供的所有 Docker 镜像均基于 x86 架构构建,并不提供基于 ARM64 的 Docker 镜像。
|
||||
> 如果你的操作系统是 ARM64 架构,请参考[这篇文档](https://ragflow.io/docs/dev/build_docker_image)自行构建 Docker 镜像。
|
||||
|
||||
> 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.21.1`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.21.1` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。
|
||||
> 运行以下命令会自动下载 RAGFlow Docker 镜像 `v0.22.0`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.22.0` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。
|
||||
|
||||
```bash
|
||||
$ cd ragflow/docker
|
||||
# Use CPU for embedding and DeepDoc tasks:
|
||||
|
||||
# 可选:使用稳定版本标签(查看发布:https://github.com/infiniflow/ragflow/releases),例如:git checkout v0.22.0
|
||||
|
||||
# Use CPU for DeepDoc tasks:
|
||||
$ docker compose -f docker-compose.yml up -d
|
||||
|
||||
# To use GPU to accelerate embedding and DeepDoc tasks:
|
||||
# To use GPU to accelerate DeepDoc tasks:
|
||||
# sed -i '1i DEVICE=gpu' .env
|
||||
# docker compose -f docker-compose.yml up -d
|
||||
```
|
||||
|
||||
> 注意:在 `v0.22.0` 之前的版本,我们会同时提供包含 embedding 模型的镜像和不含 embedding 模型的 slim 镜像。具体如下:
|
||||
|
||||
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
|
||||
| ----------------- | --------------- | --------------------- | ------------------------ |
|
||||
| v0.21.1 | ≈9 | ✔️ | Stable release |
|
||||
| v0.21.1-slim | ≈2 | ❌ | Stable release |
|
||||
| nightly | ≈2 | ❌ | _Unstable_ nightly build |
|
||||
|
||||
> 注意:从 `v0.22.0` 开始,我们只发布 slim 版本,并且不再在镜像标签后附加 **-slim** 后缀。
|
||||
|
||||
> 从 `v0.22.0` 开始,我们只发布 slim 版本,并且不再在镜像标签后附加 **-slim** 后缀。
|
||||
|
||||
> [!TIP]
|
||||
> 如果你遇到 Docker 镜像拉不下来的问题,可以在 **docker/.env** 文件内根据变量 `RAGFLOW_IMAGE` 的注释提示选择华为云或者阿里云的相应镜像。
|
||||
|
|
@ -284,7 +288,7 @@ RAGFlow 默认使用 Elasticsearch 存储文本和向量数据. 如果要切换
|
|||
> [!WARNING]
|
||||
> Infinity 目前官方并未正式支持在 Linux/arm64 架构下的机器上运行.
|
||||
|
||||
## 🔧 源码编译 Docker 镜像(不含 embedding 模型)
|
||||
## 🔧 源码编译 Docker 镜像
|
||||
|
||||
本 Docker 镜像大小约 2 GB 左右并且依赖外部的大模型和 embedding 服务。
|
||||
|
||||
|
|
|
|||
|
|
@ -349,7 +349,7 @@ class Canvas(Graph):
|
|||
i += 1
|
||||
else:
|
||||
for _, ele in cpn.get_input_elements().items():
|
||||
if isinstance(ele, dict) and ele.get("_cpn_id") and ele.get("_cpn_id") not in self.path[:i]:
|
||||
if isinstance(ele, dict) and ele.get("_cpn_id") and ele.get("_cpn_id") not in self.path[:i] and self.path[0].lower().find("userfillup") < 0:
|
||||
self.path.pop(i)
|
||||
t -= 1
|
||||
break
|
||||
|
|
|
|||
|
|
@ -47,6 +47,7 @@ class DataOperations(ComponentBase,ABC):
|
|||
inputs = [inputs]
|
||||
for input_ref in inputs:
|
||||
input_object=self._canvas.get_variable_value(input_ref)
|
||||
self.set_input_value(input_ref, input_object)
|
||||
if input_object is None:
|
||||
continue
|
||||
if isinstance(input_object,dict):
|
||||
|
|
|
|||
519
agent/templates/user_interaction.json
Normal file
519
agent/templates/user_interaction.json
Normal file
|
|
@ -0,0 +1,519 @@
|
|||
{
|
||||
"id": 27,
|
||||
"title": {
|
||||
"en": "Interacting with the Agent",
|
||||
"zh": "用户与 Agent 交互"
|
||||
},
|
||||
"description": {
|
||||
"en": "During the Agent’s execution, users can actively intervene and interact with the Agent to adjust or guide its output, ensuring the final result aligns with their intentions.",
|
||||
"zh": "在 Agent 的运行过程中,用户可以随时介入,与 Agent 进行交互,以调整或引导生成结果,使最终输出更符合预期。"
|
||||
},
|
||||
"canvas_type": "Agent",
|
||||
"dsl": {
|
||||
"components": {
|
||||
"Agent:LargeFliesMelt": {
|
||||
"downstream": [
|
||||
"UserFillUp:GoldBroomsRelate"
|
||||
],
|
||||
"obj": {
|
||||
"component_name": "Agent",
|
||||
"params": {
|
||||
"cite": true,
|
||||
"delay_after_error": 1,
|
||||
"description": "",
|
||||
"exception_default_value": "",
|
||||
"exception_goto": [],
|
||||
"exception_method": "",
|
||||
"frequencyPenaltyEnabled": false,
|
||||
"frequency_penalty": 0.7,
|
||||
"llm_id": "qwen-turbo@Tongyi-Qianwen",
|
||||
"maxTokensEnabled": false,
|
||||
"max_retries": 3,
|
||||
"max_rounds": 1,
|
||||
"max_tokens": 256,
|
||||
"mcp": [],
|
||||
"message_history_window_size": 12,
|
||||
"outputs": {
|
||||
"content": {
|
||||
"type": "string",
|
||||
"value": ""
|
||||
},
|
||||
"structured": {}
|
||||
},
|
||||
"presencePenaltyEnabled": false,
|
||||
"presence_penalty": 0.4,
|
||||
"prompts": [
|
||||
{
|
||||
"content": "User query:{sys.query}",
|
||||
"role": "user"
|
||||
}
|
||||
],
|
||||
"sys_prompt": "<role>\nYou are the Planning Agent in a multi-agent RAG workflow.\nYour sole job is to design a crisp, executable Search Plan for the next agent. Do not search or answer the user’s question.\n</role>\n<objectives>\nUnderstand the user’s task and decompose it into evidence-seeking steps.\nProduce high-quality queries and retrieval settings tailored to the task type (fact lookup, multi-hop reasoning, comparison, statistics, how-to, etc.).\nIdentify missing information that would materially change the plan (≤3 concise questions).\nOptimize for source trustworthiness, diversity, and recency; define stopping criteria to avoid over-searching.\nAnswer in 150 words.\n<objectives>",
|
||||
"temperature": 0.1,
|
||||
"temperatureEnabled": false,
|
||||
"tools": [],
|
||||
"topPEnabled": false,
|
||||
"top_p": 0.3,
|
||||
"user_prompt": "",
|
||||
"visual_files_var": ""
|
||||
}
|
||||
},
|
||||
"upstream": [
|
||||
"begin"
|
||||
]
|
||||
},
|
||||
"Agent:TangyWordsType": {
|
||||
"downstream": [
|
||||
"Message:FreshWallsStudy"
|
||||
],
|
||||
"obj": {
|
||||
"component_name": "Agent",
|
||||
"params": {
|
||||
"cite": true,
|
||||
"delay_after_error": 1,
|
||||
"description": "",
|
||||
"exception_default_value": "",
|
||||
"exception_goto": [],
|
||||
"exception_method": "",
|
||||
"frequencyPenaltyEnabled": false,
|
||||
"frequency_penalty": 0.7,
|
||||
"llm_id": "qwen-turbo@Tongyi-Qianwen",
|
||||
"maxTokensEnabled": false,
|
||||
"max_retries": 3,
|
||||
"max_rounds": 1,
|
||||
"max_tokens": 256,
|
||||
"mcp": [],
|
||||
"message_history_window_size": 12,
|
||||
"outputs": {
|
||||
"content": {
|
||||
"type": "string",
|
||||
"value": ""
|
||||
},
|
||||
"structured": {}
|
||||
},
|
||||
"presencePenaltyEnabled": false,
|
||||
"presence_penalty": 0.4,
|
||||
"prompts": [
|
||||
{
|
||||
"content": "Search Plan: {Agent:LargeFliesMelt@content}\n\n\n\nAwait Response feedback:{UserFillUp:GoldBroomsRelate@instructions}\n",
|
||||
"role": "user"
|
||||
}
|
||||
],
|
||||
"sys_prompt": "<role>\nYou are the Search Agent.\nYour job is to execute the approved Search Plan, integrate the Await Response feedback, retrieve evidence, and produce a well-grounded answer.\n</role>\n<objectives>\nTranslate the plan + feedback into concrete searches.\nCollect diverse, trustworthy, and recent evidence meeting the plan’s evidence bar.\nSynthesize a concise answer; include citations next to claims they support.\nIf evidence is insufficient or conflicting, clearly state limitations and propose next steps.\n</objectives>\n <tools>\nRetrieval: You must use Retrieval to do the search.\n </tools>\n",
|
||||
"temperature": 0.1,
|
||||
"temperatureEnabled": false,
|
||||
"tools": [
|
||||
{
|
||||
"component_name": "Retrieval",
|
||||
"name": "Retrieval",
|
||||
"params": {
|
||||
"cross_languages": [],
|
||||
"description": "",
|
||||
"empty_response": "",
|
||||
"kb_ids": [],
|
||||
"keywords_similarity_weight": 0.7,
|
||||
"outputs": {
|
||||
"formalized_content": {
|
||||
"type": "string",
|
||||
"value": ""
|
||||
},
|
||||
"json": {
|
||||
"type": "Array<Object>",
|
||||
"value": []
|
||||
}
|
||||
},
|
||||
"rerank_id": "",
|
||||
"similarity_threshold": 0.2,
|
||||
"toc_enhance": false,
|
||||
"top_k": 1024,
|
||||
"top_n": 8,
|
||||
"use_kg": false
|
||||
}
|
||||
}
|
||||
],
|
||||
"topPEnabled": false,
|
||||
"top_p": 0.3,
|
||||
"user_prompt": "",
|
||||
"visual_files_var": ""
|
||||
}
|
||||
},
|
||||
"upstream": [
|
||||
"UserFillUp:GoldBroomsRelate"
|
||||
]
|
||||
},
|
||||
"Message:FreshWallsStudy": {
|
||||
"downstream": [],
|
||||
"obj": {
|
||||
"component_name": "Message",
|
||||
"params": {
|
||||
"content": [
|
||||
"{Agent:TangyWordsType@content}"
|
||||
]
|
||||
}
|
||||
},
|
||||
"upstream": [
|
||||
"Agent:TangyWordsType"
|
||||
]
|
||||
},
|
||||
"UserFillUp:GoldBroomsRelate": {
|
||||
"downstream": [
|
||||
"Agent:TangyWordsType"
|
||||
],
|
||||
"obj": {
|
||||
"component_name": "UserFillUp",
|
||||
"params": {
|
||||
"enable_tips": true,
|
||||
"inputs": {
|
||||
"instructions": {
|
||||
"name": "instructions",
|
||||
"optional": false,
|
||||
"options": [],
|
||||
"type": "paragraph"
|
||||
}
|
||||
},
|
||||
"outputs": {
|
||||
"instructions": {
|
||||
"name": "instructions",
|
||||
"optional": false,
|
||||
"options": [],
|
||||
"type": "paragraph"
|
||||
}
|
||||
},
|
||||
"tips": "Here is my search plan:\n{Agent:LargeFliesMelt@content}\nAre you okay with it?"
|
||||
}
|
||||
},
|
||||
"upstream": [
|
||||
"Agent:LargeFliesMelt"
|
||||
]
|
||||
},
|
||||
"begin": {
|
||||
"downstream": [
|
||||
"Agent:LargeFliesMelt"
|
||||
],
|
||||
"obj": {
|
||||
"component_name": "Begin",
|
||||
"params": {}
|
||||
},
|
||||
"upstream": []
|
||||
}
|
||||
},
|
||||
"globals": {
|
||||
"sys.conversation_turns": 0,
|
||||
"sys.files": [],
|
||||
"sys.query": "",
|
||||
"sys.user_id": ""
|
||||
},
|
||||
"graph": {
|
||||
"edges": [
|
||||
{
|
||||
"data": {
|
||||
"isHovered": false
|
||||
},
|
||||
"id": "xy-edge__beginstart-Agent:LargeFliesMeltend",
|
||||
"source": "begin",
|
||||
"sourceHandle": "start",
|
||||
"target": "Agent:LargeFliesMelt",
|
||||
"targetHandle": "end"
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"isHovered": false
|
||||
},
|
||||
"id": "xy-edge__Agent:LargeFliesMeltstart-UserFillUp:GoldBroomsRelateend",
|
||||
"source": "Agent:LargeFliesMelt",
|
||||
"sourceHandle": "start",
|
||||
"target": "UserFillUp:GoldBroomsRelate",
|
||||
"targetHandle": "end"
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"isHovered": false
|
||||
},
|
||||
"id": "xy-edge__UserFillUp:GoldBroomsRelatestart-Agent:TangyWordsTypeend",
|
||||
"source": "UserFillUp:GoldBroomsRelate",
|
||||
"sourceHandle": "start",
|
||||
"target": "Agent:TangyWordsType",
|
||||
"targetHandle": "end"
|
||||
},
|
||||
{
|
||||
"id": "xy-edge__Agent:TangyWordsTypetool-Tool:NastyBatsGoend",
|
||||
"source": "Agent:TangyWordsType",
|
||||
"sourceHandle": "tool",
|
||||
"target": "Tool:NastyBatsGo",
|
||||
"targetHandle": "end"
|
||||
},
|
||||
{
|
||||
"id": "xy-edge__Agent:TangyWordsTypestart-Message:FreshWallsStudyend",
|
||||
"source": "Agent:TangyWordsType",
|
||||
"sourceHandle": "start",
|
||||
"target": "Message:FreshWallsStudy",
|
||||
"targetHandle": "end"
|
||||
}
|
||||
],
|
||||
"nodes": [
|
||||
{
|
||||
"data": {
|
||||
"label": "Begin",
|
||||
"name": "begin"
|
||||
},
|
||||
"dragging": false,
|
||||
"id": "begin",
|
||||
"measured": {
|
||||
"height": 50,
|
||||
"width": 200
|
||||
},
|
||||
"position": {
|
||||
"x": 154.9008789064451,
|
||||
"y": 119.51001744285344
|
||||
},
|
||||
"selected": false,
|
||||
"sourcePosition": "left",
|
||||
"targetPosition": "right",
|
||||
"type": "beginNode"
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"form": {
|
||||
"cite": true,
|
||||
"delay_after_error": 1,
|
||||
"description": "",
|
||||
"exception_default_value": "",
|
||||
"exception_goto": [],
|
||||
"exception_method": "",
|
||||
"frequencyPenaltyEnabled": false,
|
||||
"frequency_penalty": 0.7,
|
||||
"llm_id": "qwen-turbo@Tongyi-Qianwen",
|
||||
"maxTokensEnabled": false,
|
||||
"max_retries": 3,
|
||||
"max_rounds": 1,
|
||||
"max_tokens": 256,
|
||||
"mcp": [],
|
||||
"message_history_window_size": 12,
|
||||
"outputs": {
|
||||
"content": {
|
||||
"type": "string",
|
||||
"value": ""
|
||||
},
|
||||
"structured": {}
|
||||
},
|
||||
"presencePenaltyEnabled": false,
|
||||
"presence_penalty": 0.4,
|
||||
"prompts": [
|
||||
{
|
||||
"content": "User query:{sys.query}",
|
||||
"role": "user"
|
||||
}
|
||||
],
|
||||
"sys_prompt": "<role>\nYou are the Planning Agent in a multi-agent RAG workflow.\nYour sole job is to design a crisp, executable Search Plan for the next agent. Do not search or answer the user’s question.\n</role>\n<objectives>\nUnderstand the user’s task and decompose it into evidence-seeking steps.\nProduce high-quality queries and retrieval settings tailored to the task type (fact lookup, multi-hop reasoning, comparison, statistics, how-to, etc.).\nIdentify missing information that would materially change the plan (≤3 concise questions).\nOptimize for source trustworthiness, diversity, and recency; define stopping criteria to avoid over-searching.\nAnswer in 150 words.\n<objectives>",
|
||||
"temperature": 0.1,
|
||||
"temperatureEnabled": false,
|
||||
"tools": [],
|
||||
"topPEnabled": false,
|
||||
"top_p": 0.3,
|
||||
"user_prompt": "",
|
||||
"visual_files_var": ""
|
||||
},
|
||||
"label": "Agent",
|
||||
"name": "Planning Agent"
|
||||
},
|
||||
"dragging": false,
|
||||
"id": "Agent:LargeFliesMelt",
|
||||
"measured": {
|
||||
"height": 90,
|
||||
"width": 200
|
||||
},
|
||||
"position": {
|
||||
"x": 443.96309330796714,
|
||||
"y": 104.61370811205677
|
||||
},
|
||||
"selected": false,
|
||||
"sourcePosition": "right",
|
||||
"targetPosition": "left",
|
||||
"type": "agentNode"
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"form": {
|
||||
"enable_tips": true,
|
||||
"inputs": {
|
||||
"instructions": {
|
||||
"name": "instructions",
|
||||
"optional": false,
|
||||
"options": [],
|
||||
"type": "paragraph"
|
||||
}
|
||||
},
|
||||
"outputs": {
|
||||
"instructions": {
|
||||
"name": "instructions",
|
||||
"optional": false,
|
||||
"options": [],
|
||||
"type": "paragraph"
|
||||
}
|
||||
},
|
||||
"tips": "Here is my search plan:\n{Agent:LargeFliesMelt@content}\nAre you okay with it?"
|
||||
},
|
||||
"label": "UserFillUp",
|
||||
"name": "Await Response"
|
||||
},
|
||||
"dragging": false,
|
||||
"id": "UserFillUp:GoldBroomsRelate",
|
||||
"measured": {
|
||||
"height": 50,
|
||||
"width": 200
|
||||
},
|
||||
"position": {
|
||||
"x": 683.3409492927474,
|
||||
"y": 116.76274137645598
|
||||
},
|
||||
"selected": false,
|
||||
"sourcePosition": "right",
|
||||
"targetPosition": "left",
|
||||
"type": "ragNode"
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"form": {
|
||||
"cite": true,
|
||||
"delay_after_error": 1,
|
||||
"description": "",
|
||||
"exception_default_value": "",
|
||||
"exception_goto": [],
|
||||
"exception_method": "",
|
||||
"frequencyPenaltyEnabled": false,
|
||||
"frequency_penalty": 0.7,
|
||||
"llm_id": "qwen-turbo@Tongyi-Qianwen",
|
||||
"maxTokensEnabled": false,
|
||||
"max_retries": 3,
|
||||
"max_rounds": 1,
|
||||
"max_tokens": 256,
|
||||
"mcp": [],
|
||||
"message_history_window_size": 12,
|
||||
"outputs": {
|
||||
"content": {
|
||||
"type": "string",
|
||||
"value": ""
|
||||
},
|
||||
"structured": {}
|
||||
},
|
||||
"presencePenaltyEnabled": false,
|
||||
"presence_penalty": 0.4,
|
||||
"prompts": [
|
||||
{
|
||||
"content": "Search Plan: {Agent:LargeFliesMelt@content}\n\n\n\nAwait Response feedback:{UserFillUp:GoldBroomsRelate@instructions}\n",
|
||||
"role": "user"
|
||||
}
|
||||
],
|
||||
"sys_prompt": "<role>\nYou are the Search Agent.\nYour job is to execute the approved Search Plan, integrate the Await Response feedback, retrieve evidence, and produce a well-grounded answer.\n</role>\n<objectives>\nTranslate the plan + feedback into concrete searches.\nCollect diverse, trustworthy, and recent evidence meeting the plan’s evidence bar.\nSynthesize a concise answer; include citations next to claims they support.\nIf evidence is insufficient or conflicting, clearly state limitations and propose next steps.\n</objectives>\n <tools>\nRetrieval: You must use Retrieval to do the search.\n </tools>\n",
|
||||
"temperature": 0.1,
|
||||
"temperatureEnabled": false,
|
||||
"tools": [
|
||||
{
|
||||
"component_name": "Retrieval",
|
||||
"name": "Retrieval",
|
||||
"params": {
|
||||
"cross_languages": [],
|
||||
"description": "",
|
||||
"empty_response": "",
|
||||
"kb_ids": [],
|
||||
"keywords_similarity_weight": 0.7,
|
||||
"outputs": {
|
||||
"formalized_content": {
|
||||
"type": "string",
|
||||
"value": ""
|
||||
},
|
||||
"json": {
|
||||
"type": "Array<Object>",
|
||||
"value": []
|
||||
}
|
||||
},
|
||||
"rerank_id": "",
|
||||
"similarity_threshold": 0.2,
|
||||
"toc_enhance": false,
|
||||
"top_k": 1024,
|
||||
"top_n": 8,
|
||||
"use_kg": false
|
||||
}
|
||||
}
|
||||
],
|
||||
"topPEnabled": false,
|
||||
"top_p": 0.3,
|
||||
"user_prompt": "",
|
||||
"visual_files_var": ""
|
||||
},
|
||||
"label": "Agent",
|
||||
"name": "Search Agent"
|
||||
},
|
||||
"dragging": false,
|
||||
"id": "Agent:TangyWordsType",
|
||||
"measured": {
|
||||
"height": 90,
|
||||
"width": 200
|
||||
},
|
||||
"position": {
|
||||
"x": 944.6411255659472,
|
||||
"y": 99.84499066368488
|
||||
},
|
||||
"selected": true,
|
||||
"sourcePosition": "right",
|
||||
"targetPosition": "left",
|
||||
"type": "agentNode"
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"form": {
|
||||
"description": "This is an agent for a specific task.",
|
||||
"user_prompt": "This is the order you need to send to the agent."
|
||||
},
|
||||
"label": "Tool",
|
||||
"name": "flow.tool_0"
|
||||
},
|
||||
"id": "Tool:NastyBatsGo",
|
||||
"measured": {
|
||||
"height": 50,
|
||||
"width": 200
|
||||
},
|
||||
"position": {
|
||||
"x": 862.6411255659472,
|
||||
"y": 239.84499066368488
|
||||
},
|
||||
"sourcePosition": "right",
|
||||
"targetPosition": "left",
|
||||
"type": "toolNode"
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"form": {
|
||||
"content": [
|
||||
"{Agent:TangyWordsType@content}"
|
||||
]
|
||||
},
|
||||
"label": "Message",
|
||||
"name": "Message"
|
||||
},
|
||||
"dragging": false,
|
||||
"id": "Message:FreshWallsStudy",
|
||||
"measured": {
|
||||
"height": 50,
|
||||
"width": 200
|
||||
},
|
||||
"position": {
|
||||
"x": 1216.7057997987163,
|
||||
"y": 120.48541298149814
|
||||
},
|
||||
"selected": false,
|
||||
"sourcePosition": "right",
|
||||
"targetPosition": "left",
|
||||
"type": "messageNode"
|
||||
}
|
||||
]
|
||||
},
|
||||
"history": [],
|
||||
"messages": [],
|
||||
"path": [],
|
||||
"retrieval": [],
|
||||
"variables": {}
|
||||
},
|
||||
"avatar":
|
||||
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADAAAAAwCAYAAABXAvmHAAAACXBIWXMAABYlAAAWJQFJUiTwAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA1FSURBVHgBzVppcFRVFv7e672TTjrprGTrkI1tgFjiAI4TmgEGcEFgVFAHxr2mLFe0cKxyBOeHljVjwVDqzDhVTqksOlOiuCBrCwk7hCAkLAGy70kn3eklvb03571e8rrTCQGi5em6ecu9fe85557znXNuh0EUHTGb5/Ast4TnmXsB3oifBTFVxEsVy7PrZ5lM9RE9oZvTZrPeAe51Gvh85ABGMuonJh6Ra9Mzw2KDm2PXm0ymPoS6ReZ5n5n6p4f45QOsj0ihEXxwdGg9Pta3Ax2R98ErH+wIzMFjFFTlhcwkCMEKT4qE+Nc5np8e5Aa8MAf94UNtmEl5DPYFrpHPER8+xn3U3KNkXqDpKtFaSGi1Wm188MH7655+6inoExLR0toW0N211R+8jJ198YHtiFyDH5EHE8Oy7Iccx/1BeJ4yZRI2bXgHSfokyGQsLH19YDgax4RXGJz450A8v1HGMMw62soMEgRxcVqcrDyG9X95E339fSidPh05Odnot9vB0Qe84CB8wMQglSe8HWPI3HAmJVmLYTOYJ55YxVedOYcTxyux59svsfqpJ5GWkorGhhZYLL0oKSnC2pfX4DemOaIg9n473B63ZCFGOu2YU4QIgikIJiF5yaxcuZTPzzfC42WQnZWJLdu2IiXFgG+/3oOs7Ey0t3fC7/cjMyMdU6dOwpyyMswtmwO9PhmdnYE+5hrcS9Eq8MzSHRfuvT7Dj5qbiKeGY4fKsWL1ShQWFKG1uQVz55Thh5pqVJ87D6fLDY/HA7/PD87PiYxMnToRLzz7LI0zoamphQTxYewpKIzUB6OUJcIoOTEOVpjhcroRH69FQ2MLHANOZKZn4PcPP4DikgKoVWrI5CzkcjmUSiUuXriKRx77I2bMuh0tba1IStKL84TXDTU26hp9z0QzJYVVbhBuJdDL8Vy4yeRK+TrBNHp7e9HR0QWXawDNTa3o7ulB2R2zER+nQXZ2FhRKBVwOl7BloO+IVwGpBtxebPv0v0hNN+CO2beHHT4cD6I9Pvo+iobElmtYk6xkQuE6nvOjpaWNmHeDJxPp7bVCp4vHvv0HsXfvQVSeOgOtRoOFi+aJu+C0O6FVa2AwJEOhkMPhcODEiVNwuJ0w5ubRLsbB7/UPMsvH5HTYxvNBwSUAIT5yQcG4wTlk3+/5bt2cX5chIzMdbW3tOH/+EtRaNex2B1RkKkuXLkJCog41NZdw8MBhFBUVICU1WTQDwZwMKcloJJN74snHMeByoq7uCr1XECBkEzB4I3ge6p4xkp3hiOFj+rdcYFpg5vaZs3H3nXehh0znyPFjWLv2NcyeNQPbtn0hDlRr1GKc2L1rP5QqJeQyMiMZRH8QqKujDa+9+gruWf4A4siPlLQzeXlG+Dl/hAjR2B7KnURQEnyDC4kyXPoS5dN7dn7FR0umJ4d8aNUjBKEdePSxh8iMDuDixcsiCglbK5PJ4PMFUEe4F6B01zfbMeD1iTx8vPUTMi0lXn7uJXT1dIr+Eph/EFViAMqoKGBag89sZFIVuMaTplsJWZYtvwv7yQdqai5CQWYhoMyECUWYv8BE2s0RJ8gk03v/3Q24UHsJ27/YjnaKDa4BD1INBpQfrgDlWoFZRRPgcdNBW4hlLBNGM3mEdMGrmzDfSNt/4OBhdLR1Yv26tXj77U0iEqUYUvDvf34AjUpOaUclssaNQ2cXoZdzAC++/Cds3vwZLl+oxtZPt2INPVefOYH6hsbALtwkDabtg8jG8pK0OURtbR34dMtHaKhrImlZvPXW35FsSMKKFfeivOIwsnKMWLDwbrjdbtEHOB8HFfnF8YoD+Md7m7Bv3y7s2rtfXOjkqUqMFYnMR5tQsCdCQwK+t5MQ+3d/g4cfXiHCZHeXBZ98/D/R5gX0sfRZsGTZAzDNW4SG5kbk5uSSL3DIzc1GaekMkNyQ0Z+PN29FYmICxooEPkNN5Hv3zh0jhgq9PhFXrtSh+nw1Kg4dIah04Oix02JfQUEepk2bgs8//0bcocWLfotXXlqD7u5uzF1wp7hIcnIyKr7fjSZKTyDV5HXkO9EkrfjksQZI62Brnw2pqQbMz5yH5fcug4/zUtCqxIcffSRmsI2NrSKjlp5ebNnyGWG/B2++8QYW3TlfTAhZGQOej8EAw0SYbcTaGLk6kxZR8miGY4wWGRAY6+7pFu1PSLH/88G/yAcG8NcNG3H85CkxSgrBT4gPDqcDE4sn4GjKCbBCsGAki7KDO8AybIBRugqLMFFMhuPDCALJI9CBGfxidKkYfi+Mp0mFGCEkUy888wy0Wq3YJyf/sFOa0W/rR093DzLS0tDXZw3WHkPnE9li5IhzdMCu0EGupnn8/sgxLEISBJTMISKSsVKnCDEtvYY+ovLIKQVGQk1waCF56yDs7+zsotjRDlu/DR6fF4sXL0RN9QWwFD/ik1JJwdwQ7XEspSKudqSqFUhsqcWAtUccH4tC/Ilrh/hlmbB84RcjtZgTSwQMPQtayhqXiTPvvIDHJyXjtWVlUCUkQ6ZQBHZDXIvMR6akOmMALtoxTUom9K2XaPesYl9wsmsSG2aQYTDih4ndQvl89LOb0o72wlkonTYZ9y25BwffWYOexstQJ+jhJ8377b1Ic9ngam4A53GC9zqgpCrPd7YcnFwZqB4l64aVHNXYmBoMDkbMgmMUJNqqH6rEFChvW0wMWjFv0RKom37AuX3bkcnbUKCOQ3LqOFgb69Gu1GGgs5HiiwYTbp0JS8UOMBpdxLrDRXI2WruICmrRGh5VEz9kIj4PtIZMqGdQ1HZZEZ+ZhxmFuVD73WT/bhx5/8+wZk1AvHEqrAYjCdoHH9UUxRMnoefIDggFHitjR9QVK9VyQIBYzNxAHhPEeI6EkJM2vfm3QSnzwpA/FZaOeuzdvAn9GUYUlC2Gj2zfkJoPRfpkNFdW0DgWeVlGKJqr4SKQkFNmOxwHbDSTTFjxTFAuaQ9z7Y8EIUK+RdgIXq1DUsFM2Pva0dfWDGu/C8ZxBhiaTyGXbF/ltsBSfRI+dQKhmB8KbTwUcYlgG2vQ095CZaxy0KwxaOLscEyEtyWYx4ey4ZhloFQ/XLD5o5pgDl46V+q4igOU6PVyGtxy10ooEtPQfKYc+z/cBBfrRM7EKZR3dYs7x/IK6JJT4K2vRq8QTxgpfwgaaizn4MN/pFsyfAsWKRFNFlCPCNRC3k7PjVZKCpvrULZ8BV7927vobmxG7dG9OHK4HDYqW6/W1cI3wCAtdyql8c0U/Z3wUuGUWzwFF8w7oIzTiZYptQ2WCUdoVnKVoFL4lgn3D+5UYKx45YVdCupEsmNhUyRUUhrywGWUoGhCKdpbWmGpO42LZ8/BR/EgnmqNxb97FCqFH9UHd8JG6YjT1U9w7IOP6pNJhUYc3f0VtCSElBt5KDeJFEKKnbwElQLh
X0oMI70O52rBBM3nRELRL9HQeh5x8KGy3IzjVbWwWfvxwc7dsFssuHz2NBo7W1Ciywdnt4KOQ+ChkxC1SoWmc+XwmBZRYaUWq0Oe80H26KpV60TNhTQW0fiAVoNFBBPSMs9E+ED08/CNxgm2HZcCy5XjkGt16Dp/Fms3vUfiyGFtv4KK/ftQWFxEhwEMvFQwyTihWFKIJ4Ie+u6V0+XkUjzcDhvcNJ9ckkxgSN6LoNfzsYA08t3ooTYA1UrjrXD+cAC/mj+X0CYJ/cT8oe++ppQiBf2OfjE5tFD2qzekIk9GdXpGDtRUSMnpN4wByptSUhOhdLVK6gFecu4SfY3Nxo0TbX18qhHujHqkjy9G84WT6GptwZUOC2qqziM7Lx16jRI54/MJwTzgKYdiBSgWYopSQ0edJXB6/NhyslYCo+F8Q5KDYGiUHVTiTYkAn9MGfcks9NFRpnAUqc/MIgY5LF1xH9Ioe1247H5MnjQNxZNvAUMptttuQyYdlrXRaYmccqnaLhvSEjWRFdlozEDK+I0KIVZiAsRSkaSdOBfeumPobWkUS9RcOqZJjtchJbsEjt42eEhQpS5ZFBiETHVX68jnPHR040YGHW1GFjQxXED67mbq2IhpJTvJu21QjZ8BFaXRPnq2kP3rE3RouUBVHsGzlQ4PEhLi4CB/aCLo9dLhmRDFs+k3jEt0GC3sQD01Y2jCoatJb6MGXEOe0QnMUPCyI+OWBdDk/AK1x3ejraFeLJYG6DTE7XIhjiC0z9IFJ9UNOq2AOz4UZ2bD6hyokj22enU+zTETN0LMyO16MliOzEmpVsH4i1lIzy2kn0+14CjAUSYHi80uFj8cOfL4wiJkpKfT4QKHLB37HVNuNs9hWc6MkVU5IiL9GCSccMsVKjrOl5HZuKnWtqKj4TI6as+AcffTkaUKfl6WL9rEoQP7NpAenhvNxEOO92LGjx+BGCGfUkKh0ZIv21Bz9vTGJ59+8XlRALPZrFexvLAL0zEGNNLJ80h910FVmoQkU2lpaeBfDYT/OXBzjIm0uhFjQMwN9o2KOH6jhngVmI853xGz2QgZ1lH2MI3yIXFHeP4nNP7YVE9cfMlw7BezTKbvpR3/Bx465XnKBextAAAAAElFTkSuQmCC"
|
||||
}
|
||||
|
|
@ -358,7 +358,7 @@ def list_app():
|
|||
for o in objs:
|
||||
if o.llm_name + "@" + o.llm_factory in llm_set:
|
||||
continue
|
||||
llms.append({"llm_name": o.llm_name, "model_type": o.model_type, "fid": o.llm_factory, "available": True})
|
||||
llms.append({"llm_name": o.llm_name, "model_type": o.model_type, "fid": o.llm_factory, "available": True, "status": StatusEnum.VALID.value})
|
||||
|
||||
res = {}
|
||||
for m in llms:
|
||||
|
|
|
|||
|
|
@ -89,13 +89,7 @@ def init_superuser():
|
|||
|
||||
|
||||
def init_llm_factory():
|
||||
try:
|
||||
LLMService.filter_delete([(LLM.fid == "MiniMax" or LLM.fid == "Minimax")])
|
||||
LLMService.filter_delete([(LLM.fid == "cohere")])
|
||||
LLMFactoriesService.filter_delete([LLMFactories.name == "cohere"])
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
LLMFactoriesService.filter_delete([1 == 1])
|
||||
factory_llm_infos = settings.FACTORY_LLM_INFOS
|
||||
for factory_llm_info in factory_llm_infos:
|
||||
info = deepcopy(factory_llm_info)
|
||||
|
|
|
|||
|
|
@ -236,13 +236,13 @@ class Connector2KbService(CommonService):
|
|||
conn_id = conn["id"]
|
||||
connector_ids.append(conn_id)
|
||||
if conn_id in old_conn_ids:
|
||||
cls.update_by_id(conn_id, {"auto_parse": conn.get("auto_parse", "1")})
|
||||
cls.filter_update([cls.model.connector_id==conn_id, cls.model.kb_id==kb_id], {"auto_parse": conn.get("auto_parse", "1")})
|
||||
continue
|
||||
cls.save(**{
|
||||
"id": get_uuid(),
|
||||
"connector_id": conn_id,
|
||||
"kb_id": kb_id,
|
||||
"auto_parse": conn.get("auto_parse", "1")
|
||||
"auto_parse": conn.get("auto_parse", "1")
|
||||
})
|
||||
SyncLogsService.schedule(conn_id, kb_id, reindex=True)
|
||||
|
||||
|
|
|
|||
|
|
@ -846,7 +846,7 @@ def queue_raptor_o_graphrag_tasks(sample_doc_id, ty, priority, fake_doc_id="", d
|
|||
"to_page": 100000000,
|
||||
"task_type": ty,
|
||||
"progress_msg": datetime.now().strftime("%H:%M:%S") + " created task " + ty,
|
||||
"begin_at": datetime.now(),
|
||||
"begin_at": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
|
||||
}
|
||||
|
||||
task = new_task()
|
||||
|
|
|
|||
|
|
@ -170,6 +170,10 @@ CONFLUENCE_TIMEZONE_OFFSET = float(
|
|||
os.environ.get("CONFLUENCE_TIMEZONE_OFFSET", get_current_tz_offset())
|
||||
)
|
||||
|
||||
CONFLUENCE_SYNC_TIME_BUFFER_SECONDS = int(
|
||||
os.environ.get("CONFLUENCE_SYNC_TIME_BUFFER_SECONDS", ONE_DAY)
|
||||
)
|
||||
|
||||
GOOGLE_DRIVE_CONNECTOR_SIZE_THRESHOLD = int(
|
||||
os.environ.get("GOOGLE_DRIVE_CONNECTOR_SIZE_THRESHOLD", 10 * 1024 * 1024)
|
||||
)
|
||||
|
|
|
|||
|
|
@ -20,6 +20,7 @@ from requests.exceptions import HTTPError
|
|||
|
||||
from common.data_source.config import INDEX_BATCH_SIZE, DocumentSource, CONTINUE_ON_CONNECTOR_FAILURE, \
|
||||
CONFLUENCE_CONNECTOR_LABELS_TO_SKIP, CONFLUENCE_TIMEZONE_OFFSET, CONFLUENCE_CONNECTOR_USER_PROFILES_OVERRIDE, \
|
||||
CONFLUENCE_SYNC_TIME_BUFFER_SECONDS, \
|
||||
OAUTH_CONFLUENCE_CLOUD_CLIENT_ID, OAUTH_CONFLUENCE_CLOUD_CLIENT_SECRET, _DEFAULT_PAGINATION_LIMIT, \
|
||||
_PROBLEMATIC_EXPANSIONS, _REPLACEMENT_EXPANSIONS, _USER_NOT_FOUND, _COMMENT_EXPANSION_FIELDS, \
|
||||
_ATTACHMENT_EXPANSION_FIELDS, _PAGE_EXPANSION_FIELDS, ONE_DAY, ONE_HOUR, _RESTRICTIONS_EXPANSION_FIELDS, \
|
||||
|
|
@ -1289,6 +1290,7 @@ class ConfluenceConnector(
|
|||
# pages.
|
||||
labels_to_skip: list[str] = CONFLUENCE_CONNECTOR_LABELS_TO_SKIP,
|
||||
timezone_offset: float = CONFLUENCE_TIMEZONE_OFFSET,
|
||||
time_buffer_seconds: int = CONFLUENCE_SYNC_TIME_BUFFER_SECONDS,
|
||||
scoped_token: bool = False,
|
||||
) -> None:
|
||||
self.wiki_base = wiki_base
|
||||
|
|
@ -1300,6 +1302,7 @@ class ConfluenceConnector(
|
|||
self.batch_size = batch_size
|
||||
self.labels_to_skip = labels_to_skip
|
||||
self.timezone_offset = timezone_offset
|
||||
self.time_buffer_seconds = max(0, time_buffer_seconds)
|
||||
self.scoped_token = scoped_token
|
||||
self._confluence_client: OnyxConfluence | None = None
|
||||
self._low_timeout_confluence_client: OnyxConfluence | None = None
|
||||
|
|
@ -1356,6 +1359,24 @@ class ConfluenceConnector(
|
|||
logging.info(f"Setting allow_images to {value}.")
|
||||
self.allow_images = value
|
||||
|
||||
def _adjust_start_for_query(
|
||||
self, start: SecondsSinceUnixEpoch | None
|
||||
) -> SecondsSinceUnixEpoch | None:
|
||||
if not start or start <= 0:
|
||||
return start
|
||||
if self.time_buffer_seconds <= 0:
|
||||
return start
|
||||
return max(0.0, start - self.time_buffer_seconds)
|
||||
|
||||
def _is_newer_than_start(
|
||||
self, doc_time: datetime | None, start: SecondsSinceUnixEpoch | None
|
||||
) -> bool:
|
||||
if not start or start <= 0:
|
||||
return True
|
||||
if doc_time is None:
|
||||
return True
|
||||
return doc_time.timestamp() > start
|
||||
|
||||
@property
|
||||
def confluence_client(self) -> OnyxConfluence:
|
||||
if self._confluence_client is None:
|
||||
|
|
@ -1414,9 +1435,10 @@ class ConfluenceConnector(
|
|||
"""
|
||||
page_query = self.base_cql_page_query + self.cql_label_filter
|
||||
# Add time filters
|
||||
if start:
|
||||
query_start = self._adjust_start_for_query(start)
|
||||
if query_start:
|
||||
formatted_start_time = datetime.fromtimestamp(
|
||||
start, tz=self.timezone
|
||||
query_start, tz=self.timezone
|
||||
).strftime("%Y-%m-%d %H:%M")
|
||||
page_query += f" and lastmodified >= '{formatted_start_time}'"
|
||||
if end:
|
||||
|
|
@ -1436,10 +1458,12 @@ class ConfluenceConnector(
|
|||
) -> str:
|
||||
attachment_query = f"type=attachment and container='{confluence_page_id}'"
|
||||
attachment_query += self.cql_label_filter
|
||||
|
||||
# Add time filters to avoid reprocessing unchanged attachments during refresh
|
||||
if start:
|
||||
query_start = self._adjust_start_for_query(start)
|
||||
if query_start:
|
||||
formatted_start_time = datetime.fromtimestamp(
|
||||
start, tz=self.timezone
|
||||
query_start, tz=self.timezone
|
||||
).strftime("%Y-%m-%d %H:%M")
|
||||
attachment_query += f" and lastmodified >= '{formatted_start_time}'"
|
||||
if end:
|
||||
|
|
@ -1447,6 +1471,7 @@ class ConfluenceConnector(
|
|||
"%Y-%m-%d %H:%M"
|
||||
)
|
||||
attachment_query += f" and lastmodified <= '{formatted_end_time}'"
|
||||
|
||||
attachment_query += " order by lastmodified asc"
|
||||
return attachment_query
|
||||
|
||||
|
|
@ -1668,7 +1693,8 @@ class ConfluenceConnector(
|
|||
),
|
||||
primary_owners=primary_owners,
|
||||
)
|
||||
attachment_docs.append(attachment_doc)
|
||||
if self._is_newer_than_start(attachment_doc.doc_updated_at, start):
|
||||
attachment_docs.append(attachment_doc)
|
||||
except Exception as e:
|
||||
logging.error(
|
||||
f"Failed to extract/summarize attachment {attachment['title']}",
|
||||
|
|
@ -1729,7 +1755,8 @@ class ConfluenceConnector(
|
|||
continue
|
||||
|
||||
# yield completed document (or failure)
|
||||
yield doc_or_failure
|
||||
if self._is_newer_than_start(doc_or_failure.doc_updated_at, start):
|
||||
yield doc_or_failure
|
||||
|
||||
# Now get attachments for that page:
|
||||
attachment_docs, attachment_failures = self._fetch_page_attachments(
|
||||
|
|
|
|||
|
|
@ -5,7 +5,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING,TTS,TEXT RE-RANK,SPEECH2TEXT,MODERATION",
|
||||
"status": "1",
|
||||
"rank": "99",
|
||||
"rank": "999",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "gpt-5",
|
||||
|
|
@ -175,7 +175,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM",
|
||||
"status": "1",
|
||||
"rank": "92",
|
||||
"rank": "930",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "grok-4",
|
||||
|
|
@ -332,7 +332,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING,TEXT RE-RANK,TTS,SPEECH2TEXT,MODERATION",
|
||||
"status": "1",
|
||||
"rank": "94",
|
||||
"rank": "950",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "Moonshot-Kimi-K2-Instruct",
|
||||
|
|
@ -717,7 +717,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING,SPEECH2TEXT,MODERATION",
|
||||
"status": "1",
|
||||
"rank": "93",
|
||||
"rank": "940",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "glm-4.5",
|
||||
|
|
@ -863,7 +863,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING,SPEECH2TEXT,MODERATION",
|
||||
"status": "1",
|
||||
"rank": "84",
|
||||
"rank": "830",
|
||||
"llm": []
|
||||
},
|
||||
{
|
||||
|
|
@ -885,7 +885,8 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING,SPEECH2TEXT,MODERATION",
|
||||
"status": "1",
|
||||
"llm": []
|
||||
"llm": [],
|
||||
"rank": "890"
|
||||
},
|
||||
{
|
||||
"name": "VLLM",
|
||||
|
|
@ -899,7 +900,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING,IMAGE2TEXT",
|
||||
"status": "1",
|
||||
"rank": "95",
|
||||
"rank": "960",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "kimi-thinking-preview",
|
||||
|
|
@ -1020,7 +1021,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM",
|
||||
"status": "1",
|
||||
"rank": "96",
|
||||
"rank": "970",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "deepseek-chat",
|
||||
|
|
@ -1199,7 +1200,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING",
|
||||
"status": "1",
|
||||
"rank": "82",
|
||||
"rank": "810",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "abab6.5-chat",
|
||||
|
|
@ -1239,7 +1240,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING,MODERATION",
|
||||
"status": "1",
|
||||
"rank": "90",
|
||||
"rank": "910",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "codestral-latest",
|
||||
|
|
@ -1333,7 +1334,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING,SPEECH2TEXT,MODERATION",
|
||||
"status": "1",
|
||||
"rank": "85",
|
||||
"rank": "850",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "gpt-4o-mini",
|
||||
|
|
@ -1418,7 +1419,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING",
|
||||
"status": "1",
|
||||
"rank": "86",
|
||||
"rank": "860",
|
||||
"llm": []
|
||||
},
|
||||
{
|
||||
|
|
@ -1426,7 +1427,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING,IMAGE2TEXT",
|
||||
"status": "1",
|
||||
"rank": "97",
|
||||
"rank": "980",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "gemini-2.5-flash",
|
||||
|
|
@ -1482,7 +1483,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM",
|
||||
"status": "1",
|
||||
"rank": "81",
|
||||
"rank": "800",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "gemma2-9b-it",
|
||||
|
|
@ -1542,7 +1543,8 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,IMAGE2TEXT",
|
||||
"status": "1",
|
||||
"llm": []
|
||||
"llm": [],
|
||||
"rank": "840"
|
||||
},
|
||||
{
|
||||
"name": "StepFun",
|
||||
|
|
@ -1592,7 +1594,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING, TEXT RE-RANK",
|
||||
"status": "1",
|
||||
"rank": "80",
|
||||
"rank": "790",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "01-ai/yi-large",
|
||||
|
|
@ -2347,7 +2349,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING, TEXT RE-RANK",
|
||||
"status": "1",
|
||||
"rank": "89",
|
||||
"rank": "900",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "command-r-plus",
|
||||
|
|
@ -2626,7 +2628,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING,TEXT RE-RANK,IMAGE2TEXT",
|
||||
"status": "1",
|
||||
"rank": "79",
|
||||
"rank": "780",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "THUDM/GLM-4.1V-9B-Thinking",
|
||||
|
|
@ -3177,7 +3179,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM,TTS",
|
||||
"status": "1",
|
||||
"rank": "83",
|
||||
"rank": "820",
|
||||
"llm": []
|
||||
},
|
||||
{
|
||||
|
|
@ -3185,7 +3187,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM",
|
||||
"status": "1",
|
||||
"rank": "88",
|
||||
"rank": "880",
|
||||
"llm": []
|
||||
},
|
||||
{
|
||||
|
|
@ -3207,7 +3209,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM",
|
||||
"status": "1",
|
||||
"rank": "98",
|
||||
"rank": "990",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "claude-opus-4-1-20250805",
|
||||
|
|
@ -3809,7 +3811,7 @@
|
|||
"logo": "",
|
||||
"tags": "TEXT EMBEDDING,TEXT RE-RANK",
|
||||
"status": "1",
|
||||
"rank": "91",
|
||||
"rank": "920",
|
||||
"llm": []
|
||||
},
|
||||
{
|
||||
|
|
@ -4553,7 +4555,7 @@
|
|||
"logo": "",
|
||||
"tags": "LLM",
|
||||
"status": "1",
|
||||
"rank": "87",
|
||||
"rank": "870",
|
||||
"llm": [
|
||||
{
|
||||
"llm_name": "LongCat-Flash-Chat",
|
||||
|
|
|
|||
|
|
@ -217,3 +217,6 @@ REGISTER_ENABLED=1
|
|||
# Enable Docling and MinerU
|
||||
USE_DOCLING=false
|
||||
USE_MINERU=false
|
||||
|
||||
# pptx support
|
||||
DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
|
||||
|
|
@ -97,14 +97,7 @@ RAGFlow utilizes MinIO as its object storage solution, leveraging its scalabilit
|
|||
- `SVR_HTTP_PORT`
|
||||
The port used to expose RAGFlow's HTTP API service to the host machine, allowing **external** access to the service running inside the Docker container. Defaults to `9380`.
|
||||
- `RAGFLOW_IMAGE`
|
||||
The Docker image edition. Available editions:
|
||||
|
||||
- `infiniflow/ragflow:v0.21.1-slim` (default): The RAGFlow Docker image without embedding models.
|
||||
- `infiniflow/ragflow:v0.21.1`: The RAGFlow Docker image with embedding models including:
|
||||
- Built-in embedding models:
|
||||
- `BAAI/bge-large-zh-v1.5`
|
||||
- `maidalun1020/bce-embedding-base_v1`
|
||||
|
||||
The Docker image edition. Defaults to `infiniflow/ragflow:v0.21.1` (the RAGFlow Docker image without embedding models).
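For illustration, pinning a specific edition is a one-line change in **docker/.env** (the tag below is an example; pick one from the releases page):

```bash
# docker/.env
RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1
```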
|
||||
|
||||
:::tip NOTE
|
||||
If you cannot download the RAGFlow Docker image, try the following mirrors.
|
||||
|
|
|
|||
|
|
@ -24,14 +24,6 @@ A guide explaining how to build a RAGFlow Docker image from its source code. By
|
|||
|
||||
## Build a Docker image
|
||||
|
||||
<Tabs
|
||||
defaultValue="without"
|
||||
values={[
|
||||
{label: 'Build a Docker image without embedding models', value: 'without'},
|
||||
{label: 'Build a Docker image including embedding models', value: 'including'}
|
||||
]}>
|
||||
<TabItem value="without">
|
||||
|
||||
This image is approximately 2 GB in size and relies on external LLM and embedding services.
|
||||
|
||||
:::danger IMPORTANT
|
||||
|
|
@ -47,10 +39,6 @@ docker build -f Dockerfile.deps -t infiniflow/ragflow_deps .
|
|||
docker build -f Dockerfile -t infiniflow/ragflow:nightly .
|
||||
```
|
||||
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Launch a RAGFlow Service from Docker for MacOS
|
||||
|
||||
After building the infiniflow/ragflow:nightly image, you are ready to launch a fully-functional RAGFlow service with all the required components, such as Elasticsearch, MySQL, MinIO, Redis, and more.
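As a quick sketch, the launch itself uses the same compose command shown elsewhere in this guide, run from the repository's `docker` directory with the default CPU setup:

```bash
cd ragflow/docker
# Make sure RAGFLOW_IMAGE in .env points at the image you just built, e.g. infiniflow/ragflow:nightly
docker compose -f docker-compose.yml up -d
```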
|
||||
|
|
|
|||
|
|
@ -42,11 +42,7 @@ cd ragflow/
|
|||
```
|
||||
|
||||
2. Install Python dependencies:
|
||||
- slim:
|
||||
```bash
|
||||
uv sync --python 3.10 # install RAGFlow dependent python modules
|
||||
```
|
||||
- full:
|
||||
|
||||
```bash
|
||||
uv sync --python 3.10 # install RAGFlow dependent python modules
|
||||
```
|
||||
|
|
|
|||
43
docs/faq.mdx
|
|
@ -26,27 +26,9 @@ The "garbage in garbage out" status quo remains unchanged despite the fact that
|
|||
|
||||
---
|
||||
|
||||
### Differences between RAGFlow full edition and RAGFlow slim edition?
|
||||
|
||||
Each RAGFlow release is available in two editions:
|
||||
|
||||
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.21.1-slim`
|
||||
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.21.1`
|
||||
|
||||
Note: Starting with `v0.22.0`, we ship only the slim edition and no longer append the **-slim** suffix to the image tag.
|
||||
|
||||
---
|
||||
|
||||
### Which embedding models can be deployed locally?
|
||||
|
||||
RAGFlow offers two Docker image editions, `v0.21.1-slim` and `v0.21.1`:
|
||||
|
||||
- `infiniflow/ragflow:v0.21.1-slim` (default): The RAGFlow Docker image without embedding models.
|
||||
- `infiniflow/ragflow:v0.21.1`: The RAGFlow Docker image with the following built-in embedding models:
|
||||
- `BAAI/bge-large-zh-v1.5`
|
||||
- `maidalun1020/bce-embedding-base_v1`
|
||||
|
||||
Note: Starting with `v0.22.0`, we ship only the slim edition and no longer append the **-slim** suffix to the image tag.
|
||||
Starting from `v0.22.0`, we ship only the slim edition and no longer append the **-slim** suffix to the image tag.
|
||||
|
||||
---
|
||||
|
||||
|
|
@ -65,7 +47,7 @@ If you build RAGFlow from source, the version number is also in the system log:
|
|||
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
|
||||
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
|
||||
|
||||
2025-02-18 10:10:43,835 INFO 1445658 RAGFlow version: v0.15.0-50-g6daae7f2 full
|
||||
2025-02-18 10:10:43,835 INFO 1445658 RAGFlow version: v0.15.0-50-g6daae7f2
|
||||
```
|
||||
|
||||
Where:
|
||||
|
|
@ -73,9 +55,6 @@ Where:
|
|||
- `v0.15.0`: The officially published release.
|
||||
- `50`: The number of git commits since the official release.
|
||||
- `g6daae7f2`: `g` is the prefix, and `6daae7f2` is the first seven characters of the current commit ID.
|
||||
- `full`/`slim`: The RAGFlow edition.
|
||||
- `full`: The full RAGFlow edition.
|
||||
- `slim`: The RAGFlow edition without embedding models and Python packages.
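For reference, this version string follows the `git describe` convention: latest tag, number of commits since that tag, and the abbreviated commit hash. A minimal sketch of how such a string is typically produced, assuming you are inside the git checkout:

```bash
# Prints something like v0.15.0-50-g6daae7f2
git describe --tags
```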
|
||||
|
||||
---
|
||||
|
||||
|
|
@ -514,11 +493,11 @@ See [here](./guides/agent/best_practices/accelerate_agent_question_answering.md)
|
|||
|
||||
### How to use MinerU to parse PDF documents?
|
||||
|
||||
MinerU PDF document parsing is available starting from v0.21.1. RAGFlow supports MinerU (>= 2.6.3) as an optional PDF parser with multiple backends. RAGFlow itself only acts as a client: it calls MinerU to parse documents, reads the output files, and ingests the parsed content into RAGFlow. To use this feature, follow these steps:
|
||||
MinerU PDF document parsing is available starting from v0.22.0. RAGFlow supports MinerU (>= 2.6.3) as an optional PDF parser with multiple backends. RAGFlow acts only as a client for MinerU, calling it to parse documents, reading the output files, and ingesting the parsed content. To use this feature, follow these steps:
|
||||
|
||||
1. **Prepare MinerU**
|
||||
1. Prepare MinerU
|
||||
|
||||
- **If you run RAGFlow from source**, install MinerU into an isolated virtual environment (recommended path: `$HOME/uv_tools`):
|
||||
- **If you deploy RAGFlow from source**, install MinerU into an isolated virtual environment (recommended path: `$HOME/uv_tools`):
|
||||
|
||||
```bash
|
||||
mkdir -p "$HOME/uv_tools"
|
||||
|
|
@ -530,7 +509,7 @@ MinerU PDF document parsing is available starting from v0.21.1. RAGFlow supports
|
|||
# uv pip install -U "mineru[all]" -i https://mirrors.aliyun.com/pypi/simple
|
||||
```
|
||||
|
||||
- **If you run RAGFlow with Docker**, you usually only need to turn on MinerU support in `docker/.env`:
|
||||
- **If you deploy RAGFlow with Docker**, you usually only need to turn on MinerU support in `docker/.env`:
|
||||
|
||||
```bash
|
||||
# docker/.env
|
||||
|
|
@ -541,7 +520,7 @@ MinerU PDF document parsing is available starting from v0.21.1. RAGFlow supports
|
|||
|
||||
Enabling `USE_MINERU=true` will internally perform the same setup as the manual configuration (including setting the MinerU executable path and related environment variables). You only need the manual installation above if you are running from source or want full control over the MinerU installation.
|
||||
|
||||
2. **Start RAGFlow with MinerU enabled**
|
||||
2. Start RAGFlow with MinerU enabled:
|
||||
|
||||
- **Source deployment** – in the RAGFlow repo, export the key MinerU-related variables and start the backend service:
|
||||
|
||||
|
|
@ -570,7 +549,7 @@ MinerU PDF document parsing is available starting from v0.21.1. RAGFlow supports
|
|||
|
||||
### How to configure MinerU-specific settings?
|
||||
|
||||
The table below summarizes the most commonly used MinerU-related environment variables:
|
||||
The table below summarizes the most frequently used MinerU environment variables:
|
||||
|
||||
| Environment variable | Description | Default | Example |
|
||||
| ---------------------- | ---------------------------------- | ----------------------------------- | ----------------------------------------------------------------------------------------------- |
|
||||
|
|
@ -583,14 +562,14 @@ The table below summarizes the most commonly used MinerU-related environment var
|
|||
|
||||
1. Set `MINERU_EXECUTABLE` to the path to the MinerU executable if the default `mineru` is not on `PATH`.
|
||||
2. Set `MINERU_DELETE_OUTPUT` to `0` to keep MinerU's output. (Default: `1`, which deletes temporary output.)
|
||||
3. Set `MINERU_OUTPUT_DIR` to specify the output directory for MinerU (otherwise a system temp directory is used).
|
||||
3. Set `MINERU_OUTPUT_DIR` to specify the output directory for MinerU; otherwise, a system temp directory is used.
|
||||
4. Set `MINERU_BACKEND` to specify a parsing backend:
|
||||
- `"pipeline"` (default): The traditional multimodel pipeline.
|
||||
- `"vlm-transformers"`: A vision-language model using HuggingFace Transformers.
|
||||
- `"vlm-vllm-engine"`: A vision-language model using a local vLLM engine (requires a local GPU).
|
||||
- `"vlm-http-client"`: A vision-language model via HTTP client to a remote vLLM server (RAGFlow only requires CPU).
|
||||
5. If using the `"vlm-http-client"` backend, you must also set `MINERU_SERVER_URL` to the URL of your vLLM server.
|
||||
6. If you want RAGFlow to call a **remote MinerU service** (instead of a MinerU process running locally with RAGFlow), set `MINERU_APISERVER` to the URL of the remote MinerU server.
|
||||
5. If using the `"vlm-http-client"` backend, you must also set `MINERU_SERVER_URL` to your vLLM server's URL.
|
||||
6. If configuring RAGFlow to call a *remote* MinerU service, set `MINERU_APISERVER` to the MinerU server's URL.
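As an illustration, a source deployment that keeps MinerU's output and parses through a remote vLLM server might export the following before starting RAGFlow (the paths and URL are placeholders, not defaults):

```bash
export MINERU_EXECUTABLE="$HOME/uv_tools/.venv/bin/mineru"
export MINERU_DELETE_OUTPUT=0                      # keep MinerU's output for inspection
export MINERU_OUTPUT_DIR="$HOME/mineru_output"     # otherwise a system temp directory is used
export MINERU_BACKEND=vlm-http-client
export MINERU_SERVER_URL=http://127.0.0.1:30000    # your vLLM server
```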
|
||||
|
||||
:::tip NOTE
|
||||
For information about other environment variables natively supported by MinerU, see [here](https://opendatalab.github.io/MinerU/usage/cli_tools/#environment-variables-description).
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
sidebar_position: 6
|
||||
slug: /using_admin_ui
|
||||
sidebar_position: 7
|
||||
slug: /accessing_admin_ui
|
||||
---
|
||||
|
||||
# Admin UI
|
||||
|
|
@ -10,47 +10,7 @@ The RAGFlow Admin UI is a web-based interface that provides comprehensive system
|
|||
|
||||
## Accessing the Admin UI
|
||||
|
||||
### Launching from source code
|
||||
|
||||
1. Start the RAGFlow front-end (if not already running):
|
||||
|
||||
```bash
|
||||
cd web
|
||||
npm run dev
|
||||
```
|
||||
|
||||
Typically, the front-end server is running on port `9222`. The following output confirms a successful launch of the RAGFlow UI:
|
||||
|
||||
```bash
|
||||
╔════════════════════════════════════════════════════╗
|
||||
║ App listening at: ║
|
||||
║ > Local: http://localhost:9222 ║
|
||||
ready - ║ > Network: http://192.168.1.92:9222 ║
|
||||
║ ║
|
||||
║ Now you can open browser with the above addresses↑ ║
|
||||
╚════════════════════════════════════════════════════╝
|
||||
```
|
||||
|
||||
|
||||
2. Login to RAGFlow Admin UI
|
||||
|
||||
Open your browser and navigate to:
|
||||
|
||||
```
|
||||
http://localhost:9222/admin
|
||||
```
|
||||
|
||||
Or if accessing from a remote machine:
|
||||
|
||||
```
|
||||
http://[YOUR_MACHINE_IP]:9222/admin
|
||||
```
|
||||
|
||||
> Replace `[YOUR_MACHINE_IP]` with your actual machine IP address (e.g., `http://192.168.1.49:9222/admin`).
|
||||
|
||||
Then, you will be presented with a login page where you need to enter your admin user email address and password.
|
||||
|
||||
3. After a successful login, you will be redirected to the **Service Status** page, which is the default landing page for the Admin UI.
|
||||
To access the RAGFlow Admin UI, append `/admin` to the web UI's address, e.g. `http://[RAGFLOW_WEB_UI_ADDR]/admin`, where `[RAGFLOW_WEB_UI_ADDR]` is your actual RAGFlow web UI address.
|
||||
|
||||
|
||||
## Admin UI Overview
|
||||
|
|
@ -59,7 +19,7 @@ The RAGFlow Admin UI is a web-based interface that provides comprehensive system
|
|||
|
||||
The service status page displays the status of all services within the RAGFlow system.
|
||||
|
||||
- **Service List**: View all services in a table format.
|
||||
- **Service List**: View all services in a table.
|
||||
- **Filtering**: Use the filter button to filter services by **Service Type**.
|
||||
- **Search**: Use the search bar to quickly find services by **Name** or **Service Type**.
|
||||
- **Actions** (hover over a row to see action buttons):
|
||||
|
|
@ -9,9 +9,132 @@ A component that sets the parsing rules for your dataset.
|
|||
|
||||
---
|
||||
|
||||
A **Parser** component defines how various file types should be parsed, including parsing methods for PDFs, fields to parse for emails, and OCR methods for images.
|
||||
A **Parser** component is auto-populated on the ingestion pipeline canvas and required in all ingestion pipeline workflows. Just like the **Extract** stage in the traditional ETL process, a **Parser** component in an ingestion pipeline defines how various file types are parsed into structured data. Click the component to display its configuration panel. In this configuration panel, you set the parsing rules for various file types.
|
||||
|
||||
## Configurations
|
||||
|
||||
## Scenario
|
||||
Within the configuration panel, you can add multiple parsers and set the corresponding parsing rules, or remove unwanted parsers. Please ensure your set of parsers covers all required file types; otherwise, an error will occur when you select this ingestion pipeline on your dataset's **Files** page.
|
||||
|
||||
A **Parser** component is auto-populated on the ingestion pipeline canvas and required in all ingestion pipeline workflows.
|
||||
The **Parser** component supports parsing the following file types:
|
||||
|
||||
| File type | File format |
|
||||
| ------------- | ------------------------ |
|
||||
| PDF | PDF |
|
||||
| Spreadsheet | XLSX, XLS, CSV |
|
||||
| Image | PNG, JPG, JPEG, GIF, TIF |
|
||||
| Email | EML |
|
||||
| Text & Markup | TXT, MD, MDX, HTML, JSON |
|
||||
| Word | DOCX |
|
||||
| PowerPoint | PPTX, PPT |
|
||||
| Audio | MP3, WAV |
|
||||
| Video | MP4, AVI, MKV |
|
||||
|
||||
### PDF parser
|
||||
|
||||
The output of a PDF parser is `json`. In the PDF parser, you select the parsing method that works best with your PDFs.
|
||||
|
||||
- DeepDoc: (Default) The default visual model; it performs OCR, TSR, and DLR tasks on complex PDFs but can be time-consuming.
|
||||
- Naive: Skip OCR, TSR, and DLR tasks if *all* your PDFs are plain text.
|
||||
- [MinerU](https://github.com/opendatalab/MinerU): (Experimental) An open-source tool that converts PDF into machine-readable formats.
|
||||
- [Docling](https://github.com/docling-project/docling): (Experimental) An open-source document processing tool for gen AI.
|
||||
- A third-party visual model from a specific model provider.
|
||||
|
||||
:::danger IMPORTANT
|
||||
MinerU PDF document parsing is available starting from v0.22.0. RAGFlow supports MinerU (>= 2.6.3) as an optional PDF parser with multiple backends. RAGFlow acts only as a client for MinerU, calling it to parse documents, reading the output files, and ingesting the parsed content. To use this feature, follow these steps:
|
||||
|
||||
1. Prepare MinerU:
|
||||
|
||||
- **If you deploy RAGFlow from source**, install MinerU into an isolated virtual environment (recommended path: `$HOME/uv_tools`):
|
||||
|
||||
```bash
|
||||
mkdir -p "$HOME/uv_tools"
|
||||
cd "$HOME/uv_tools"
|
||||
uv venv .venv
|
||||
source .venv/bin/activate
|
||||
uv pip install -U "mineru[core]" -i https://mirrors.aliyun.com/pypi/simple
|
||||
# or
|
||||
# uv pip install -U "mineru[all]" -i https://mirrors.aliyun.com/pypi/simple
|
||||
```
|
||||
|
||||
- **If you deploy RAGFlow with Docker**, you usually only need to turn on MinerU support in `docker/.env`:
|
||||
|
||||
```bash
|
||||
# docker/.env
|
||||
...
|
||||
USE_MINERU=true
|
||||
...
|
||||
```
|
||||
|
||||
Enabling `USE_MINERU=true` will internally perform the same setup as the manual configuration (including setting the MinerU executable path and related environment variables). You only need the manual installation above if you are running from source or want full control over the MinerU installation.
|
||||
|
||||
2. Start RAGFlow with MinerU enabled:
|
||||
|
||||
- **Source deployment** – in the RAGFlow repo, export the key MinerU-related variables and start the backend service:
|
||||
|
||||
```bash
|
||||
# in RAGFlow repo
|
||||
export MINERU_EXECUTABLE="$HOME/uv_tools/.venv/bin/mineru"
|
||||
export MINERU_DELETE_OUTPUT=0 # keep output directory
|
||||
export MINERU_BACKEND=pipeline # or another backend you prefer
|
||||
|
||||
source .venv/bin/activate
|
||||
export PYTHONPATH=$(pwd)
|
||||
bash docker/launch_backend_service.sh
|
||||
```
|
||||
|
||||
- **Docker deployment** – after setting `USE_MINERU=true`, restart the containers so that the new settings take effect:
|
||||
|
||||
```bash
|
||||
# in RAGFlow repo
|
||||
docker compose -f docker/docker-compose.yml restart
|
||||
```
|
||||
|
||||
3. Restart the ragflow-server.
|
||||
:::
|
||||
|
||||
:::caution WARNING
|
||||
Third-party visual models are marked **Experimental**, because we have not fully tested these models for the aforementioned data extraction tasks.
|
||||
:::
|
||||
|
||||
### Spreadsheet parser
|
||||
|
||||
A spreadsheet parser outputs `html`, preserving the original layout and table structure. You may remove this parser if your dataset contains no spreadsheets.
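As an illustration only (not the component's internal implementation), converting a spreadsheet into an HTML table that keeps rows and columns intact can be sketched with pandas, assuming a hypothetical `sales.xlsx`:

```python
import pandas as pd

# Hypothetical input file; rows and columns survive the conversion to HTML.
df = pd.read_excel("sales.xlsx", sheet_name=0)
html_table = df.to_html(index=False, na_rep="")
print(html_table[:500])
```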
|
||||
|
||||
### Image parser
|
||||
|
||||
An Image parser uses a native OCR model for text extraction by default. You may select an alternative VLM, provided that you have properly configured it on the **Model provider** page.
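For a rough idea of what OCR-based text extraction does (this is generic pytesseract usage, not RAGFlow's native OCR model), assuming a hypothetical `scan.png`:

```python
from PIL import Image
import pytesseract  # third-party package; also requires the Tesseract binary to be installed

text = pytesseract.image_to_string(Image.open("scan.png"))  # hypothetical image file
print(text[:300])
```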
|
||||
|
||||
### Email parser
|
||||
|
||||
With the Email parser, you select the fields to parse from Emails, such as **subject** and **body**. The parser will then extract text from these specified fields.
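For a sense of what field-based extraction involves, here is a minimal standard-library sketch (not RAGFlow's code) that pulls the **subject** and plain-text **body** from a hypothetical `message.eml`:

```python
from email import policy
from email.parser import BytesParser

# Hypothetical EML file; only the selected fields are read.
with open("message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

subject = msg["Subject"]
body_part = msg.get_body(preferencelist=("plain",))
body = body_part.get_content() if body_part else ""
print(subject)
print(body[:200])
```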
|
||||
|
||||
### Text&Markup parser
|
||||
|
||||
A Text&Markup parser automatically removes all formatting tags (e.g., those from HTML and Markdown files) to output clean, plain text only.
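As a rough illustration of tag stripping (again, not the component's actual implementation), HTML formatting can be removed with BeautifulSoup, assuming a hypothetical `page.html`:

```python
from bs4 import BeautifulSoup  # third-party package: beautifulsoup4

with open("page.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f.read(), "html.parser")

plain_text = soup.get_text(separator="\n", strip=True)  # formatting tags removed, text only
print(plain_text[:300])
```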
|
||||
|
||||
### Word parser
|
||||
|
||||
A Word parser outputs `json`, preserving the original document structure information, including titles, paragraphs, tables, headers, and footers.
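To visualize the kind of structure involved, a simplified sketch with the python-docx package (illustrative only; the actual JSON schema is RAGFlow's own) for a hypothetical `report.docx`:

```python
from docx import Document  # third-party package: python-docx

doc = Document("report.docx")  # hypothetical input file
structure = {
    "paragraphs": [p.text for p in doc.paragraphs if p.text.strip()],
    "tables": [[[cell.text for cell in row.cells] for row in table.rows] for table in doc.tables],
}
print(len(structure["paragraphs"]), "paragraphs,", len(structure["tables"]), "tables")
```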
|
||||
|
||||
### PowerPoint (PPT) parser
|
||||
|
||||
A PowerPoint parser extracts content from PowerPoint files into `json`, processing each slide individually and distinguishing between its title, body text, and notes.
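A comparable sketch for slides, using python-pptx (illustrative only, with a hypothetical `deck.pptx`), separates each slide's title, body text, and notes:

```python
from pptx import Presentation  # third-party package: python-pptx

prs = Presentation("deck.pptx")  # hypothetical input file
slides = []
for slide in prs.slides:
    title = slide.shapes.title.text if slide.shapes.title else ""
    body = [s.text_frame.text for s in slide.shapes if s.has_text_frame and s != slide.shapes.title]
    notes = slide.notes_slide.notes_text_frame.text if slide.has_notes_slide else ""
    slides.append({"title": title, "body": body, "notes": notes})
print(len(slides), "slides parsed")
```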
|
||||
|
||||
### Audio parser
|
||||
|
||||
An Audio parser transcribes audio files to text. To use this parser, you must first configure an ASR model on the **Model provider** page.
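Purely as an illustration of ASR transcription (RAGFlow uses whichever ASR model you configure on the **Model provider** page, not necessarily this package), the open-source `openai-whisper` package could transcribe a hypothetical `meeting.mp3` like this:

```python
import whisper  # third-party package: openai-whisper (also requires ffmpeg)

model = whisper.load_model("base")        # small general-purpose ASR model
result = model.transcribe("meeting.mp3")  # hypothetical audio file
print(result["text"][:300])
```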
|
||||
|
||||
### Video parser
|
||||
|
||||
A Video parser transcribes video files to text. To use this parser, you must first configure a VLM model on the **Model provider** page.
|
||||
|
||||
## Output
|
||||
|
||||
The following are the global variable names for the output of the **Parser** component; they can be referenced by subsequent components in the ingestion pipeline.
|
||||
|
||||
| Variable name | Type |
|
||||
| ------------- | ------------------------ |
|
||||
| `markdown` | `string` |
|
||||
| `text` | `string` |
|
||||
| `html` | `string` |
|
||||
| `json` | `Array<Object>` |
|
||||
|
|
|
|||
|
|
@@ -76,13 +76,8 @@ You can also change a file's chunking method on the **Files** page.
|
|||
|
||||
An embedding model converts chunks into embeddings. It cannot be changed once the dataset has chunks. To switch to a different embedding model, you must delete all existing chunks in the dataset. The reason is that all files in a specific dataset *must* be converted to embeddings using the *same* embedding model, so that they are compared in the same embedding space.
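A tiny sketch of why this matters: similarity scores are only meaningful between vectors produced by the same model, in the same embedding space (the numbers below are toy values, not real embeddings):

```python
import numpy as np

# Toy vectors standing in for embeddings from ONE model (same dimension, same space).
a = np.array([0.12, 0.98, 0.05])
b = np.array([0.10, 0.95, 0.07])
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(round(cosine, 3))  # comparable, because both come from the same model

# An embedding from a DIFFERENT model typically has another dimension and scale,
# so comparing it with `a` is meaningless (and here would not even compute):
c = np.random.rand(768)
# a @ c  # ValueError: shapes are not aligned
```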
|
||||
|
||||
The following embedding models can be deployed locally:
|
||||
|
||||
- BAAI/bge-large-zh-v1.5
|
||||
- maidalun1020/bce-embedding-base_v1
|
||||
|
||||
:::danger IMPORTANT
|
||||
These two embedding models are optimized specifically for English and Chinese, so performance may be compromised if you use them to embed documents in other languages.
|
||||
Some embedding models are optimized for specific languages, so performance may be compromised if you use them to embed documents in other languages.
|
||||
:::
|
||||
|
||||
### Upload file
|
||||
|
|
|
|||
|
|
@@ -33,27 +33,61 @@ RAGFlow isn't one-size-fits-all. It is built for flexibility and supports deeper
|
|||
|
||||
2. Select the option that works best with your scenario:
|
||||
|
||||
- DeepDoc: (Default) The default visual model performing OCR, TSR, and DLR tasks on PDFs, which can be time-consuming.
|
||||
- DeepDoc: (Default) The default visual model; it performs OCR, TSR, and DLR tasks on PDFs but can be time-consuming.
|
||||
- Naive: Skip OCR, TSR, and DLR tasks if *all* your PDFs are plain text.
|
||||
- MinerU: An experimental feature.
|
||||
- A third-party visual model provided by a specific model provider.
|
||||
- [MinerU](https://github.com/opendatalab/MinerU): (Experimental) An open-source tool that converts PDF into machine-readable formats.
|
||||
- [Docling](https://github.com/docling-project/docling): (Experimental) An open-source document processing tool for gen AI.
|
||||
- A third-party visual model from a specific model provider.
|
||||
|
||||
:::danger IMPORTANT
|
||||
MinerU PDF document parsing is available starting from v0.21.1. To use this feature, follow these steps:
|
||||
MinerU PDF document parsing is available starting from v0.22.0. RAGFlow supports MinerU (>= 2.6.3) as an optional PDF parser with multiple backends. RAGFlow acts only as a client for MinerU, calling it to parse documents, reading the output files, and ingesting the parsed content. To use this feature, follow these steps:
|
||||
|
||||
1. Before deploying ragflow-server, update your **docker/.env** file:
|
||||
- Enable `HF_ENDPOINT=https://hf-mirror.com`
|
||||
- Add a MinerU entry: `MINERU_EXECUTABLE=/ragflow/uv_tools/.venv/bin/mineru`
|
||||
1. Prepare MinerU:
|
||||
|
||||
2. Start the ragflow-server and run the following commands inside the container:
|
||||
- **If you deploy RAGFlow from source**, install MinerU into an isolated virtual environment (recommended path: `$HOME/uv_tools`):
|
||||
|
||||
```bash
|
||||
mkdir uv_tools
|
||||
cd uv_tools
|
||||
uv venv .venv
|
||||
source .venv/bin/activate
|
||||
uv pip install -U "mineru[core]" -i https://mirrors.aliyun.com/pypi/simple
|
||||
```
|
||||
```bash
|
||||
mkdir -p "$HOME/uv_tools"
|
||||
cd "$HOME/uv_tools"
|
||||
uv venv .venv
|
||||
source .venv/bin/activate
|
||||
uv pip install -U "mineru[core]" -i https://mirrors.aliyun.com/pypi/simple
|
||||
# or
|
||||
# uv pip install -U "mineru[all]" -i https://mirrors.aliyun.com/pypi/simple
|
||||
```
|
||||
|
||||
- **If you deploy RAGFlow with Docker**, you usually only need to turn on MinerU support in `docker/.env`:
|
||||
|
||||
```bash
|
||||
# docker/.env
|
||||
...
|
||||
USE_MINERU=true
|
||||
...
|
||||
```
|
||||
|
||||
Enabling `USE_MINERU=true` will internally perform the same setup as the manual configuration (including setting the MinerU executable path and related environment variables). You only need the manual installation above if you are running from source or want full control over the MinerU installation.
|
||||
|
||||
2. Start RAGFlow with MinerU enabled:
|
||||
|
||||
- **Source deployment** – in the RAGFlow repo, export the key MinerU-related variables and start the backend service:
|
||||
|
||||
```bash
|
||||
# in RAGFlow repo
|
||||
export MINERU_EXECUTABLE="$HOME/uv_tools/.venv/bin/mineru"
|
||||
export MINERU_DELETE_OUTPUT=0 # keep output directory
|
||||
export MINERU_BACKEND=pipeline # or another backend you prefer
|
||||
|
||||
source .venv/bin/activate
|
||||
export PYTHONPATH=$(pwd)
|
||||
bash docker/launch_backend_service.sh
|
||||
```
|
||||
|
||||
- **Docker deployment** – after setting `USE_MINERU=true`, restart the containers so that the new settings take effect:
|
||||
|
||||
```bash
|
||||
# in RAGFlow repo
|
||||
docker compose -f docker/docker-compose.yml restart
|
||||
```
|
||||
|
||||
3. Restart the ragflow-server.
|
||||
4. In the web UI, navigate to the **Configuration** page of your dataset. Click **Built-in** in the **Ingestion pipeline** section, select a chunking method that supports PDF parsing from the **Built-in** dropdown, and select **MinerU** as the **PDF parser**.
|
||||
|
|
|
|||
|
|
@@ -189,10 +189,6 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
|
|||
|
||||
3. Use the pre-built Docker images and start up the server:
|
||||
|
||||
:::tip NOTE
|
||||
The command below downloads the `v0.21.1-slim` edition of the RAGFlow Docker image. Refer to the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.21.1-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1` for the full edition `v0.21.1`.
|
||||
:::
|
||||
|
||||
```bash
|
||||
# Use CPU for embedding and DeepDoc tasks:
|
||||
$ docker compose -f docker-compose.yml up -d
|
||||
|
|
@@ -202,11 +198,10 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
|
|||
<APITable>
|
||||
```
|
||||
|
||||
| RAGFlow image tag | Image size (GB) | Has embedding models and Python packages? | Stable? |
|
||||
| ------------------- | --------------- | ----------------------------------------- | ------------------------ |
|
||||
| v0.21.1 | ≈9 | ✔️ | Stable release |
|
||||
| v0.21.1-slim | ≈2 | ❌ | Stable release |
|
||||
| nightly | ≈2 | ❌ | _Unstable_ nightly build |
|
||||
| RAGFlow image tag | Image size (GB) | Stable? |
|
||||
| ------------------- | --------------- | ------------------------ |
|
||||
| v0.21.1 | ≈2 | Stable release |
|
||||
| nightly | ≈2 | _Unstable_ nightly build |
|
||||
|
||||
```mdx-code-block
|
||||
</APITable>
|
||||
|
|
@@ -222,7 +217,7 @@ These two embedding models are optimized specifically for English and Chinese, s
|
|||
:::
|
||||
|
||||
:::tip NOTE
|
||||
The image size shown refers to the size of the *downloaded* Docker image, which is compressed. When Docker runs the image, it unpacks it, resulting in significantly greater disk usage. For example, a slim edition image will expand to around 7 GB once unpacked.
|
||||
The image size shown refers to the size of the *downloaded* Docker image, which is compressed. When Docker runs the image, it unpacks it, resulting in significantly greater disk usage. A Docker image will expand to around 7 GB once unpacked.
|
||||
:::
|
||||
|
||||
4. Check the server status after having the server up and running:
|
||||
|
|
|
|||
|
|
@@ -7,21 +7,6 @@ slug: /release_notes
|
|||
|
||||
Key features, improvements and bug fixes in the latest releases.
|
||||
|
||||
:::info
|
||||
Each RAGFlow release is available in two editions:
|
||||
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.21.1-slim`
|
||||
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.21.1`
|
||||
:::
|
||||
|
||||
:::danger IMPORTANT
|
||||
The embedding models included in a full edition are:
|
||||
|
||||
- BAAI/bge-large-zh-v1.5
|
||||
- maidalun1020/bce-embedding-base_v1
|
||||
|
||||
These two embedding models are optimized specifically for English and Chinese, so performance may be compromised if you use them to embed documents in other languages.
|
||||
:::
|
||||
|
||||
## v0.21.1
|
||||
|
||||
Released on October 23, 2025.
|
||||
|
|
|
|||
|
|
@@ -97,7 +97,7 @@ class RecursiveAbstractiveProcessing4TreeOrganizedRetrieval:
|
|||
async def __call__(self, chunks, random_state, callback=None, task_id: str = ""):
|
||||
if len(chunks) <= 1:
|
||||
return []
|
||||
chunks = [(s, a) for s, a in chunks if s and a and len(a) > 0]
|
||||
chunks = [(s, a) for s, a in chunks if s and a is not None and len(a) > 0]
|
||||
layers = [(0, len(chunks))]
|
||||
start, end = 0, len(chunks)
|
||||
|
||||
|
|
|
|||
|
|
@@ -647,7 +647,7 @@ async def run_raptor_for_kb(row, kb_parser_config, chat_mdl, embd_mdl, vector_si
|
|||
|
||||
res = []
|
||||
tk_count = 0
|
||||
async def generate(chunks):
|
||||
async def generate(chunks, did):
|
||||
nonlocal tk_count, res
|
||||
raptor = Raptor(
|
||||
raptor_config.get("max_cluster", 64),
|
||||
|
|
@@ -660,7 +660,7 @@ async def run_raptor_for_kb(row, kb_parser_config, chat_mdl, embd_mdl, vector_si
|
|||
original_length = len(chunks)
|
||||
chunks = await raptor(chunks, kb_parser_config["raptor"]["random_seed"], callback, row["id"])
|
||||
doc = {
|
||||
"doc_id": fake_doc_id,
|
||||
"doc_id": did,
|
||||
"kb_id": [str(row["kb_id"])],
|
||||
"docnm_kwd": row["name"],
|
||||
"title_tks": rag_tokenizer.tokenize(row["name"]),
|
||||
|
|
@@ -688,9 +688,8 @@ async def run_raptor_for_kb(row, kb_parser_config, chat_mdl, embd_mdl, vector_si
|
|||
fields=["content_with_weight", vctr_nm],
|
||||
sort_by_position=True):
|
||||
chunks.append((d["content_with_weight"], np.array(d[vctr_nm])))
|
||||
callback(progress=(x+1.)/len(doc_ids))
|
||||
await generate(chunks)
|
||||
|
||||
await generate(chunks, doc_id)
|
||||
callback(prog=(x+1.)/len(doc_ids))
|
||||
else:
|
||||
chunks = []
|
||||
for doc_id in doc_ids:
|
||||
|
|
@@ -699,7 +698,7 @@ async def run_raptor_for_kb(row, kb_parser_config, chat_mdl, embd_mdl, vector_si
|
|||
sort_by_position=True):
|
||||
chunks.append((d["content_with_weight"], np.array(d[vctr_nm])))
|
||||
|
||||
await generate(chunks)
|
||||
await generate(chunks, fake_doc_id)
|
||||
|
||||
return res, tk_count
|
||||
|
||||
|
|
|
|||
|
|
@@ -67,8 +67,10 @@ class Session(Base):
|
|||
or (self.__session_type == "chat" and json_data.get("data") is True)
|
||||
):
|
||||
return
|
||||
|
||||
yield self._structure_answer(json_data)
|
||||
if self.__session_type == "agent":
|
||||
yield self._structure_answer(json_data)
|
||||
else:
|
||||
yield self._structure_answer(json_data["data"])
|
||||
else:
|
||||
try:
|
||||
json_data = res.json()
|
||||
|
|
|
|||
|
|
@@ -25,15 +25,7 @@
|
|||
_Replace `[YOUR_MACHINE_IP]` with your actual machine IP address (e.g., `http://192.168.1.49:9222`)._
|
||||
|
||||
|
||||
## Shutdown front-end
|
||||
|
||||
Press Ctrl + C, or run:
|
||||
|
||||
```bash
|
||||
pkill -f "umi dev"
|
||||
```
|
||||
|
||||
## Access admin UI
|
||||
## Log in to the RAGFlow web admin UI
|
||||
|
||||
Open your browser and navigate to:
|
||||
|
||||
|
|
@@ -44,3 +36,10 @@
|
|||
_Replace `[YOUR_MACHINE_IP]` with your actual machine IP address (e.g., `http://192.168.1.49:9222/admin`)._
|
||||
|
||||
|
||||
## Shutdown front-end
|
||||
|
||||
Press Ctrl + C, or run:
|
||||
|
||||
```bash
|
||||
pkill -f "umi dev"
|
||||
```
|
||||
|
|
@@ -1925,7 +1925,7 @@ Important structured information may include: names, dates, locations, events, k
|
|||
},
|
||||
admin: {
|
||||
loginTitle: 'Admin Console',
|
||||
title: 'RAGFlow admin',
|
||||
title: 'RAGFlow',
|
||||
confirm: 'Confirm',
|
||||
close: 'Close',
|
||||
yes: 'Yes',
|
||||
|
|
|
|||
|
|
@@ -384,21 +384,21 @@ function AdminServiceStatus() {
|
|||
{/* Extra info modal*/}
|
||||
<Dialog open={extraInfoModalOpen} onOpenChange={setExtraInfoModalOpen}>
|
||||
<DialogContent
|
||||
className="flex flex-col max-h-[calc(100vh-4rem)] p-0 overflow-hidden"
|
||||
className="flex flex-col max-h-[calc(100vh-4rem)] overflow-hidden"
|
||||
onAnimationEnd={() => {
|
||||
if (!extraInfoModalOpen) {
|
||||
setItemToMakeAction(null);
|
||||
}
|
||||
}}
|
||||
>
|
||||
<DialogHeader className="p-6 border-b-0.5 border-border-button">
|
||||
<DialogHeader>
|
||||
<DialogTitle>{t('admin.extraInfo')}</DialogTitle>
|
||||
</DialogHeader>
|
||||
|
||||
<DialogDescription className="sr-only" />
|
||||
|
||||
<ScrollArea className="h-0 flex-1 grid">
|
||||
<div className="px-12">
|
||||
<div className="px-6">
|
||||
<JsonView
|
||||
src={itemToMakeAction?.extra ?? {}}
|
||||
className="rounded-lg p-4 bg-bg-card break-words text-text-secondary"
|
||||
|
|
@@ -406,7 +406,7 @@ function AdminServiceStatus() {
|
|||
</div>
|
||||
</ScrollArea>
|
||||
|
||||
<DialogFooter className="flex justify-end gap-4 px-12 pt-4 pb-8">
|
||||
<DialogFooter className="flex justify-end gap-4 px-6 py-4">
|
||||
<Button
|
||||
className="px-4 h-10 dark:border-border-button"
|
||||
variant="outline"
|
||||
|
|
@@ -421,7 +421,7 @@ function AdminServiceStatus() {
|
|||
{/* Service details modal */}
|
||||
<Dialog open={detailModalOpen} onOpenChange={setDetailModalOpen}>
|
||||
<DialogContent
|
||||
className="flex flex-col max-h-[calc(100vh-4rem)] max-w-6xl p-0 overflow-hidden"
|
||||
className="flex flex-col max-h-[calc(100vh-4rem)] max-w-6xl overflow-hidden"
|
||||
onAnimationEnd={() => {
|
||||
if (!detailModalOpen) {
|
||||
setItemToMakeAction(null);
|
||||
|
|
@@ -443,7 +443,7 @@ function AdminServiceStatus() {
|
|||
<DialogDescription className="sr-only" />
|
||||
|
||||
<ScrollArea className="h-0 flex-1 text-text-secondary grid">
|
||||
<div className="px-12">
|
||||
<div className="px-6">
|
||||
{itemToMakeAction?.service_type === 'task_executor' ? (
|
||||
<TaskExecutorDetail
|
||||
content={
|
||||
|
|
@@ -456,7 +456,7 @@ function AdminServiceStatus() {
|
|||
</div>
|
||||
</ScrollArea>
|
||||
|
||||
<DialogFooter className="flex justify-end gap-4 px-12 pt-4 pb-8">
|
||||
<DialogFooter className="flex justify-end gap-4 px-6 py-4">
|
||||
<Button
|
||||
className="px-4 h-10 dark:border-border-button"
|
||||
variant="outline"
|
||||
|
|
|
|||
|
|
@@ -89,7 +89,7 @@ function DataOperationsForm({ node }: INextOperatorForm) {
|
|||
<QueryVariableList
|
||||
tooltip={t('flow.queryTip')}
|
||||
label={t('flow.query')}
|
||||
types={[JsonSchemaDataType.Array, JsonSchemaDataType.Object]}
|
||||
types={[JsonSchemaDataType.Object]}
|
||||
></QueryVariableList>
|
||||
<Separator />
|
||||
<RAGFlowFormItem name="operations" label={t('flow.operations')}>
|
||||
|
|
|
|||
|
|
@@ -14,7 +14,7 @@ import { useForm } from 'react-hook-form';
|
|||
|
||||
const formSchema = z.object({
|
||||
title: z.string().min(1, {}),
|
||||
avatar: z.string().optional(),
|
||||
avatar: z.string().optional().nullable(),
|
||||
description: z.string().optional().nullable(),
|
||||
permission: z.string(),
|
||||
});
|
||||
|
|
|
|||