commit 4a049bc84f
125 changed files with 1920 additions and 394 deletions

.gitignore (vendored, 3 changes)

@@ -49,6 +49,7 @@ inputs/
rag_storage/
examples/input/
examples/output/
output*/

# Miscellaneous
.DS_Store

@@ -59,6 +60,8 @@ ignore_this.txt
# Project-specific files
dickens*/
book.txt
LightRAG.pdf
download_models_hf.py
lightrag-dev/
gui/
README-zh.md (55 changes)

@@ -4,6 +4,7 @@

## 🎉 News

- [X] [2025.06.05]🎯📢LightRAG now integrates RAG-Anything, supporting comprehensive multimodal document parsing and RAG capabilities (PDFs, images, Office documents, tables, formulas, and more). See the [multimodal processing section](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#多模态文档处理rag-anything集成) below for details.
- [X] [2025.03.18]🎯📢LightRAG now supports citation functionality.
- [X] [2025.02.05]🎯📢Our team has released [VideoRAG](https://github.com/HKUDS/VideoRAG) for understanding extremely long-context videos.
- [X] [2025.01.13]🎯📢Our team has released [MiniRAG](https://github.com/HKUDS/MiniRAG), simplifying RAG with small models.

@@ -1002,6 +1003,60 @@ rag.merge_entities(

</details>

## Multimodal Document Processing (RAG-Anything Integration)

LightRAG now integrates seamlessly with [RAG-Anything](https://github.com/HKUDS/RAG-Anything), an **All-in-One Multimodal Document Processing RAG system** built specifically for LightRAG. RAG-Anything provides advanced parsing and retrieval-augmented generation (RAG) capabilities, letting you process multimodal documents seamlessly and extract structured content (text, images, tables, and formulas) from a wide range of document formats for integration into your RAG pipeline.

**Key Features:**
- **End-to-End Multimodal Pipeline**: Complete workflow from document ingestion and parsing to intelligent multimodal query answering
- **Universal Document Support**: Seamless processing of PDFs, Office documents (DOC/DOCX/PPT/PPTX/XLS/XLSX), images, and diverse file formats
- **Specialized Content Analysis**: Dedicated processors for images, tables, mathematical equations, and heterogeneous content types
- **Multimodal Knowledge Graph**: Automatic entity extraction and cross-modal relationship discovery for enhanced understanding
- **Hybrid Intelligent Retrieval**: Advanced search spanning textual and multimodal content with contextual understanding

**Quick Start:**
1. Install RAG-Anything:
```bash
pip install raganything
```
2. Process multimodal documents:
```python
import asyncio
from raganything import RAGAnything
from lightrag.llm.openai import openai_complete_if_cache, openai_embed

async def main():
    # Initialize RAGAnything with LightRAG integration
    rag = RAGAnything(
        working_dir="./rag_storage",
        llm_model_func=lambda prompt, **kwargs: openai_complete_if_cache(
            "gpt-4o-mini", prompt, api_key="your-api-key", **kwargs
        ),
        embedding_func=lambda texts: openai_embed(
            texts, model="text-embedding-3-large", api_key="your-api-key"
        ),
        embedding_dim=3072,
    )

    # Process multimodal documents
    await rag.process_document_complete(
        file_path="path/to/your/document.pdf",
        output_dir="./output"
    )

    # Query multimodal content
    result = await rag.query_with_multimodal(
        "What are the main findings shown in the figures?",
        mode="hybrid"
    )
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
```

For detailed documentation and advanced usage, please refer to the [RAG-Anything repository](https://github.com/HKUDS/RAG-Anything).

## Token Usage Tracking

<details>
README.md (95 changes)

@@ -39,7 +39,8 @@

</div>

## 🎉 News

- [X] [2025.06.16]🎯📢Our team has released [RAG-Anything](https://github.com/HKUDS/RAG-Anything), an All-in-One Multimodal RAG System for seamless text, image, table, and equation processing.
- [X] [2025.06.05]🎯📢LightRAG now supports comprehensive multimodal data handling through [RAG-Anything](https://github.com/HKUDS/RAG-Anything) integration, enabling seamless document parsing and RAG capabilities across diverse formats including PDFs, images, Office documents, tables, and formulas. Please refer to the new [multimodal section](https://github.com/HKUDS/LightRAG/?tab=readme-ov-file#multimodal-document-processing-rag-anything-integration) for details.
- [X] [2025.03.18]🎯📢LightRAG now supports citation functionality, enabling proper source attribution.
- [X] [2025.02.05]🎯📢Our team has released [VideoRAG](https://github.com/HKUDS/VideoRAG) for understanding extremely long-context videos.
- [X] [2025.01.13]🎯📢Our team has released [MiniRAG](https://github.com/HKUDS/MiniRAG), making RAG simpler with small models.
@@ -149,6 +150,12 @@ For a streaming response implementation example, please see `examples/lightrag_o

> If you would like to integrate LightRAG into your project, we recommend utilizing the REST API provided by the LightRAG Server. LightRAG Core is typically intended for embedded applications or for researchers who wish to conduct studies and evaluations.

### ⚠️ Important: Initialization Requirements

**LightRAG requires explicit initialization before use.** You must call both `await rag.initialize_storages()` and `await initialize_pipeline_status()` after creating a LightRAG instance; otherwise you will encounter errors like:
- `AttributeError: __aenter__` - if storages are not initialized
- `KeyError: 'history_messages'` - if pipeline status is not initialized

### A Simple Program

Use the Python snippet below to initialize LightRAG, insert text into it, and perform queries:
@@ -173,8 +180,9 @@ async def initialize_rag():

        embedding_func=openai_embed,
        llm_model_func=gpt_4o_mini_complete,
    )
    # IMPORTANT: Both initialization calls are required!
    await rag.initialize_storages()  # Initialize storage backends
    await initialize_pipeline_status()  # Initialize processing pipeline
    return rag

async def main():
@@ -1051,6 +1059,60 @@ When merging entities:

</details>

## Multimodal Document Processing (RAG-Anything Integration)

LightRAG now seamlessly integrates with [RAG-Anything](https://github.com/HKUDS/RAG-Anything), a comprehensive **All-in-One Multimodal Document Processing RAG system** built specifically for LightRAG. RAG-Anything enables advanced parsing and retrieval-augmented generation (RAG) capabilities, allowing you to handle multimodal documents seamlessly and extract structured content—including text, images, tables, and formulas—from various document formats for integration into your RAG pipeline.

**Key Features:**
- **End-to-End Multimodal Pipeline**: Complete workflow from document ingestion and parsing to intelligent multimodal query answering
- **Universal Document Support**: Seamless processing of PDFs, Office documents (DOC/DOCX/PPT/PPTX/XLS/XLSX), images, and diverse file formats
- **Specialized Content Analysis**: Dedicated processors for images, tables, mathematical equations, and heterogeneous content types
- **Multimodal Knowledge Graph**: Automatic entity extraction and cross-modal relationship discovery for enhanced understanding
- **Hybrid Intelligent Retrieval**: Advanced search capabilities spanning textual and multimodal content with contextual understanding

**Quick Start:**
1. Install RAG-Anything:
```bash
pip install raganything
```
2. Process multimodal documents:
```python
import asyncio
from raganything import RAGAnything
from lightrag.llm.openai import openai_complete_if_cache, openai_embed

async def main():
    # Initialize RAGAnything with LightRAG integration
    rag = RAGAnything(
        working_dir="./rag_storage",
        llm_model_func=lambda prompt, **kwargs: openai_complete_if_cache(
            "gpt-4o-mini", prompt, api_key="your-api-key", **kwargs
        ),
        embedding_func=lambda texts: openai_embed(
            texts, model="text-embedding-3-large", api_key="your-api-key"
        ),
        embedding_dim=3072,
    )

    # Process multimodal documents
    await rag.process_document_complete(
        file_path="path/to/your/document.pdf",
        output_dir="./output"
    )

    # Query multimodal content
    result = await rag.query_with_multimodal(
        "What are the main findings shown in the figures and tables?",
        mode="hybrid"
    )
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
```

For detailed documentation and advanced usage, please refer to the [RAG-Anything repository](https://github.com/HKUDS/RAG-Anything).

## Token Usage Tracking

<details>
@@ -1475,6 +1537,33 @@ Thank you to all our contributors!

<img src="https://contrib.rocks/image?repo=HKUDS/LightRAG" />
</a>

## Troubleshooting

### Common Initialization Errors

If you encounter these errors when using LightRAG:

1. **`AttributeError: __aenter__`**
   - **Cause**: Storage backends not initialized
   - **Solution**: Call `await rag.initialize_storages()` after creating the LightRAG instance

2. **`KeyError: 'history_messages'`**
   - **Cause**: Pipeline status not initialized
   - **Solution**: Call `await initialize_pipeline_status()` after initializing storages

3. **Both errors in sequence**
   - **Cause**: Neither initialization method was called
   - **Solution**: Always follow this pattern:
     ```python
     rag = LightRAG(...)
     await rag.initialize_storages()
     await initialize_pipeline_status()
     ```

### Model Switching Issues
When switching between different embedding models, you must clear the data directory to avoid errors. The only file you may want to preserve is `kv_store_llm_response_cache.json` if you wish to retain the LLM cache.
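This cleanup can be scripted. The sketch below is a hypothetical helper, not part of LightRAG: it deletes everything in a working directory except the LLM cache file named in the paragraph above.

```python
import shutil
from pathlib import Path


def reset_working_dir(working_dir: str, keep: str = "kv_store_llm_response_cache.json") -> None:
    """Delete all storage files in working_dir except the LLM response cache."""
    root = Path(working_dir)
    if not root.exists():
        return
    for entry in root.iterdir():
        if entry.name == keep:
            continue  # preserve the LLM cache across embedding-model switches
        if entry.is_dir():
            shutil.rmtree(entry)
        else:
            entry.unlink()


# reset_working_dir("./rag_storage")  # then re-index with the new embedding model
```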
## 🌟Citation

docs/LightRAG_concurrent_explain.md (new file, 281 lines)

@@ -0,0 +1,281 @@
# LightRAG Multi-Document Processing: Concurrent Control Strategy Analysis

LightRAG employs a multi-layered concurrent control strategy when processing multiple documents. This article provides an in-depth analysis of the concurrent control mechanisms at the document, chunk, and LLM request levels, helping you understand why specific concurrent behaviors occur.

## Overview

LightRAG's concurrent control is divided into three layers:

1. **Document-level concurrency**: Controls the number of documents processed simultaneously
2. **Chunk-level concurrency**: Controls the number of chunks processed simultaneously within a single document
3. **LLM request-level concurrency**: Controls the global number of concurrent LLM requests

## 1. Document-Level Concurrent Control

**Control Parameter**: `max_parallel_insert`

Document-level concurrency is controlled by the `max_parallel_insert` parameter, with a default value of 2.

```python
# lightrag/lightrag.py
max_parallel_insert: int = field(default=int(os.getenv("MAX_PARALLEL_INSERT", 2)))
```

### Implementation Mechanism

In the `apipeline_process_enqueue_documents` method, a semaphore is used to control document concurrency:

```python
# lightrag/lightrag.py - apipeline_process_enqueue_documents method
async def process_document(
    doc_id: str,
    status_doc: DocProcessingStatus,
    split_by_character: str | None,
    split_by_character_only: bool,
    pipeline_status: dict,
    pipeline_status_lock: asyncio.Lock,
    semaphore: asyncio.Semaphore,  # Document-level semaphore
) -> None:
    """Process a single document"""
    async with semaphore:  # 🔥 Document-level concurrent control
        ...  # Process all chunks of a single document

# Create the document-level semaphore
semaphore = asyncio.Semaphore(self.max_parallel_insert)  # Default 2

# Create a processing task for each document
doc_tasks = []
for doc_id, status_doc in to_process_docs.items():
    doc_tasks.append(
        process_document(
            doc_id, status_doc, split_by_character, split_by_character_only,
            pipeline_status, pipeline_status_lock, semaphore
        )
    )

# Wait for all documents to finish processing
await asyncio.gather(*doc_tasks)
```

## 2. Chunk-Level Concurrent Control

**Control Parameter**: `llm_model_max_async`

**Key Point**: Each document independently creates its own chunk semaphore!

```python
# lightrag/lightrag.py
llm_model_max_async: int = field(default=int(os.getenv("MAX_ASYNC", 4)))
```

### Implementation Mechanism

In the `extract_entities` function, **each document independently creates** its own chunk semaphore:

```python
# lightrag/operate.py - extract_entities function
async def extract_entities(chunks: dict[str, TextChunkSchema], global_config: dict[str, str], ...):
    # 🔥 Key: Each document independently creates this semaphore!
    llm_model_max_async = global_config.get("llm_model_max_async", 4)
    semaphore = asyncio.Semaphore(llm_model_max_async)  # Chunk semaphore for each document

    async def _process_with_semaphore(chunk):
        async with semaphore:  # 🔥 Chunk-level concurrent control within the document
            return await _process_single_content(chunk)

    # Create a task for each chunk
    tasks = []
    for c in ordered_chunks:
        task = asyncio.create_task(_process_with_semaphore(c))
        tasks.append(task)

    # Wait for all chunks to finish processing
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_EXCEPTION)
    chunk_results = [task.result() for task in tasks]
    return chunk_results
```

### Important Inference: System-Wide Chunk Concurrency

Since each document independently creates its chunk semaphore, the theoretical chunk concurrency of the system is:

**Theoretical chunk concurrency = max_parallel_insert × llm_model_max_async**

For example:

- `max_parallel_insert = 2` (process 2 documents simultaneously)
- `llm_model_max_async = 4` (at most 4 concurrent chunks per document)
- **Theoretical result**: At most 2 × 4 = 8 chunks simultaneously in the "processing" state
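This multiplication can be observed with a small self-contained asyncio sketch. It is illustrative only and does not use LightRAG: two "documents" at a time each create their own chunk semaphore of 4, so up to 8 chunks are in flight at once.

```python
import asyncio

MAX_PARALLEL_INSERT = 2   # document-level limit
LLM_MODEL_MAX_ASYNC = 4   # per-document chunk limit

active = 0  # chunks currently "processing"
peak = 0    # highest number observed at once


async def process_chunk():
    global active, peak
    active += 1
    peak = max(peak, active)
    await asyncio.sleep(0.05)  # stand-in for LLM work
    active -= 1


async def process_document(doc_sem: asyncio.Semaphore):
    async with doc_sem:                                     # document-level control
        chunk_sem = asyncio.Semaphore(LLM_MODEL_MAX_ASYNC)  # fresh semaphore per document!

        async def limited(chunk_id: int):
            async with chunk_sem:                           # chunk-level control
                await process_chunk()

        await asyncio.gather(*(limited(i) for i in range(10)))


async def main():
    doc_sem = asyncio.Semaphore(MAX_PARALLEL_INSERT)
    await asyncio.gather(*(process_document(doc_sem) for _ in range(3)))
    print(peak)  # up to MAX_PARALLEL_INSERT * LLM_MODEL_MAX_ASYNC = 8


asyncio.run(main())
```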
## 3. LLM Request-Level Concurrent Control (The Real Bottleneck)

**Control Parameter**: `llm_model_max_async` (globally shared)

**Key**: Although there may be 8 chunks "in processing", all LLM requests share the same global priority queue!

```python
# lightrag/lightrag.py - __post_init__ method
self.llm_model_func = priority_limit_async_func_call(self.llm_model_max_async)(
    partial(
        self.llm_model_func,
        hashing_kv=hashing_kv,
        **self.llm_model_kwargs,
    )
)
# 🔥 Global LLM queue size = llm_model_max_async = 4
```

### Priority Queue Implementation

```python
# lightrag/utils.py - priority_limit_async_func_call function
def priority_limit_async_func_call(max_size: int, max_queue_size: int = 1000):
    def final_decro(func):
        queue = asyncio.PriorityQueue(maxsize=max_queue_size)
        tasks = set()

        async def worker():
            """Worker that processes tasks in the priority queue"""
            while not shutdown_event.is_set():
                try:
                    priority, count, future, args, kwargs = await asyncio.wait_for(queue.get(), timeout=1.0)
                    result = await func(*args, **kwargs)  # 🔥 Actual LLM call
                    if not future.done():
                        future.set_result(result)
                except Exception as e:
                    ...  # Error handling
                finally:
                    queue.task_done()

        # 🔥 Create a fixed number of workers (max_size); this is the real concurrency limit
        for _ in range(max_size):
            task = asyncio.create_task(worker())
            tasks.add(task)
```
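The priority-ordering half of this pattern can be demonstrated on its own. The sketch below is a simplified stand-in for `priority_limit_async_func_call`, not LightRAG's actual code: a single worker drains an `asyncio.PriorityQueue`, and items with lower priority values are served first, with an insertion counter as the tie-breaker just like the `(priority, count, ...)` tuples above.

```python
import asyncio


async def main():
    queue: asyncio.PriorityQueue = asyncio.PriorityQueue()
    served = []

    async def worker():
        # Worker that processes tasks in priority order
        while True:
            priority, count, name = await queue.get()
            served.append(name)
            queue.task_done()

    # Enqueue out of order; the counter makes equal priorities FIFO
    for count, (priority, name) in enumerate([(5, "low"), (0, "query"), (2, "extract")]):
        queue.put_nowait((priority, count, name))

    w = asyncio.create_task(worker())
    await queue.join()  # wait until every item has been processed
    w.cancel()
    return served


order = asyncio.run(main())
print(order)  # lower priority numbers first: ['query', 'extract', 'low']
```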
## 4. Chunk Internal Processing Mechanism (Serial)

### Why Serial?

The internal processing of each chunk strictly follows this serial execution order:

```python
# lightrag/operate.py - _process_single_content function
async def _process_single_content(chunk_key_dp: tuple[str, TextChunkSchema]):
    # Step 1: Initial entity extraction
    hint_prompt = entity_extract_prompt.format(**{**context_base, "input_text": content})
    final_result = await use_llm_func_with_cache(hint_prompt, use_llm_func, ...)

    # Process initial extraction results
    maybe_nodes, maybe_edges = await _process_extraction_result(final_result, chunk_key, file_path)

    # Step 2: Gleaning phase
    for now_glean_index in range(entity_extract_max_gleaning):
        # 🔥 Serially wait for gleaning results
        glean_result = await use_llm_func_with_cache(
            continue_prompt, use_llm_func,
            llm_response_cache=llm_response_cache,
            history_messages=history, cache_type="extract"
        )

        # Process gleaning results
        glean_nodes, glean_edges = await _process_extraction_result(glean_result, chunk_key, file_path)

        # Merge results...

        # Step 3: Decide whether to continue the loop
        if now_glean_index == entity_extract_max_gleaning - 1:
            break

        # 🔥 Serially wait for the loop decision result
        if_loop_result = await use_llm_func_with_cache(
            if_loop_prompt, use_llm_func,
            llm_response_cache=llm_response_cache,
            history_messages=history, cache_type="extract"
        )

        if if_loop_result.strip().strip('"').strip("'").lower() != "yes":
            break

    return maybe_nodes, maybe_edges
```
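The serial chain above can be reduced to a minimal sketch (illustrative only, with a no-op coroutine standing in for the real LLM call): each `await` must complete before the next begins, so a chunk issues its LLM calls strictly one at a time.

```python
import asyncio

calls = []  # records the order of the simulated LLM calls


async def fake_llm(tag: str) -> str:
    calls.append(tag)
    await asyncio.sleep(0)  # stand-in for network latency
    return "yes"


async def process_chunk(max_gleaning: int = 1):
    await fake_llm("initial extraction")   # step 1
    for i in range(max_gleaning):
        await fake_llm("gleaning")         # step 2: starts only after step 1 finished
        if i == max_gleaning - 1:
            break
        await fake_llm("loop decision")    # step 3: starts only after step 2 finished


asyncio.run(process_chunk())
print(calls)  # ['initial extraction', 'gleaning']
```

Setting `max_gleaning=0` skips the loop entirely, which is why `entity_extract_max_gleaning=0` (mentioned in the tuning strategies below) removes serial steps from each chunk.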
## 5. Complete Concurrent Hierarchy Diagram

![LightRAG Indexing](../docs/assets/lightrag_indexing.png)

### Chunk Internal Processing (Serial)

```
Initial Extraction → Gleaning → Loop Decision → Complete
```

## 6. Real-World Scenario Analysis

### Scenario 1: Single Document with Multiple Chunks

Assume 1 document with 6 chunks:

- **Document level**: Only 1 document, so `max_parallel_insert` is not a limiting factor
- **Chunk level**: At most 4 chunks processed simultaneously (limited by `llm_model_max_async=4`)
- **LLM level**: At most 4 concurrent LLM requests globally

**Expected behavior**: 4 chunks are processed concurrently while the remaining 2 wait.

### Scenario 2: Multiple Documents with Multiple Chunks

Assume 3 documents, each with 10 chunks:

- **Document level**: At most 2 documents processed simultaneously
- **Chunk level**: At most 4 chunks per document processed simultaneously
- **Theoretical chunk concurrency**: 2 × 4 = 8 chunks processed simultaneously
- **Actual LLM concurrency**: Only 4 LLM requests actually execute

**Actual state distribution**:

```
# A possible system state:
Document 1: 4 chunks "processing" (2 executing LLM calls, 2 waiting for LLM responses)
Document 2: 4 chunks "processing" (2 executing LLM calls, 2 waiting for LLM responses)
Document 3: Waiting for the document-level semaphore

Total:
- 8 chunks in the "processing" state
- 4 LLM requests actually executing
- 4 chunks waiting for an LLM response
```

## 7. Performance Optimization Recommendations

### Understanding the Bottleneck

**The real bottleneck is the global LLM queue, not the chunk semaphores!**

### Adjustment Strategies

**Strategy 1: Increase LLM Concurrent Capacity**

```bash
# Environment variable configuration
export MAX_PARALLEL_INSERT=2  # Keep document concurrency
export MAX_ASYNC=8            # 🔥 Increase LLM request concurrency
```

**Strategy 2: Balance Document and LLM Concurrency**

```python
rag = LightRAG(
    max_parallel_insert=3,          # Moderately increase document concurrency
    llm_model_max_async=12,         # Significantly increase LLM concurrency
    entity_extract_max_gleaning=0,  # Reduce serial steps within chunks
)
```

## 8. Summary

Key characteristics of LightRAG's multi-document concurrent processing mechanism:

### Concurrency Layers

1. **Inter-document competition**: Controlled by `max_parallel_insert`; by default 2 documents run concurrently
2. **Theoretical chunk concurrency**: Each document creates its own semaphore, so the total = `max_parallel_insert × llm_model_max_async`
3. **Actual LLM concurrency**: All chunks share the global LLM queue, controlled by `llm_model_max_async`
4. **Intra-chunk seriality**: The multiple LLM requests within each chunk execute strictly serially

### Key Insights

- **Theoretical vs. actual**: The system may have many chunks "in processing", but only a few are actually executing LLM requests
- **Real bottleneck**: The global LLM request queue is the performance bottleneck, not the chunk semaphores
- **Optimization focus**: Increasing `llm_model_max_async` is more effective than increasing `max_parallel_insert`
docs/assets/lightrag_indexing.png (new binary file, 183 KiB)
Binary file not shown.
docs/zh/LightRAG_concurrent_explain_zh.md (new file, 277 lines)

@@ -0,0 +1,277 @@

# LightRAG 多文档并发控制机制详解

(The body of this file is the Chinese translation of docs/LightRAG_concurrent_explain.md above.)
examples/modalprocessors_example.py (new file, 224 lines)

@@ -0,0 +1,224 @@

```python
"""
Example of directly using modal processors

This example demonstrates how to use LightRAG's modal processors directly without going through MinerU.
"""

import asyncio
import argparse
from lightrag.llm.openai import openai_complete_if_cache, openai_embed
from lightrag.kg.shared_storage import initialize_pipeline_status
from lightrag import LightRAG
from raganything.modalprocessors import (
    ImageModalProcessor,
    TableModalProcessor,
    EquationModalProcessor,
)

WORKING_DIR = "./rag_storage"


def get_llm_model_func(api_key: str, base_url: str = None):
    return (
        lambda prompt,
        system_prompt=None,
        history_messages=[],
        **kwargs: openai_complete_if_cache(
            "gpt-4o-mini",
            prompt,
            system_prompt=system_prompt,
            history_messages=history_messages,
            api_key=api_key,
            base_url=base_url,
            **kwargs,
        )
    )


def get_vision_model_func(api_key: str, base_url: str = None):
    return (
        lambda prompt,
        system_prompt=None,
        history_messages=[],
        image_data=None,
        **kwargs: openai_complete_if_cache(
            "gpt-4o",
            "",
            system_prompt=None,
            history_messages=[],
            messages=[
                {"role": "system", "content": system_prompt} if system_prompt else None,
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": prompt},
                        {
                            "type": "image_url",
                            "image_url": {
                                "url": f"data:image/jpeg;base64,{image_data}"
                            },
                        },
                    ],
                }
                if image_data
                else {"role": "user", "content": prompt},
            ],
            api_key=api_key,
            base_url=base_url,
            **kwargs,
        )
        if image_data
        else openai_complete_if_cache(
            "gpt-4o-mini",
            prompt,
            system_prompt=system_prompt,
            history_messages=history_messages,
            api_key=api_key,
            base_url=base_url,
            **kwargs,
        )
    )


async def process_image_example(lightrag: LightRAG, vision_model_func):
    """Example of processing an image"""
    # Create the image processor
    image_processor = ImageModalProcessor(
        lightrag=lightrag, modal_caption_func=vision_model_func
    )

    # Prepare image content
    image_content = {
        "img_path": "image.jpg",
        "img_caption": ["Example image caption"],
        "img_footnote": ["Example image footnote"],
    }

    # Process the image
    description, entity_info = await image_processor.process_multimodal_content(
        modal_content=image_content,
        content_type="image",
        file_path="image_example.jpg",
        entity_name="Example Image",
    )

    print("Image Processing Results:")
    print(f"Description: {description}")
    print(f"Entity Info: {entity_info}")


async def process_table_example(lightrag: LightRAG, llm_model_func):
    """Example of processing a table"""
    # Create the table processor
    table_processor = TableModalProcessor(
        lightrag=lightrag, modal_caption_func=llm_model_func
    )

    # Prepare table content
    table_content = {
        "table_body": """
        | Name | Age | Occupation |
        |------|-----|------------|
        | John | 25  | Engineer   |
        | Mary | 30  | Designer   |
        """,
        "table_caption": ["Employee Information Table"],
        "table_footnote": ["Data updated as of 2024"],
    }

    # Process the table
    description, entity_info = await table_processor.process_multimodal_content(
        modal_content=table_content,
        content_type="table",
        file_path="table_example.md",
        entity_name="Employee Table",
    )

    print("\nTable Processing Results:")
    print(f"Description: {description}")
    print(f"Entity Info: {entity_info}")


async def process_equation_example(lightrag: LightRAG, llm_model_func):
"""Example of processing a mathematical equation"""
|
||||
# Create equation processor
|
||||
equation_processor = EquationModalProcessor(
|
||||
lightrag=lightrag, modal_caption_func=llm_model_func
|
||||
)
|
||||
|
||||
# Prepare equation content
|
||||
equation_content = {"text": "E = mc^2", "text_format": "LaTeX"}
|
||||
|
||||
# Process equation
|
||||
description, entity_info = await equation_processor.process_multimodal_content(
|
||||
modal_content=equation_content,
|
||||
content_type="equation",
|
||||
file_path="equation_example.txt",
|
||||
entity_name="Mass-Energy Equivalence",
|
||||
)
|
||||
|
||||
print("\nEquation Processing Results:")
|
||||
print(f"Description: {description}")
|
||||
print(f"Entity Info: {entity_info}")
|
||||
|
||||
|
||||
async def initialize_rag(api_key: str, base_url: str = None):
|
||||
rag = LightRAG(
|
||||
working_dir=WORKING_DIR,
|
||||
embedding_func=lambda texts: openai_embed(
|
||||
texts,
|
||||
model="text-embedding-3-large",
|
||||
api_key=api_key,
|
||||
base_url=base_url,
|
||||
),
|
||||
llm_model_func=lambda prompt,
|
||||
system_prompt=None,
|
||||
history_messages=[],
|
||||
**kwargs: openai_complete_if_cache(
|
||||
"gpt-4o-mini",
|
||||
prompt,
|
||||
system_prompt=system_prompt,
|
||||
history_messages=history_messages,
|
||||
api_key=api_key,
|
||||
base_url=base_url,
|
||||
**kwargs,
|
||||
),
|
||||
)
|
||||
|
||||
await rag.initialize_storages()
|
||||
await initialize_pipeline_status()
|
||||
|
||||
return rag
|
||||
|
||||
|
||||
def main():
|
||||
"""Main function to run the example"""
|
||||
parser = argparse.ArgumentParser(description="Modal Processors Example")
|
||||
parser.add_argument("--api-key", required=True, help="OpenAI API key")
|
||||
parser.add_argument("--base-url", help="Optional base URL for API")
|
||||
parser.add_argument(
|
||||
"--working-dir", "-w", default=WORKING_DIR, help="Working directory path"
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
# Run examples
|
||||
asyncio.run(main_async(args.api_key, args.base_url))
|
||||
|
||||
|
||||
async def main_async(api_key: str, base_url: str = None):
|
||||
# Initialize LightRAG
|
||||
lightrag = await initialize_rag(api_key, base_url)
|
||||
|
||||
# Get model functions
|
||||
llm_model_func = get_llm_model_func(api_key, base_url)
|
||||
vision_model_func = get_vision_model_func(api_key, base_url)
|
||||
|
||||
# Run examples
|
||||
await process_image_example(lightrag, vision_model_func)
|
||||
await process_table_example(lightrag, llm_model_func)
|
||||
await process_equation_example(lightrag, llm_model_func)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
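The vision helper above wraps base64 image bytes in an OpenAI-style data-URL message. A minimal standalone sketch of that payload construction (the helper name is ours, not the library's; unlike the lambda above, it simply omits the system message when absent rather than inserting a `None` entry):

```python
import json


def build_vision_messages(prompt, image_data=None, system_prompt=None):
    """Illustrative helper (not part of LightRAG): builds the same
    OpenAI-style message list that get_vision_model_func assembles."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    if image_data:
        # Base64 payload becomes a data URL inside an image_url content part
        messages.append({
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
                },
            ],
        })
    else:
        messages.append({"role": "user", "content": prompt})
    return messages


msgs = build_vision_messages("Describe this figure", image_data="aGVsbG8=")
print(json.dumps(msgs, indent=2))
```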
154  examples/raganything_example.py  Normal file
@@ -0,0 +1,154 @@
#!/usr/bin/env python
"""
Example script demonstrating the integration of the MinerU parser with RAGAnything

This example shows how to:
1. Process parsed documents with RAGAnything
2. Perform multimodal queries on the processed documents
3. Handle different types of content (text, images, tables)
"""

import os
import argparse
import asyncio
from lightrag.llm.openai import openai_complete_if_cache, openai_embed
from raganything.raganything import RAGAnything


async def process_with_rag(
    file_path: str,
    output_dir: str,
    api_key: str,
    base_url: str = None,
    working_dir: str = None,
):
    """
    Process a document with RAGAnything

    Args:
        file_path: Path to the document
        output_dir: Output directory for RAG results
        api_key: OpenAI API key
        base_url: Optional base URL for the API
        working_dir: Optional working directory for RAG storage
    """
    try:
        # Initialize RAGAnything
        rag = RAGAnything(
            working_dir=working_dir,
            llm_model_func=lambda prompt,
            system_prompt=None,
            history_messages=[],
            **kwargs: openai_complete_if_cache(
                "gpt-4o-mini",
                prompt,
                system_prompt=system_prompt,
                history_messages=history_messages,
                api_key=api_key,
                base_url=base_url,
                **kwargs,
            ),
            vision_model_func=lambda prompt,
            system_prompt=None,
            history_messages=[],
            image_data=None,
            **kwargs: openai_complete_if_cache(
                "gpt-4o",
                "",
                system_prompt=None,
                history_messages=[],
                messages=[
                    {"role": "system", "content": system_prompt}
                    if system_prompt
                    else None,
                    {
                        "role": "user",
                        "content": [
                            {"type": "text", "text": prompt},
                            {
                                "type": "image_url",
                                "image_url": {
                                    "url": f"data:image/jpeg;base64,{image_data}"
                                },
                            },
                        ],
                    }
                    if image_data
                    else {"role": "user", "content": prompt},
                ],
                api_key=api_key,
                base_url=base_url,
                **kwargs,
            )
            if image_data
            else openai_complete_if_cache(
                "gpt-4o-mini",
                prompt,
                system_prompt=system_prompt,
                history_messages=history_messages,
                api_key=api_key,
                base_url=base_url,
                **kwargs,
            ),
            embedding_func=lambda texts: openai_embed(
                texts,
                model="text-embedding-3-large",
                api_key=api_key,
                base_url=base_url,
            ),
            embedding_dim=3072,
            max_token_size=8192,
        )

        # Process document
        await rag.process_document_complete(
            file_path=file_path, output_dir=output_dir, parse_method="auto"
        )

        # Example queries
        queries = [
            "What is the main content of the document?",
            "Describe the images and figures in the document",
            "Tell me about the experimental results and data tables",
        ]

        print("\nQuerying processed document:")
        for query in queries:
            print(f"\nQuery: {query}")
            result = await rag.query_with_multimodal(query, mode="hybrid")
            print(f"Answer: {result}")

    except Exception as e:
        print(f"Error processing with RAG: {str(e)}")


def main():
    """Main function to run the example"""
    parser = argparse.ArgumentParser(description="MinerU RAG Example")
    parser.add_argument("file_path", help="Path to the document to process")
    parser.add_argument(
        "--working_dir", "-w", default="./rag_storage", help="Working directory path"
    )
    parser.add_argument(
        "--output", "-o", default="./output", help="Output directory path"
    )
    parser.add_argument(
        "--api-key", required=True, help="OpenAI API key for RAG processing"
    )
    parser.add_argument("--base-url", help="Optional base URL for API")

    args = parser.parse_args()

    # Create output directory if specified
    if args.output:
        os.makedirs(args.output, exist_ok=True)

    # Process with RAG
    asyncio.run(
        process_with_rag(
            args.file_path, args.output, args.api_key, args.base_url, args.working_dir
        )
    )


if __name__ == "__main__":
    main()
@@ -1,5 +1,5 @@
 from .lightrag import LightRAG as LightRAG, QueryParam as QueryParam

-__version__ = "1.3.8"
+__version__ = "1.3.9"
 __author__ = "Zirui Guo"
 __url__ = "https://github.com/HKUDS/LightRAG"
@@ -1 +1 @@
-__api_version__ = "0171"
+__api_version__ = "0173"
@@ -1,7 +1,7 @@
 from fastapi import APIRouter, HTTPException, Request
 from pydantic import BaseModel
-from typing import List, Dict, Any, Optional
-import logging
+from typing import List, Dict, Any, Optional, Type
+from lightrag.utils import logger
 import time
 import json
 import re
@@ -95,6 +95,68 @@ class OllamaTagResponse(BaseModel):
     models: List[OllamaModel]


+class OllamaRunningModelDetails(BaseModel):
+    parent_model: str
+    format: str
+    family: str
+    families: List[str]
+    parameter_size: str
+    quantization_level: str
+
+
+class OllamaRunningModel(BaseModel):
+    name: str
+    model: str
+    size: int
+    digest: str
+    details: OllamaRunningModelDetails
+    expires_at: str
+    size_vram: int
+
+
+class OllamaPsResponse(BaseModel):
+    models: List[OllamaRunningModel]
+
+
+async def parse_request_body(
+    request: Request, model_class: Type[BaseModel]
+) -> BaseModel:
+    """
+    Parse request body based on Content-Type header.
+    Supports both application/json and application/octet-stream.
+
+    Args:
+        request: The FastAPI Request object
+        model_class: The Pydantic model class to parse the request into
+
+    Returns:
+        An instance of the provided model_class
+    """
+    content_type = request.headers.get("content-type", "").lower()
+
+    try:
+        if content_type.startswith("application/json"):
+            # FastAPI already handles JSON parsing for us
+            body = await request.json()
+        elif content_type.startswith("application/octet-stream"):
+            # Manually parse octet-stream as JSON
+            body_bytes = await request.body()
+            body = json.loads(body_bytes.decode("utf-8"))
+        else:
+            # Try to parse as JSON for any other content type
+            body_bytes = await request.body()
+            body = json.loads(body_bytes.decode("utf-8"))
+
+        # Create an instance of the model
+        return model_class(**body)
+    except json.JSONDecodeError:
+        raise HTTPException(status_code=400, detail="Invalid JSON in request body")
+    except Exception as e:
+        raise HTTPException(
+            status_code=400, detail=f"Error parsing request body: {str(e)}"
+        )
+
+
 def estimate_tokens(text: str) -> int:
     """Estimate the number of tokens in text using tiktoken"""
     tokens = TiktokenTokenizer().encode(text)
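Outside FastAPI, the dispatch in parse_request_body above collapses to one step: decode the raw bytes as UTF-8 JSON, then validate into the model class. A self-contained sketch of that idea, using a plain dataclass as a hypothetical stand-in for the Pydantic request model:

```python
import json
from dataclasses import dataclass


@dataclass
class GenerateRequest:
    """Hypothetical stand-in for the Pydantic OllamaGenerateRequest model."""
    model: str
    prompt: str


def parse_body(body_bytes: bytes, content_type: str, model_class):
    # Both application/json and application/octet-stream carry UTF-8 JSON;
    # the real endpoint only differs in who decodes it (FastAPI vs. manual).
    body = json.loads(body_bytes.decode("utf-8"))
    return model_class(**body)


req = parse_body(
    b'{"model": "lightrag:latest", "prompt": "hi"}',
    "application/octet-stream",
    GenerateRequest,
)
print(req.prompt)
```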
@@ -197,13 +259,43 @@ class OllamaAPI:
             ]
         )

-        @self.router.post("/generate", dependencies=[Depends(combined_auth)])
-        async def generate(raw_request: Request, request: OllamaGenerateRequest):
+        @self.router.get("/ps", dependencies=[Depends(combined_auth)])
+        async def get_running_models():
+            """List Running Models - returns currently running models"""
+            return OllamaPsResponse(
+                models=[
+                    {
+                        "name": self.ollama_server_infos.LIGHTRAG_MODEL,
+                        "model": self.ollama_server_infos.LIGHTRAG_MODEL,
+                        "size": self.ollama_server_infos.LIGHTRAG_SIZE,
+                        "digest": self.ollama_server_infos.LIGHTRAG_DIGEST,
+                        "details": {
+                            "parent_model": "",
+                            "format": "gguf",
+                            "family": "llama",
+                            "families": ["llama"],
+                            "parameter_size": "7.2B",
+                            "quantization_level": "Q4_0",
+                        },
+                        "expires_at": "2050-12-31T14:38:31.83753-07:00",
+                        "size_vram": self.ollama_server_infos.LIGHTRAG_SIZE,
+                    }
+                ]
+            )
+
+        @self.router.post(
+            "/generate", dependencies=[Depends(combined_auth)], include_in_schema=True
+        )
+        async def generate(raw_request: Request):
             """Handle generate completion requests acting as an Ollama model
             For compatibility purposes, the request is not processed by LightRAG
             and will be handled by the underlying LLM model.
+            Supports both application/json and application/octet-stream Content-Types.
             """
             try:
+                # Parse the request body manually
+                request = await parse_request_body(raw_request, OllamaGenerateRequest)
+
                 query = request.prompt
                 start_time = time.time_ns()
                 prompt_tokens = estimate_tokens(query)
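For reference, the /api/ps handler above always reports a single pseudo-model. A sketch of the resulting payload shape (the concrete values here are placeholders of ours; the server fills them from self.ollama_server_infos):

```python
# Shape of the /api/ps response; name/size/digest values below are placeholders.
ps_payload = {
    "models": [
        {
            "name": "lightrag:latest",
            "model": "lightrag:latest",
            "size": 7365960935,
            "digest": "sha256:0000000000000000",
            "details": {
                "parent_model": "",
                "format": "gguf",
                "family": "llama",
                "families": ["llama"],
                "parameter_size": "7.2B",
                "quantization_level": "Q4_0",
            },
            "expires_at": "2050-12-31T14:38:31.83753-07:00",
            "size_vram": 7365960935,
        }
    ]
}

# Every field of OllamaRunningModel must be present for validation to pass
required = {"name", "model", "size", "digest", "details", "expires_at", "size_vram"}
assert required.issubset(ps_payload["models"][0])
```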
@@ -278,7 +370,7 @@ class OllamaAPI:
                         else:
                             error_msg = f"Provider error: {error_msg}"

-                        logging.error(f"Stream error: {error_msg}")
+                        logger.error(f"Stream error: {error_msg}")

                         # Send error message to client
                         error_data = {
@@ -363,13 +455,19 @@ class OllamaAPI:
                 trace_exception(e)
                 raise HTTPException(status_code=500, detail=str(e))

-        @self.router.post("/chat", dependencies=[Depends(combined_auth)])
-        async def chat(raw_request: Request, request: OllamaChatRequest):
+        @self.router.post(
+            "/chat", dependencies=[Depends(combined_auth)], include_in_schema=True
+        )
+        async def chat(raw_request: Request):
             """Process chat completion requests acting as an Ollama model
             Routes user queries through LightRAG, selecting the query mode based on prefix indicators.
             Detects OpenWebUI session-related requests (for metadata generation tasks) and forwards them directly to the LLM.
+            Supports both application/json and application/octet-stream Content-Types.
             """
             try:
+                # Parse the request body manually
+                request = await parse_request_body(raw_request, OllamaChatRequest)
+
                 # Get all messages
                 messages = request.messages
                 if not messages:
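The chat docstring above mentions selecting the query mode from a prefix on the user query. An illustrative sketch of that routing idea (the mode names and default are assumptions for illustration; the actual prefix handling lives elsewhere in lightrag/api and may differ):

```python
def split_mode_prefix(
    query: str,
    known_modes=("local", "global", "hybrid", "naive", "mix", "bypass"),
):
    """Illustrative sketch, not LightRAG's implementation: a query like
    "/mix what is LightRAG?" selects mode "mix"; otherwise a default applies."""
    if query.startswith("/"):
        head, _, rest = query[1:].partition(" ")
        if head in known_modes:
            return head, rest
    # Assumed default mode when no recognized prefix is present
    return "hybrid", query


print(split_mode_prefix("/mix what is LightRAG?"))
```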
@@ -496,7 +594,7 @@ class OllamaAPI:
                 else:
                     error_msg = f"Provider error: {error_msg}"

-                logging.error(f"Stream error: {error_msg}")
+                logger.error(f"Stream error: {error_msg}")

                 # Send error message to client
                 error_data = {
@@ -530,6 +628,11 @@ class OllamaAPI:
                     data = {
                         "model": self.ollama_server_infos.LIGHTRAG_MODEL,
                         "created_at": self.ollama_server_infos.LIGHTRAG_CREATED_AT,
+                        "message": {
+                            "role": "assistant",
+                            "content": "",
+                            "images": None,
+                        },
                         "done": True,
                         "total_duration": total_time,
                         "load_duration": 0,
BIN  lightrag/api/webui/assets/KaTeX_*  generated  Normal file
(Approximately 60 new generated KaTeX web-font binaries: AMS, Caligraphic, Fraktur, Main, Math, SansSerif, Script, Size1-4, and Typewriter families, each in woff, woff2, and ttf formats. Binary files not shown.)
|
|
@ -1 +1 @@
|
|||
import{e as v,c as b,g as m,k as O,h as P,j as p,l as w,m as c,n as x,t as A,o as N}from"./_baseUniq-OtJ11HbN.js";import{aU as g,aq as _,aV as $,aW as E,aX as F,aY as I,aZ as M,a_ as y,a$ as B,b0 as T}from"./mermaid-vendor-d7rbry5E.js";var S=/\s/;function q(n){for(var r=n.length;r--&&S.test(n.charAt(r)););return r}var G=/^\s+/;function H(n){return n&&n.slice(0,q(n)+1).replace(G,"")}var o=NaN,L=/^[-+]0x[0-9a-f]+$/i,R=/^0b[01]+$/i,W=/^0o[0-7]+$/i,X=parseInt;function Y(n){if(typeof n=="number")return n;if(v(n))return o;if(g(n)){var r=typeof n.valueOf=="function"?n.valueOf():n;n=g(r)?r+"":r}if(typeof n!="string")return n===0?n:+n;n=H(n);var t=R.test(n);return t||W.test(n)?X(n.slice(2),t?2:8):L.test(n)?o:+n}var z=1/0,C=17976931348623157e292;function K(n){if(!n)return n===0?n:0;if(n=Y(n),n===z||n===-1/0){var r=n<0?-1:1;return r*C}return n===n?n:0}function U(n){var r=K(n),t=r%1;return r===r?t?r-t:r:0}function fn(n){var r=n==null?0:n.length;return r?b(n):[]}var l=Object.prototype,Z=l.hasOwnProperty,dn=_(function(n,r){n=Object(n);var t=-1,e=r.length,a=e>2?r[2]:void 0;for(a&&$(r[0],r[1],a)&&(e=1);++t<e;)for(var f=r[t],i=E(f),s=-1,d=i.length;++s<d;){var u=i[s],h=n[u];(h===void 0||F(h,l[u])&&!Z.call(n,u))&&(n[u]=f[u])}return n});function un(n){var r=n==null?0:n.length;return r?n[r-1]:void 0}function D(n){return function(r,t,e){var a=Object(r);if(!I(r)){var f=m(t);r=O(r),t=function(s){return f(a[s],s,a)}}var i=n(r,t,e);return i>-1?a[f?r[i]:i]:void 0}}var J=Math.max;function Q(n,r,t){var e=n==null?0:n.length;if(!e)return-1;var a=t==null?0:U(t);return a<0&&(a=J(e+a,0)),P(n,m(r),a)}var hn=D(Q);function V(n,r){var t=-1,e=I(n)?Array(n.length):[];return p(n,function(a,f,i){e[++t]=r(a,f,i)}),e}function gn(n,r){var t=M(n)?w:V;return t(n,m(r))}var j=Object.prototype,k=j.hasOwnProperty;function nn(n,r){return n!=null&&k.call(n,r)}function mn(n,r){return n!=null&&c(n,r,nn)}function rn(n,r){return n<r}function tn(n,r,t){for(var e=-1,a=n.length;++e<a;){var 
f=n[e],i=r(f);if(i!=null&&(s===void 0?i===i&&!v(i):t(i,s)))var s=i,d=f}return d}function on(n){return n&&n.length?tn(n,y,rn):void 0}function an(n,r,t,e){if(!g(n))return n;r=x(r,n);for(var a=-1,f=r.length,i=f-1,s=n;s!=null&&++a<f;){var d=A(r[a]),u=t;if(d==="__proto__"||d==="constructor"||d==="prototype")return n;if(a!=i){var h=s[d];u=void 0,u===void 0&&(u=g(h)?h:B(r[a+1])?[]:{})}T(s,d,u),s=s[d]}return n}function vn(n,r,t){for(var e=-1,a=r.length,f={};++e<a;){var i=r[e],s=N(n,i);t(s,i)&&an(f,x(i,n),s)}return f}export{rn as a,tn as b,V as c,vn as d,on as e,fn as f,hn as g,mn as h,dn as i,U as j,un as l,gn as m,K as t};
|
||||
import{e as v,c as b,g as m,k as O,h as P,j as p,l as w,m as c,n as x,t as A,o as N}from"./_baseUniq-BhkTkpog.js";import{aU as g,aq as _,aV as $,aW as E,aX as F,aY as I,aZ as M,a_ as y,a$ as B,b0 as T}from"./mermaid-vendor-S2u3NfNd.js";var S=/\s/;function q(n){for(var r=n.length;r--&&S.test(n.charAt(r)););return r}var G=/^\s+/;function H(n){return n&&n.slice(0,q(n)+1).replace(G,"")}var o=NaN,L=/^[-+]0x[0-9a-f]+$/i,R=/^0b[01]+$/i,W=/^0o[0-7]+$/i,X=parseInt;function Y(n){if(typeof n=="number")return n;if(v(n))return o;if(g(n)){var r=typeof n.valueOf=="function"?n.valueOf():n;n=g(r)?r+"":r}if(typeof n!="string")return n===0?n:+n;n=H(n);var t=R.test(n);return t||W.test(n)?X(n.slice(2),t?2:8):L.test(n)?o:+n}var z=1/0,C=17976931348623157e292;function K(n){if(!n)return n===0?n:0;if(n=Y(n),n===z||n===-1/0){var r=n<0?-1:1;return r*C}return n===n?n:0}function U(n){var r=K(n),t=r%1;return r===r?t?r-t:r:0}function fn(n){var r=n==null?0:n.length;return r?b(n):[]}var l=Object.prototype,Z=l.hasOwnProperty,dn=_(function(n,r){n=Object(n);var t=-1,e=r.length,a=e>2?r[2]:void 0;for(a&&$(r[0],r[1],a)&&(e=1);++t<e;)for(var f=r[t],i=E(f),s=-1,d=i.length;++s<d;){var u=i[s],h=n[u];(h===void 0||F(h,l[u])&&!Z.call(n,u))&&(n[u]=f[u])}return n});function un(n){var r=n==null?0:n.length;return r?n[r-1]:void 0}function D(n){return function(r,t,e){var a=Object(r);if(!I(r)){var f=m(t);r=O(r),t=function(s){return f(a[s],s,a)}}var i=n(r,t,e);return i>-1?a[f?r[i]:i]:void 0}}var J=Math.max;function Q(n,r,t){var e=n==null?0:n.length;if(!e)return-1;var a=t==null?0:U(t);return a<0&&(a=J(e+a,0)),P(n,m(r),a)}var hn=D(Q);function V(n,r){var t=-1,e=I(n)?Array(n.length):[];return p(n,function(a,f,i){e[++t]=r(a,f,i)}),e}function gn(n,r){var t=M(n)?w:V;return t(n,m(r))}var j=Object.prototype,k=j.hasOwnProperty;function nn(n,r){return n!=null&&k.call(n,r)}function mn(n,r){return n!=null&&c(n,r,nn)}function rn(n,r){return n<r}function tn(n,r,t){for(var e=-1,a=n.length;++e<a;){var 
f=n[e],i=r(f);if(i!=null&&(s===void 0?i===i&&!v(i):t(i,s)))var s=i,d=f}return d}function on(n){return n&&n.length?tn(n,y,rn):void 0}function an(n,r,t,e){if(!g(n))return n;r=x(r,n);for(var a=-1,f=r.length,i=f-1,s=n;s!=null&&++a<f;){var d=A(r[a]),u=t;if(d==="__proto__"||d==="constructor"||d==="prototype")return n;if(a!=i){var h=s[d];u=void 0,u===void 0&&(u=g(h)?h:B(r[a+1])?[]:{})}T(s,d,u),s=s[d]}return n}function vn(n,r,t){for(var e=-1,a=r.length,f={};++e<a;){var i=r[e],s=N(n,i);t(s,i)&&an(f,x(i,n),s)}return f}export{rn as a,tn as b,V as c,vn as d,on as e,fn as f,hn as g,mn as h,dn as i,U as j,un as l,gn as m,K as t};
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -1 +1 @@
import{_ as l}from"./mermaid-vendor-d7rbry5E.js";function m(e,c){var i,t,o;e.accDescr&&((i=c.setAccDescription)==null||i.call(c,e.accDescr)),e.accTitle&&((t=c.setAccTitle)==null||t.call(c,e.accTitle)),e.title&&((o=c.setDiagramTitle)==null||o.call(c,e.title))}l(m,"populateCommonDb");export{m as p};
import{_ as l}from"./mermaid-vendor-S2u3NfNd.js";function m(e,c){var i,t,o;e.accDescr&&((i=c.setAccDescription)==null||i.call(c,e.accDescr)),e.accTitle&&((t=c.setAccTitle)==null||t.call(c,e.accTitle)),e.title&&((o=c.setDiagramTitle)==null||o.call(c,e.title))}l(m,"populateCommonDb");export{m as p};
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -1 +1 @@
import{_ as n,a1 as x,j as l}from"./mermaid-vendor-d7rbry5E.js";var c=n((a,t)=>{const e=a.append("rect");if(e.attr("x",t.x),e.attr("y",t.y),e.attr("fill",t.fill),e.attr("stroke",t.stroke),e.attr("width",t.width),e.attr("height",t.height),t.name&&e.attr("name",t.name),t.rx&&e.attr("rx",t.rx),t.ry&&e.attr("ry",t.ry),t.attrs!==void 0)for(const r in t.attrs)e.attr(r,t.attrs[r]);return t.class&&e.attr("class",t.class),e},"drawRect"),d=n((a,t)=>{const e={x:t.startx,y:t.starty,width:t.stopx-t.startx,height:t.stopy-t.starty,fill:t.fill,stroke:t.stroke,class:"rect"};c(a,e).lower()},"drawBackgroundRect"),g=n((a,t)=>{const e=t.text.replace(x," "),r=a.append("text");r.attr("x",t.x),r.attr("y",t.y),r.attr("class","legend"),r.style("text-anchor",t.anchor),t.class&&r.attr("class",t.class);const s=r.append("tspan");return s.attr("x",t.x+t.textMargin*2),s.text(e),r},"drawText"),h=n((a,t,e,r)=>{const s=a.append("image");s.attr("x",t),s.attr("y",e);const i=l.sanitizeUrl(r);s.attr("xlink:href",i)},"drawImage"),m=n((a,t,e,r)=>{const s=a.append("use");s.attr("x",t),s.attr("y",e);const i=l.sanitizeUrl(r);s.attr("xlink:href",`#${i}`)},"drawEmbeddedImage"),y=n(()=>({x:0,y:0,width:100,height:100,fill:"#EDF2AE",stroke:"#666",anchor:"start",rx:0,ry:0}),"getNoteRect"),p=n(()=>({x:0,y:0,width:100,height:100,"text-anchor":"start",style:"#666",textMargin:0,rx:0,ry:0,tspan:!0}),"getTextObj");export{d as a,p as b,m as c,c as d,h as e,g as f,y as g};
import{_ as n,a1 as x,j as l}from"./mermaid-vendor-S2u3NfNd.js";var c=n((a,t)=>{const e=a.append("rect");if(e.attr("x",t.x),e.attr("y",t.y),e.attr("fill",t.fill),e.attr("stroke",t.stroke),e.attr("width",t.width),e.attr("height",t.height),t.name&&e.attr("name",t.name),t.rx&&e.attr("rx",t.rx),t.ry&&e.attr("ry",t.ry),t.attrs!==void 0)for(const r in t.attrs)e.attr(r,t.attrs[r]);return t.class&&e.attr("class",t.class),e},"drawRect"),d=n((a,t)=>{const e={x:t.startx,y:t.starty,width:t.stopx-t.startx,height:t.stopy-t.starty,fill:t.fill,stroke:t.stroke,class:"rect"};c(a,e).lower()},"drawBackgroundRect"),g=n((a,t)=>{const e=t.text.replace(x," "),r=a.append("text");r.attr("x",t.x),r.attr("y",t.y),r.attr("class","legend"),r.style("text-anchor",t.anchor),t.class&&r.attr("class",t.class);const s=r.append("tspan");return s.attr("x",t.x+t.textMargin*2),s.text(e),r},"drawText"),h=n((a,t,e,r)=>{const s=a.append("image");s.attr("x",t),s.attr("y",e);const i=l.sanitizeUrl(r);s.attr("xlink:href",i)},"drawImage"),m=n((a,t,e,r)=>{const s=a.append("use");s.attr("x",t),s.attr("y",e);const i=l.sanitizeUrl(r);s.attr("xlink:href",`#${i}`)},"drawEmbeddedImage"),y=n(()=>({x:0,y:0,width:100,height:100,fill:"#EDF2AE",stroke:"#666",anchor:"start",rx:0,ry:0}),"getNoteRect"),p=n(()=>({x:0,y:0,width:100,height:100,"text-anchor":"start",style:"#666",textMargin:0,rx:0,ry:0,tspan:!0}),"getTextObj");export{d as a,p as b,m as c,c as d,h as e,g as f,y as g};
@@ -1 +1 @@
import{_ as n,d as r,e as d,l as g}from"./mermaid-vendor-d7rbry5E.js";var u=n((e,t)=>{let o;return t==="sandbox"&&(o=r("#i"+e)),(t==="sandbox"?r(o.nodes()[0].contentDocument.body):r("body")).select(`[id="${e}"]`)},"getDiagramElement"),b=n((e,t,o,i)=>{e.attr("class",o);const{width:a,height:s,x:h,y:x}=l(e,t);d(e,s,a,i);const c=w(h,x,a,s,t);e.attr("viewBox",c),g.debug(`viewBox configured: ${c} with padding: ${t}`)},"setupViewPortForSVG"),l=n((e,t)=>{var i;const o=((i=e.node())==null?void 0:i.getBBox())||{width:0,height:0,x:0,y:0};return{width:o.width+t*2,height:o.height+t*2,x:o.x,y:o.y}},"calculateDimensionsWithPadding"),w=n((e,t,o,i,a)=>`${e-a} ${t-a} ${o} ${i}`,"createViewBox");export{u as g,b as s};
import{_ as n,d as r,e as d,l as g}from"./mermaid-vendor-S2u3NfNd.js";var u=n((e,t)=>{let o;return t==="sandbox"&&(o=r("#i"+e)),(t==="sandbox"?r(o.nodes()[0].contentDocument.body):r("body")).select(`[id="${e}"]`)},"getDiagramElement"),b=n((e,t,o,i)=>{e.attr("class",o);const{width:a,height:s,x:h,y:x}=l(e,t);d(e,s,a,i);const c=w(h,x,a,s,t);e.attr("viewBox",c),g.debug(`viewBox configured: ${c} with padding: ${t}`)},"setupViewPortForSVG"),l=n((e,t)=>{var i;const o=((i=e.node())==null?void 0:i.getBBox())||{width:0,height:0,x:0,y:0};return{width:o.width+t*2,height:o.height+t*2,x:o.x,y:o.y}},"calculateDimensionsWithPadding"),w=n((e,t,o,i,a)=>`${e-a} ${t-a} ${o} ${i}`,"createViewBox");export{u as g,b as s};
@@ -1 +1 @@
import{_ as s}from"./mermaid-vendor-d7rbry5E.js";var t,e=(t=class{constructor(i){this.init=i,this.records=this.init()}reset(){this.records=this.init()}},s(t,"ImperativeState"),t);export{e as I};
import{_ as s}from"./mermaid-vendor-S2u3NfNd.js";var t,e=(t=class{constructor(i){this.init=i,this.records=this.init()}reset(){this.records=this.init()}},s(t,"ImperativeState"),t);export{e as I};
@@ -1 +1 @@
import{s as a,c as s,a as e,C as t}from"./chunk-A2AXSNBT-CvfQgLmG.js";import{_ as i}from"./mermaid-vendor-d7rbry5E.js";import"./chunk-RZ5BOZE2-C762jHXr.js";import"./feature-graph-DbHHHM9y.js";import"./react-vendor-DEwriMA6.js";import"./graph-vendor-B-X5JegA.js";import"./ui-vendor-CeCm8EER.js";import"./utils-vendor-BysuhMZA.js";var f={parser:e,get db(){return new t},renderer:s,styles:a,init:i(r=>{r.class||(r.class={}),r.class.arrowMarkerAbsolute=r.arrowMarkerAbsolute},"init")};export{f as diagram};
import{s as a,c as s,a as e,C as t}from"./chunk-A2AXSNBT-C3_hWPtY.js";import{_ as i}from"./mermaid-vendor-S2u3NfNd.js";import"./chunk-RZ5BOZE2-DAdu8FE8.js";import"./feature-graph-DGPXw7qg.js";import"./react-vendor-DEwriMA6.js";import"./graph-vendor-B-X5JegA.js";import"./ui-vendor-CeCm8EER.js";import"./utils-vendor-BysuhMZA.js";var f={parser:e,get db(){return new t},renderer:s,styles:a,init:i(r=>{r.class||(r.class={}),r.class.arrowMarkerAbsolute=r.arrowMarkerAbsolute},"init")};export{f as diagram};
@@ -1 +1 @@
import{s as a,c as s,a as e,C as t}from"./chunk-A2AXSNBT-CvfQgLmG.js";import{_ as i}from"./mermaid-vendor-d7rbry5E.js";import"./chunk-RZ5BOZE2-C762jHXr.js";import"./feature-graph-DbHHHM9y.js";import"./react-vendor-DEwriMA6.js";import"./graph-vendor-B-X5JegA.js";import"./ui-vendor-CeCm8EER.js";import"./utils-vendor-BysuhMZA.js";var f={parser:e,get db(){return new t},renderer:s,styles:a,init:i(r=>{r.class||(r.class={}),r.class.arrowMarkerAbsolute=r.arrowMarkerAbsolute},"init")};export{f as diagram};
import{s as a,c as s,a as e,C as t}from"./chunk-A2AXSNBT-C3_hWPtY.js";import{_ as i}from"./mermaid-vendor-S2u3NfNd.js";import"./chunk-RZ5BOZE2-DAdu8FE8.js";import"./feature-graph-DGPXw7qg.js";import"./react-vendor-DEwriMA6.js";import"./graph-vendor-B-X5JegA.js";import"./ui-vendor-CeCm8EER.js";import"./utils-vendor-BysuhMZA.js";var f={parser:e,get db(){return new t},renderer:s,styles:a,init:i(r=>{r.class||(r.class={}),r.class.arrowMarkerAbsolute=r.arrowMarkerAbsolute},"init")};export{f as diagram};
1
lightrag/api/webui/assets/clone-BNTPEzIf.js
generated
Normal file
@@ -0,0 +1 @@
import{b as r}from"./_baseUniq-BhkTkpog.js";var e=4;function a(o){return r(o,e)}export{a as c};
1
lightrag/api/webui/assets/clone-vL6XIcCC.js
generated
@@ -1 +0,0 @@
import{b as r}from"./_baseUniq-OtJ11HbN.js";var e=4;function a(o){return r(o,e)}export{a as c};
File diff suppressed because one or more lines are too long
@@ -1,4 +1,4 @@
import{p as k}from"./chunk-4BMEZGHF-Ct0jZH9M.js";import{_ as l,s as R,g as F,t as I,q as _,a as E,b as D,K as G,z,F as y,G as C,H as P,l as H,Q as V}from"./mermaid-vendor-d7rbry5E.js";import{p as W}from"./radar-MK3ICKWK-zkXzSXFe.js";import"./feature-graph-DbHHHM9y.js";import"./react-vendor-DEwriMA6.js";import"./graph-vendor-B-X5JegA.js";import"./ui-vendor-CeCm8EER.js";import"./utils-vendor-BysuhMZA.js";import"./_baseUniq-OtJ11HbN.js";import"./_basePickBy-Lz6agtdo.js";import"./clone-vL6XIcCC.js";var h={showLegend:!0,ticks:5,max:null,min:0,graticule:"circle"},w={axes:[],curves:[],options:h},g=structuredClone(w),B=P.radar,j=l(()=>y({...B,...C().radar}),"getConfig"),b=l(()=>g.axes,"getAxes"),q=l(()=>g.curves,"getCurves"),K=l(()=>g.options,"getOptions"),N=l(a=>{g.axes=a.map(t=>({name:t.name,label:t.label??t.name}))},"setAxes"),Q=l(a=>{g.curves=a.map(t=>({name:t.name,label:t.label??t.name,entries:U(t.entries)}))},"setCurves"),U=l(a=>{if(a[0].axis==null)return a.map(e=>e.value);const t=b();if(t.length===0)throw new Error("Axes must be populated before curves for reference entries");return t.map(e=>{const r=a.find(s=>{var o;return((o=s.axis)==null?void 0:o.$refText)===e.name});if(r===void 0)throw new Error("Missing entry for axis "+e.label);return r.value})},"computeCurveEntries"),X=l(a=>{var e,r,s,o,i;const t=a.reduce((n,c)=>(n[c.name]=c,n),{});g.options={showLegend:((e=t.showLegend)==null?void 0:e.value)??h.showLegend,ticks:((r=t.ticks)==null?void 0:r.value)??h.ticks,max:((s=t.max)==null?void 0:s.value)??h.max,min:((o=t.min)==null?void 0:o.value)??h.min,graticule:((i=t.graticule)==null?void 
0:i.value)??h.graticule}},"setOptions"),Y=l(()=>{z(),g=structuredClone(w)},"clear"),$={getAxes:b,getCurves:q,getOptions:K,setAxes:N,setCurves:Q,setOptions:X,getConfig:j,clear:Y,setAccTitle:D,getAccTitle:E,setDiagramTitle:_,getDiagramTitle:I,getAccDescription:F,setAccDescription:R},Z=l(a=>{k(a,$);const{axes:t,curves:e,options:r}=a;$.setAxes(t),$.setCurves(e),$.setOptions(r)},"populate"),J={parse:l(async a=>{const t=await W("radar",a);H.debug(t),Z(t)},"parse")},tt=l((a,t,e,r)=>{const s=r.db,o=s.getAxes(),i=s.getCurves(),n=s.getOptions(),c=s.getConfig(),d=s.getDiagramTitle(),u=G(t),p=et(u,c),m=n.max??Math.max(...i.map(f=>Math.max(...f.entries))),x=n.min,v=Math.min(c.width,c.height)/2;at(p,o,v,n.ticks,n.graticule),rt(p,o,v,c),M(p,o,i,x,m,n.graticule,c),T(p,i,n.showLegend,c),p.append("text").attr("class","radarTitle").text(d).attr("x",0).attr("y",-c.height/2-c.marginTop)},"draw"),et=l((a,t)=>{const e=t.width+t.marginLeft+t.marginRight,r=t.height+t.marginTop+t.marginBottom,s={x:t.marginLeft+t.width/2,y:t.marginTop+t.height/2};return a.attr("viewbox",`0 0 ${e} ${r}`).attr("width",e).attr("height",r),a.append("g").attr("transform",`translate(${s.x}, ${s.y})`)},"drawFrame"),at=l((a,t,e,r,s)=>{if(s==="circle")for(let o=0;o<r;o++){const i=e*(o+1)/r;a.append("circle").attr("r",i).attr("class","radarGraticule")}else if(s==="polygon"){const o=t.length;for(let i=0;i<r;i++){const n=e*(i+1)/r,c=t.map((d,u)=>{const p=2*u*Math.PI/o-Math.PI/2,m=n*Math.cos(p),x=n*Math.sin(p);return`${m},${x}`}).join(" ");a.append("polygon").attr("points",c).attr("class","radarGraticule")}}},"drawGraticule"),rt=l((a,t,e,r)=>{const s=t.length;for(let o=0;o<s;o++){const 
i=t[o].label,n=2*o*Math.PI/s-Math.PI/2;a.append("line").attr("x1",0).attr("y1",0).attr("x2",e*r.axisScaleFactor*Math.cos(n)).attr("y2",e*r.axisScaleFactor*Math.sin(n)).attr("class","radarAxisLine"),a.append("text").text(i).attr("x",e*r.axisLabelFactor*Math.cos(n)).attr("y",e*r.axisLabelFactor*Math.sin(n)).attr("class","radarAxisLabel")}},"drawAxes");function M(a,t,e,r,s,o,i){const n=t.length,c=Math.min(i.width,i.height)/2;e.forEach((d,u)=>{if(d.entries.length!==n)return;const p=d.entries.map((m,x)=>{const v=2*Math.PI*x/n-Math.PI/2,f=A(m,r,s,c),O=f*Math.cos(v),S=f*Math.sin(v);return{x:O,y:S}});o==="circle"?a.append("path").attr("d",L(p,i.curveTension)).attr("class",`radarCurve-${u}`):o==="polygon"&&a.append("polygon").attr("points",p.map(m=>`${m.x},${m.y}`).join(" ")).attr("class",`radarCurve-${u}`)})}l(M,"drawCurves");function A(a,t,e,r){const s=Math.min(Math.max(a,t),e);return r*(s-t)/(e-t)}l(A,"relativeRadius");function L(a,t){const e=a.length;let r=`M${a[0].x},${a[0].y}`;for(let s=0;s<e;s++){const o=a[(s-1+e)%e],i=a[s],n=a[(s+1)%e],c=a[(s+2)%e],d={x:i.x+(n.x-o.x)*t,y:i.y+(n.y-o.y)*t},u={x:n.x-(c.x-i.x)*t,y:n.y-(c.y-i.y)*t};r+=` C${d.x},${d.y} ${u.x},${u.y} ${n.x},${n.y}`}return`${r} Z`}l(L,"closedRoundCurve");function T(a,t,e,r){if(!e)return;const s=(r.width/2+r.marginRight)*3/4,o=-(r.height/2+r.marginTop)*3/4,i=20;t.forEach((n,c)=>{const d=a.append("g").attr("transform",`translate(${s}, ${o+c*i})`);d.append("rect").attr("width",12).attr("height",12).attr("class",`radarLegendBox-${c}`),d.append("text").attr("x",16).attr("y",0).attr("class","radarLegendText").text(n.label)})}l(T,"drawLegend");var st={draw:tt},nt=l((a,t)=>{let e="";for(let r=0;r<a.THEME_COLOR_LIMIT;r++){const s=a[`cScale${r}`];e+=`
import{p as k}from"./chunk-4BMEZGHF-DIaTioj_.js";import{_ as l,s as R,g as F,t as I,q as _,a as E,b as D,K as G,z,F as y,G as C,H as P,l as H,Q as V}from"./mermaid-vendor-S2u3NfNd.js";import{p as W}from"./radar-MK3ICKWK-BKWgs4sj.js";import"./feature-graph-DGPXw7qg.js";import"./react-vendor-DEwriMA6.js";import"./graph-vendor-B-X5JegA.js";import"./ui-vendor-CeCm8EER.js";import"./utils-vendor-BysuhMZA.js";import"./_baseUniq-BhkTkpog.js";import"./_basePickBy-C89rMBVC.js";import"./clone-BNTPEzIf.js";var h={showLegend:!0,ticks:5,max:null,min:0,graticule:"circle"},w={axes:[],curves:[],options:h},g=structuredClone(w),B=P.radar,j=l(()=>y({...B,...C().radar}),"getConfig"),b=l(()=>g.axes,"getAxes"),q=l(()=>g.curves,"getCurves"),K=l(()=>g.options,"getOptions"),N=l(a=>{g.axes=a.map(t=>({name:t.name,label:t.label??t.name}))},"setAxes"),Q=l(a=>{g.curves=a.map(t=>({name:t.name,label:t.label??t.name,entries:U(t.entries)}))},"setCurves"),U=l(a=>{if(a[0].axis==null)return a.map(e=>e.value);const t=b();if(t.length===0)throw new Error("Axes must be populated before curves for reference entries");return t.map(e=>{const r=a.find(s=>{var o;return((o=s.axis)==null?void 0:o.$refText)===e.name});if(r===void 0)throw new Error("Missing entry for axis "+e.label);return r.value})},"computeCurveEntries"),X=l(a=>{var e,r,s,o,i;const t=a.reduce((n,c)=>(n[c.name]=c,n),{});g.options={showLegend:((e=t.showLegend)==null?void 0:e.value)??h.showLegend,ticks:((r=t.ticks)==null?void 0:r.value)??h.ticks,max:((s=t.max)==null?void 0:s.value)??h.max,min:((o=t.min)==null?void 0:o.value)??h.min,graticule:((i=t.graticule)==null?void 
0:i.value)??h.graticule}},"setOptions"),Y=l(()=>{z(),g=structuredClone(w)},"clear"),$={getAxes:b,getCurves:q,getOptions:K,setAxes:N,setCurves:Q,setOptions:X,getConfig:j,clear:Y,setAccTitle:D,getAccTitle:E,setDiagramTitle:_,getDiagramTitle:I,getAccDescription:F,setAccDescription:R},Z=l(a=>{k(a,$);const{axes:t,curves:e,options:r}=a;$.setAxes(t),$.setCurves(e),$.setOptions(r)},"populate"),J={parse:l(async a=>{const t=await W("radar",a);H.debug(t),Z(t)},"parse")},tt=l((a,t,e,r)=>{const s=r.db,o=s.getAxes(),i=s.getCurves(),n=s.getOptions(),c=s.getConfig(),d=s.getDiagramTitle(),u=G(t),p=et(u,c),m=n.max??Math.max(...i.map(f=>Math.max(...f.entries))),x=n.min,v=Math.min(c.width,c.height)/2;at(p,o,v,n.ticks,n.graticule),rt(p,o,v,c),M(p,o,i,x,m,n.graticule,c),T(p,i,n.showLegend,c),p.append("text").attr("class","radarTitle").text(d).attr("x",0).attr("y",-c.height/2-c.marginTop)},"draw"),et=l((a,t)=>{const e=t.width+t.marginLeft+t.marginRight,r=t.height+t.marginTop+t.marginBottom,s={x:t.marginLeft+t.width/2,y:t.marginTop+t.height/2};return a.attr("viewbox",`0 0 ${e} ${r}`).attr("width",e).attr("height",r),a.append("g").attr("transform",`translate(${s.x}, ${s.y})`)},"drawFrame"),at=l((a,t,e,r,s)=>{if(s==="circle")for(let o=0;o<r;o++){const i=e*(o+1)/r;a.append("circle").attr("r",i).attr("class","radarGraticule")}else if(s==="polygon"){const o=t.length;for(let i=0;i<r;i++){const n=e*(i+1)/r,c=t.map((d,u)=>{const p=2*u*Math.PI/o-Math.PI/2,m=n*Math.cos(p),x=n*Math.sin(p);return`${m},${x}`}).join(" ");a.append("polygon").attr("points",c).attr("class","radarGraticule")}}},"drawGraticule"),rt=l((a,t,e,r)=>{const s=t.length;for(let o=0;o<s;o++){const 
i=t[o].label,n=2*o*Math.PI/s-Math.PI/2;a.append("line").attr("x1",0).attr("y1",0).attr("x2",e*r.axisScaleFactor*Math.cos(n)).attr("y2",e*r.axisScaleFactor*Math.sin(n)).attr("class","radarAxisLine"),a.append("text").text(i).attr("x",e*r.axisLabelFactor*Math.cos(n)).attr("y",e*r.axisLabelFactor*Math.sin(n)).attr("class","radarAxisLabel")}},"drawAxes");function M(a,t,e,r,s,o,i){const n=t.length,c=Math.min(i.width,i.height)/2;e.forEach((d,u)=>{if(d.entries.length!==n)return;const p=d.entries.map((m,x)=>{const v=2*Math.PI*x/n-Math.PI/2,f=A(m,r,s,c),O=f*Math.cos(v),S=f*Math.sin(v);return{x:O,y:S}});o==="circle"?a.append("path").attr("d",L(p,i.curveTension)).attr("class",`radarCurve-${u}`):o==="polygon"&&a.append("polygon").attr("points",p.map(m=>`${m.x},${m.y}`).join(" ")).attr("class",`radarCurve-${u}`)})}l(M,"drawCurves");function A(a,t,e,r){const s=Math.min(Math.max(a,t),e);return r*(s-t)/(e-t)}l(A,"relativeRadius");function L(a,t){const e=a.length;let r=`M${a[0].x},${a[0].y}`;for(let s=0;s<e;s++){const o=a[(s-1+e)%e],i=a[s],n=a[(s+1)%e],c=a[(s+2)%e],d={x:i.x+(n.x-o.x)*t,y:i.y+(n.y-o.y)*t},u={x:n.x-(c.x-i.x)*t,y:n.y-(c.y-i.y)*t};r+=` C${d.x},${d.y} ${u.x},${u.y} ${n.x},${n.y}`}return`${r} Z`}l(L,"closedRoundCurve");function T(a,t,e,r){if(!e)return;const s=(r.width/2+r.marginRight)*3/4,o=-(r.height/2+r.marginTop)*3/4,i=20;t.forEach((n,c)=>{const d=a.append("g").attr("transform",`translate(${s}, ${o+c*i})`);d.append("rect").attr("width",12).attr("height",12).attr("class",`radarLegendBox-${c}`),d.append("text").attr("x",16).attr("y",0).attr("class","radarLegendText").text(n.label)})}l(T,"drawLegend");var st={draw:tt},nt=l((a,t)=>{let e="";for(let r=0;r<a.THEME_COLOR_LIMIT;r++){const s=a[`cScale${r}`];e+=`
.radarCurve-${r} {
color: ${s};
fill: ${s};
@@ -1,4 +1,4 @@
import{p as w}from"./chunk-4BMEZGHF-Ct0jZH9M.js";import{_ as n,s as B,g as S,t as F,q as z,a as P,b as W,F as x,K as T,e as D,z as _,G as A,H as E,l as v}from"./mermaid-vendor-d7rbry5E.js";import{p as N}from"./radar-MK3ICKWK-zkXzSXFe.js";import"./feature-graph-DbHHHM9y.js";import"./react-vendor-DEwriMA6.js";import"./graph-vendor-B-X5JegA.js";import"./ui-vendor-CeCm8EER.js";import"./utils-vendor-BysuhMZA.js";import"./_baseUniq-OtJ11HbN.js";import"./_basePickBy-Lz6agtdo.js";import"./clone-vL6XIcCC.js";var C={packet:[]},h=structuredClone(C),L=E.packet,Y=n(()=>{const t=x({...L,...A().packet});return t.showBits&&(t.paddingY+=10),t},"getConfig"),G=n(()=>h.packet,"getPacket"),H=n(t=>{t.length>0&&h.packet.push(t)},"pushWord"),I=n(()=>{_(),h=structuredClone(C)},"clear"),m={pushWord:H,getPacket:G,getConfig:Y,clear:I,setAccTitle:W,getAccTitle:P,setDiagramTitle:z,getDiagramTitle:F,getAccDescription:S,setAccDescription:B},K=1e4,M=n(t=>{w(t,m);let e=-1,o=[],s=1;const{bitsPerRow:i}=m.getConfig();for(let{start:a,end:r,label:p}of t.blocks){if(r&&r<a)throw new Error(`Packet block ${a} - ${r} is invalid. End must be greater than start.`);if(a!==e+1)throw new Error(`Packet block ${a} - ${r??a} is not contiguous. 
It should start from ${e+1}.`);for(e=r??a,v.debug(`Packet block ${a} - ${e} with label ${p}`);o.length<=i+1&&m.getPacket().length<K;){const[b,c]=O({start:a,end:r,label:p},s,i);if(o.push(b),b.end+1===s*i&&(m.pushWord(o),o=[],s++),!c)break;({start:a,end:r,label:p}=c)}}m.pushWord(o)},"populate"),O=n((t,e,o)=>{if(t.end===void 0&&(t.end=t.start),t.start>t.end)throw new Error(`Block start ${t.start} is greater than block end ${t.end}.`);return t.end+1<=e*o?[t,void 0]:[{start:t.start,end:e*o-1,label:t.label},{start:e*o,end:t.end,label:t.label}]},"getNextFittingBlock"),q={parse:n(async t=>{const e=await N("packet",t);v.debug(e),M(e)},"parse")},R=n((t,e,o,s)=>{const i=s.db,a=i.getConfig(),{rowHeight:r,paddingY:p,bitWidth:b,bitsPerRow:c}=a,u=i.getPacket(),l=i.getDiagramTitle(),g=r+p,d=g*(u.length+1)-(l?0:r),k=b*c+2,f=T(e);f.attr("viewbox",`0 0 ${k} ${d}`),D(f,d,k,a.useMaxWidth);for(const[$,y]of u.entries())U(f,y,$,a);f.append("text").text(l).attr("x",k/2).attr("y",d-g/2).attr("dominant-baseline","middle").attr("text-anchor","middle").attr("class","packetTitle")},"draw"),U=n((t,e,o,{rowHeight:s,paddingX:i,paddingY:a,bitWidth:r,bitsPerRow:p,showBits:b})=>{const c=t.append("g"),u=o*(s+a)+a;for(const l of e){const g=l.start%p*r+1,d=(l.end-l.start+1)*r-i;if(c.append("rect").attr("x",g).attr("y",u).attr("width",d).attr("height",s).attr("class","packetBlock"),c.append("text").attr("x",g+d/2).attr("y",u+s/2).attr("class","packetLabel").attr("dominant-baseline","middle").attr("text-anchor","middle").text(l.label),!b)continue;const k=l.end===l.start,f=u-2;c.append("text").attr("x",g+(k?d/2:0)).attr("y",f).attr("class","packetByte start").attr("dominant-baseline","auto").attr("text-anchor",k?"middle":"start").text(l.start),k||c.append("text").attr("x",g+d).attr("y",f).attr("class","packetByte 
end").attr("dominant-baseline","auto").attr("text-anchor","end").text(l.end)}},"drawWord"),X={draw:R},j={byteFontSize:"10px",startByteColor:"black",endByteColor:"black",labelColor:"black",labelFontSize:"12px",titleColor:"black",titleFontSize:"14px",blockStrokeColor:"black",blockStrokeWidth:"1",blockFillColor:"#efefef"},J=n(({packet:t}={})=>{const e=x(j,t);return`
import{p as w}from"./chunk-4BMEZGHF-DIaTioj_.js";import{_ as n,s as B,g as S,t as F,q as z,a as P,b as W,F as x,K as T,e as D,z as _,G as A,H as E,l as v}from"./mermaid-vendor-S2u3NfNd.js";import{p as N}from"./radar-MK3ICKWK-BKWgs4sj.js";import"./feature-graph-DGPXw7qg.js";import"./react-vendor-DEwriMA6.js";import"./graph-vendor-B-X5JegA.js";import"./ui-vendor-CeCm8EER.js";import"./utils-vendor-BysuhMZA.js";import"./_baseUniq-BhkTkpog.js";import"./_basePickBy-C89rMBVC.js";import"./clone-BNTPEzIf.js";var C={packet:[]},h=structuredClone(C),L=E.packet,Y=n(()=>{const t=x({...L,...A().packet});return t.showBits&&(t.paddingY+=10),t},"getConfig"),G=n(()=>h.packet,"getPacket"),H=n(t=>{t.length>0&&h.packet.push(t)},"pushWord"),I=n(()=>{_(),h=structuredClone(C)},"clear"),m={pushWord:H,getPacket:G,getConfig:Y,clear:I,setAccTitle:W,getAccTitle:P,setDiagramTitle:z,getDiagramTitle:F,getAccDescription:S,setAccDescription:B},K=1e4,M=n(t=>{w(t,m);let e=-1,o=[],s=1;const{bitsPerRow:i}=m.getConfig();for(let{start:a,end:r,label:p}of t.blocks){if(r&&r<a)throw new Error(`Packet block ${a} - ${r} is invalid. End must be greater than start.`);if(a!==e+1)throw new Error(`Packet block ${a} - ${r??a} is not contiguous. 
It should start from ${e+1}.`);for(e=r??a,v.debug(`Packet block ${a} - ${e} with label ${p}`);o.length<=i+1&&m.getPacket().length<K;){const[b,c]=O({start:a,end:r,label:p},s,i);if(o.push(b),b.end+1===s*i&&(m.pushWord(o),o=[],s++),!c)break;({start:a,end:r,label:p}=c)}}m.pushWord(o)},"populate"),O=n((t,e,o)=>{if(t.end===void 0&&(t.end=t.start),t.start>t.end)throw new Error(`Block start ${t.start} is greater than block end ${t.end}.`);return t.end+1<=e*o?[t,void 0]:[{start:t.start,end:e*o-1,label:t.label},{start:e*o,end:t.end,label:t.label}]},"getNextFittingBlock"),q={parse:n(async t=>{const e=await N("packet",t);v.debug(e),M(e)},"parse")},R=n((t,e,o,s)=>{const i=s.db,a=i.getConfig(),{rowHeight:r,paddingY:p,bitWidth:b,bitsPerRow:c}=a,u=i.getPacket(),l=i.getDiagramTitle(),g=r+p,d=g*(u.length+1)-(l?0:r),k=b*c+2,f=T(e);f.attr("viewbox",`0 0 ${k} ${d}`),D(f,d,k,a.useMaxWidth);for(const[$,y]of u.entries())U(f,y,$,a);f.append("text").text(l).attr("x",k/2).attr("y",d-g/2).attr("dominant-baseline","middle").attr("text-anchor","middle").attr("class","packetTitle")},"draw"),U=n((t,e,o,{rowHeight:s,paddingX:i,paddingY:a,bitWidth:r,bitsPerRow:p,showBits:b})=>{const c=t.append("g"),u=o*(s+a)+a;for(const l of e){const g=l.start%p*r+1,d=(l.end-l.start+1)*r-i;if(c.append("rect").attr("x",g).attr("y",u).attr("width",d).attr("height",s).attr("class","packetBlock"),c.append("text").attr("x",g+d/2).attr("y",u+s/2).attr("class","packetLabel").attr("dominant-baseline","middle").attr("text-anchor","middle").text(l.label),!b)continue;const k=l.end===l.start,f=u-2;c.append("text").attr("x",g+(k?d/2:0)).attr("y",f).attr("class","packetByte start").attr("dominant-baseline","auto").attr("text-anchor",k?"middle":"start").text(l.start),k||c.append("text").attr("x",g+d).attr("y",f).attr("class","packetByte 
end").attr("dominant-baseline","auto").attr("text-anchor","end").text(l.end)}},"drawWord"),X={draw:R},j={byteFontSize:"10px",startByteColor:"black",endByteColor:"black",labelColor:"black",labelFontSize:"12px",titleColor:"black",titleFontSize:"14px",blockStrokeColor:"black",blockStrokeWidth:"1",blockFillColor:"#efefef"},J=n(({packet:t}={})=>{const e=x(j,t);return`
.packetByte {
font-size: ${e.byteFontSize};
}
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -1,4 +1,4 @@
import{_ as m,o as O1,l as Z,c as Ge,d as Ce,p as H1,r as q1,u as i1,b as X1,s as Q1,q as J1,a as Z1,g as $1,t as et,k as tt,v as st,J as it,x as rt,y as s1,z as nt,A as at,B as ut,C as lt}from"./mermaid-vendor-d7rbry5E.js";import{g as ot,s as ct}from"./chunk-RZ5BOZE2-C762jHXr.js";import"./feature-graph-DbHHHM9y.js";import"./react-vendor-DEwriMA6.js";import"./graph-vendor-B-X5JegA.js";import"./ui-vendor-CeCm8EER.js";import"./utils-vendor-BysuhMZA.js";var ht="flowchart-",Pe,dt=(Pe=class{constructor(){this.vertexCounter=0,this.config=Ge(),this.vertices=new Map,this.edges=[],this.classes=new Map,this.subGraphs=[],this.subGraphLookup=new Map,this.tooltips=new Map,this.subCount=0,this.firstGraphFlag=!0,this.secCount=-1,this.posCrossRef=[],this.funs=[],this.setAccTitle=X1,this.setAccDescription=Q1,this.setDiagramTitle=J1,this.getAccTitle=Z1,this.getAccDescription=$1,this.getDiagramTitle=et,this.funs.push(this.setupToolTips.bind(this)),this.addVertex=this.addVertex.bind(this),this.firstGraph=this.firstGraph.bind(this),this.setDirection=this.setDirection.bind(this),this.addSubGraph=this.addSubGraph.bind(this),this.addLink=this.addLink.bind(this),this.setLink=this.setLink.bind(this),this.updateLink=this.updateLink.bind(this),this.addClass=this.addClass.bind(this),this.setClass=this.setClass.bind(this),this.destructLink=this.destructLink.bind(this),this.setClickEvent=this.setClickEvent.bind(this),this.setTooltip=this.setTooltip.bind(this),this.updateLinkInterpolate=this.updateLinkInterpolate.bind(this),this.setClickFun=this.setClickFun.bind(this),this.bindFunctions=this.bindFunctions.bind(this),this.lex={firstGraph:this.firstGraph.bind(this)},this.clear(),this.setGen("gen-2")}sanitizeText(i){return tt.sanitizeText(i,this.config)}lookUpDomId(i){for(const n of this.vertices.values())if(n.id===i)return n.domId;return i}addVertex(i,n,a,u,l,f,c={},A){var U,T;if(!i||i.trim().length===0)return;let r;if(A!==void 0){let d;A.includes(`
import{_ as m,o as O1,l as Z,c as Ge,d as Ce,p as H1,r as q1,u as i1,b as X1,s as Q1,q as J1,a as Z1,g as $1,t as et,k as tt,v as st,J as it,x as rt,y as s1,z as nt,A as at,B as ut,C as lt}from"./mermaid-vendor-S2u3NfNd.js";import{g as ot,s as ct}from"./chunk-RZ5BOZE2-DAdu8FE8.js";import"./feature-graph-DGPXw7qg.js";import"./react-vendor-DEwriMA6.js";import"./graph-vendor-B-X5JegA.js";import"./ui-vendor-CeCm8EER.js";import"./utils-vendor-BysuhMZA.js";var ht="flowchart-",Pe,dt=(Pe=class{constructor(){this.vertexCounter=0,this.config=Ge(),this.vertices=new Map,this.edges=[],this.classes=new Map,this.subGraphs=[],this.subGraphLookup=new Map,this.tooltips=new Map,this.subCount=0,this.firstGraphFlag=!0,this.secCount=-1,this.posCrossRef=[],this.funs=[],this.setAccTitle=X1,this.setAccDescription=Q1,this.setDiagramTitle=J1,this.getAccTitle=Z1,this.getAccDescription=$1,this.getDiagramTitle=et,this.funs.push(this.setupToolTips.bind(this)),this.addVertex=this.addVertex.bind(this),this.firstGraph=this.firstGraph.bind(this),this.setDirection=this.setDirection.bind(this),this.addSubGraph=this.addSubGraph.bind(this),this.addLink=this.addLink.bind(this),this.setLink=this.setLink.bind(this),this.updateLink=this.updateLink.bind(this),this.addClass=this.addClass.bind(this),this.setClass=this.setClass.bind(this),this.destructLink=this.destructLink.bind(this),this.setClickEvent=this.setClickEvent.bind(this),this.setTooltip=this.setTooltip.bind(this),this.updateLinkInterpolate=this.updateLinkInterpolate.bind(this),this.setClickFun=this.setClickFun.bind(this),this.bindFunctions=this.bindFunctions.bind(this),this.lex={firstGraph:this.firstGraph.bind(this)},this.clear(),this.setGen("gen-2")}sanitizeText(i){return tt.sanitizeText(i,this.config)}lookUpDomId(i){for(const n of this.vertices.values())if(n.id===i)return n.domId;return i}addVertex(i,n,a,u,l,f,c={},A){var U,T;if(!i||i.trim().length===0)return;let r;if(A!==void 0){let d;A.includes(`
`)?d=A+`
`:d=`{
`+A+`
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
263
lightrag/api/webui/assets/index-1Hy45NwC.js
generated
Normal file
File diff suppressed because one or more lines are too long
@@ -1,4 +1,4 @@
import{j as o,Y as td,O as fg,k as dg,u as ad,Z as mg,c as hg,l as gg,g as pg,S as yg,T as vg,n as bg,m as nd,o as Sg,p as Tg,$ as ud,a0 as id,a1 as cd,a2 as xg}from"./ui-vendor-CeCm8EER.js";import{d as Ag,h as Dg,r as E,u as sd,H as Ng,i as Eg,j as Jf}from"./react-vendor-DEwriMA6.js";import{w as Ve,c as Qe,a5 as od,u as ql,v as Gt,a6 as rd,a7 as fd,I as us,B as Cn,D as Mg,i as zg,j as Cg,k as Og,l as jg,a8 as Rg,a9 as Ug,aa as _g,ab as Hg,ac as Ll,ad as dd,ae as ss,af as is,P as Lg,Q as qg,V as Bg,W as Gg,ag as Yg,ah as Xg,ai as md,aj as Vg,ak as Qg,al as hd,am as wg,an as gd,C as Zg,z as Kg,G as Jg,d as En,ao as kg,ap as Fg,aq as $g}from"./feature-graph-DbHHHM9y.js";import{S as kf,a as Ff,b as $f,c as Wf,d as ot,R as Wg}from"./feature-retrieval-zozGWnLh.js";import{D as Pg}from"./feature-documents-ClbgnjXg.js";import{i as cs}from"./utils-vendor-BysuhMZA.js";import"./graph-vendor-B-X5JegA.js";import"./mermaid-vendor-d7rbry5E.js";import"./markdown-vendor-BBaHfVvE.js";(function(){const b=document.createElement("link").relList;if(b&&b.supports&&b.supports("modulepreload"))return;for(const N of document.querySelectorAll('link[rel="modulepreload"]'))d(N);new MutationObserver(N=>{for(const j of N)if(j.type==="childList")for(const H of j.addedNodes)H.tagName==="LINK"&&H.rel==="modulepreload"&&d(H)}).observe(document,{childList:!0,subtree:!0});function x(N){const j={};return N.integrity&&(j.integrity=N.integrity),N.referrerPolicy&&(j.referrerPolicy=N.referrerPolicy),N.crossOrigin==="use-credentials"?j.credentials="include":N.crossOrigin==="anonymous"?j.credentials="omit":j.credentials="same-origin",j}function d(N){if(N.ep)return;N.ep=!0;const j=x(N);fetch(N.href,j)}})();var ts={exports:{}},Mn={},as={exports:{}},ns={};/**
|
||||
import{j as o,Y as td,O as fg,k as dg,u as ad,Z as mg,c as hg,l as gg,g as pg,S as yg,T as vg,n as bg,m as nd,o as Sg,p as Tg,$ as ud,a0 as id,a1 as cd,a2 as xg}from"./ui-vendor-CeCm8EER.js";import{d as Ag,h as Dg,r as E,u as sd,H as Ng,i as Eg,j as Jf}from"./react-vendor-DEwriMA6.js";import{w as Ve,c as Qe,a5 as od,u as ql,v as Gt,a6 as rd,a7 as fd,I as us,B as Cn,D as Mg,i as zg,j as Cg,k as Og,l as jg,a8 as Rg,a9 as Ug,aa as _g,ab as Hg,ac as Ll,ad as dd,ae as ss,af as is,P as Lg,Q as qg,V as Bg,W as Gg,ag as Yg,ah as Xg,ai as md,aj as Vg,ak as Qg,al as hd,am as wg,an as gd,C as Zg,z as Kg,G as Jg,d as En,ao as kg,ap as Fg,aq as $g}from"./feature-graph-DGPXw7qg.js";import{S as kf,a as Ff,b as $f,c as Wf,d as ot,R as Wg}from"./feature-retrieval-BjGVPTjV.js";import{D as Pg}from"./feature-documents-BsTs1vSG.js";import{i as cs}from"./utils-vendor-BysuhMZA.js";import"./graph-vendor-B-X5JegA.js";import"./mermaid-vendor-S2u3NfNd.js";import"./markdown-vendor-DmIvJdn7.js";(function(){const b=document.createElement("link").relList;if(b&&b.supports&&b.supports("modulepreload"))return;for(const N of document.querySelectorAll('link[rel="modulepreload"]'))d(N);new MutationObserver(N=>{for(const j of N)if(j.type==="childList")for(const H of j.addedNodes)H.tagName==="LINK"&&H.rel==="modulepreload"&&d(H)}).observe(document,{childList:!0,subtree:!0});function x(N){const j={};return N.integrity&&(j.integrity=N.integrity),N.referrerPolicy&&(j.referrerPolicy=N.referrerPolicy),N.crossOrigin==="use-credentials"?j.credentials="include":N.crossOrigin==="anonymous"?j.credentials="omit":j.credentials="same-origin",j}function d(N){if(N.ep)return;N.ep=!0;const j=x(N);fetch(N.href,j)}})();var ts={exports:{}},Mn={},as={exports:{}},ns={};/**
|
||||
* @license React
|
||||
* scheduler.production.js
|
||||
*
|
||||
File diff suppressed because one or more lines are too long
@@ -1,2 +1,2 @@
-import{_ as e,l as o,K as i,e as n,L as p}from"./mermaid-vendor-d7rbry5E.js";import{p as m}from"./radar-MK3ICKWK-zkXzSXFe.js";import"./feature-graph-DbHHHM9y.js";import"./react-vendor-DEwriMA6.js";import"./graph-vendor-B-X5JegA.js";import"./ui-vendor-CeCm8EER.js";import"./utils-vendor-BysuhMZA.js";import"./_baseUniq-OtJ11HbN.js";import"./_basePickBy-Lz6agtdo.js";import"./clone-vL6XIcCC.js";var g={parse:e(async r=>{const a=await m("info",r);o.debug(a)},"parse")},v={version:p.version},d=e(()=>v.version,"getVersion"),c={getVersion:d},l=e((r,a,s)=>{o.debug(`rendering info diagram
+import{_ as e,l as o,K as i,e as n,L as p}from"./mermaid-vendor-S2u3NfNd.js";import{p as m}from"./radar-MK3ICKWK-BKWgs4sj.js";import"./feature-graph-DGPXw7qg.js";import"./react-vendor-DEwriMA6.js";import"./graph-vendor-B-X5JegA.js";import"./ui-vendor-CeCm8EER.js";import"./utils-vendor-BysuhMZA.js";import"./_baseUniq-BhkTkpog.js";import"./_basePickBy-C89rMBVC.js";import"./clone-BNTPEzIf.js";var g={parse:e(async r=>{const a=await m("info",r);o.debug(a)},"parse")},v={version:p.version},d=e(()=>v.version,"getVersion"),c={getVersion:d},l=e((r,a,s)=>{o.debug(`rendering info diagram
 `+r);const t=i(a);n(t,100,400,!0),t.append("g").append("text").attr("x",100).attr("y",40).attr("class","version").attr("font-size",32).style("text-anchor","middle").text(`v${s}`)},"draw"),f={draw:l},L={parser:g,db:c,renderer:f};export{L as diagram};
Some files were not shown because too many files have changed in this diff