Update README

yangdx 2025-07-30 13:31:47 +08:00
parent 50621d5a94
commit 797dcc1ff1
2 changed files with 26 additions and 203 deletions

@@ -593,112 +593,20 @@ The document processing pipeline in LightRAG is somewhat complex and is divided into two main stages:
4. Query the system using the query endpoints
5. Trigger a document scan if new files are placed in the input directory
## Asynchronous Document Indexing with Progress Tracking
LightRAG uses an asynchronous document indexing mechanism so that a frontend can monitor and query document processing progress. When a user uploads files or inserts text through the designated endpoints, the system returns a unique track ID for real-time monitoring of processing progress.
**API Endpoints Supporting Track ID Generation:**
* `/documents/upload`
* `/documents/text`
* `/documents/texts`
**Document Processing Status Query Endpoint:**
* `/track_status/{track_id}`
### Query Endpoints
#### POST /query
Query the RAG system with options for different search modes.
```bash
curl -X POST "http://localhost:9621/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "Your question here", "mode": "hybrid"}'
```
#### POST /query/stream
Stream responses from the RAG system.
```bash
curl -X POST "http://localhost:9621/query/stream" \
  -H "Content-Type: application/json" \
  -d '{"query": "Your question here", "mode": "hybrid"}'
```
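The indexing workflow described earlier — upload or insert, receive a track ID, then query `/track_status/{track_id}` — amounts to a client-side polling loop. A minimal Python sketch follows; the terminal status names (`processed`, `failed`) come from the processing-status list in this README, while the injected `fetch_status` callable stands in for an HTTP GET so the loop stays transport-agnostic:

```python
import time

# Terminal states, per the processing-status values listed in this README:
# pending / processing / processed / failed.
TERMINAL_STATES = {"processed", "failed"}

def wait_for_indexing(fetch_status, track_id, interval=2.0, timeout=600.0):
    """Poll until the tracked document reaches a terminal state.

    `fetch_status(track_id)` stands in for an HTTP GET against
    /track_status/{track_id} and must return the current status string.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status(track_id)
        if status in TERMINAL_STATES:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"document {track_id} still indexing after {timeout}s")
        time.sleep(interval)
```

With an HTTP client such as `requests`, `fetch_status` could be `lambda tid: requests.get(f"{base}/track_status/{tid}").json()["status"]`; the exact response shape is an assumption, not something this README pins down.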
### Document Management Endpoints
#### POST /documents/text
Insert text directly into the RAG system.
```bash
curl -X POST "http://localhost:9621/documents/text" \
  -H "Content-Type: application/json" \
  -d '{"text": "Your text content here", "description": "Optional description"}'
```
#### POST /documents/file
Upload a single file to the RAG system.
```bash
curl -X POST "http://localhost:9621/documents/file" \
  -F "file=@/path/to/your/document.txt" \
  -F "description=Optional description"
```
#### POST /documents/batch
Upload multiple files at once.
```bash
curl -X POST "http://localhost:9621/documents/batch" \
-F "files=@/path/to/doc1.txt" \
-F "files=@/path/to/doc2.txt"
```
#### POST /documents/scan
Trigger a document scan for new files in the input directory.
```bash
curl -X POST "http://localhost:9621/documents/scan" --max-time 1800
```
> Adjust max-time according to the estimated indexing time for all new files.
#### DELETE /documents
Clear all documents from the RAG system.
```bash
curl -X DELETE "http://localhost:9621/documents"
```
### Ollama Emulation Endpoints
#### GET /api/version
Get Ollama version information.
```bash
curl http://localhost:9621/api/version
```
#### GET /api/tags
Get the available Ollama models.
```bash
curl http://localhost:9621/api/tags
```
#### POST /api/chat
Handle chat completion requests. Routes user queries through LightRAG, selecting the query mode based on the query prefix. Detects OpenWebUI session-related requests (used for metadata generation tasks) and forwards them directly to the underlying LLM.
```shell
curl -N -X POST http://localhost:9621/api/chat -H "Content-Type: application/json" -d \
'{"model":"lightrag:latest","messages":[{"role":"user","content":"猪八戒是谁"}],"stream":true}'
```
> For more information about the Ollama API, please visit: [Ollama API documentation](https://github.com/ollama/ollama/blob/main/docs/api.md)
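The prefix-based mode selection used by the chat endpoint can be sketched as below. The concrete prefix set (`/local`, `/global`, `/hybrid`, `/naive`, `/mix`, `/bypass`) is an assumption about common LightRAG defaults, not something this README specifies:

```python
# Hypothetical prefix set; adjust to match your LightRAG deployment.
PREFIX_MODES = ("local", "global", "hybrid", "naive", "mix", "bypass")

def route_query(message: str, default_mode: str = "hybrid") -> tuple[str, str]:
    """Split an incoming chat message into (query_mode, cleaned_query).

    A message starting with a recognized "/mode " prefix is routed to that
    LightRAG query mode; anything else falls back to the default mode.
    """
    for mode in PREFIX_MODES:
        prefix = f"/{mode} "
        if message.startswith(prefix):
            return mode, message[len(prefix):].strip()
    return default_mode, message
```
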
#### POST /api/generate
Handle generate completion requests. For compatibility purposes, the request is not processed by LightRAG but is handled by the underlying LLM model.
### Utility Endpoints
#### GET /health
Check server health and configuration.
```bash
curl "http://localhost:9621/health"
```
The track status endpoint provides comprehensive status information, including:
* Document processing status (pending/processing/processed/failed)
* Content summary and metadata
* Error messages if processing failed
* Timestamps for creation and updates

@@ -534,111 +534,26 @@ You can test the API endpoints using the provided curl commands or through the S
4. Query the system using the query endpoints
5. Trigger document scan if new files are put into the inputs directory
## Asynchronous Document Indexing with Progress Tracking
LightRAG implements asynchronous document indexing to enable frontend monitoring and querying of document processing progress. Upon uploading files or inserting text through designated endpoints, a unique Track ID is returned to facilitate real-time progress monitoring.
**API Endpoints Supporting Track ID Generation:**
* `/documents/upload`
* `/documents/text`
* `/documents/texts`
**Document Processing Status Query Endpoint:**
* `/track_status/{track_id}`
This endpoint provides comprehensive status information including:
* Document processing status (pending/processing/processed/failed)
* Content summary and metadata
* Error messages if processing failed
* Timestamps for creation and updates
### Query Endpoints:
#### POST /query
Query the RAG system with options for different search modes.
```bash
curl -X POST "http://localhost:9621/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "Your question here", "mode": "hybrid"}'
```
#### POST /query/stream
Stream responses from the RAG system.
```bash
curl -X POST "http://localhost:9621/query/stream" \
  -H "Content-Type: application/json" \
  -d '{"query": "Your question here", "mode": "hybrid"}'
```
### Document Management Endpoints:
#### POST /documents/text
Insert text directly into the RAG system.
```bash
curl -X POST "http://localhost:9621/documents/text" \
-H "Content-Type: application/json" \
-d '{"text": "Your text content here", "description": "Optional description"}'
```
#### POST /documents/file
Upload a single file to the RAG system.
```bash
curl -X POST "http://localhost:9621/documents/file" \
-F "file=@/path/to/your/document.txt" \
-F "description=Optional description"
```
#### POST /documents/batch
Upload multiple files at once.
```bash
curl -X POST "http://localhost:9621/documents/batch" \
-F "files=@/path/to/doc1.txt" \
-F "files=@/path/to/doc2.txt"
```
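The batch request above repeats the `files` form field once per file. For clients not using curl, the multipart body can be built by hand with the standard library; a minimal sketch, with the field name chosen to mirror the curl call:

```python
import uuid

def encode_multipart(field: str, files: dict[str, bytes]) -> tuple[bytes, str]:
    """Build a multipart/form-data body repeating `field` for each file,
    mirroring `-F "files=@..." -F "files=@..."` in the curl example.

    Returns (body, content_type); pass both to any HTTP client when
    POSTing to /documents/batch.
    """
    boundary = uuid.uuid4().hex
    chunks = []
    for filename, content in files.items():
        chunks.append(
            (
                f"--{boundary}\r\n"
                f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
                f"Content-Type: application/octet-stream\r\n\r\n"
            ).encode() + content + b"\r\n"
        )
    chunks.append(f"--{boundary}--\r\n".encode())
    return b"".join(chunks), f"multipart/form-data; boundary={boundary}"
```
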
#### POST /documents/scan
Trigger document scan for new files in the input directory.
```bash
curl -X POST "http://localhost:9621/documents/scan" --max-time 1800
```
> Adjust max-time according to the estimated indexing time for all new files.
#### DELETE /documents
Clear all documents from the RAG system.
```bash
curl -X DELETE "http://localhost:9621/documents"
```
### Ollama Emulation Endpoints:
#### GET /api/version
Get Ollama version information.
```bash
curl http://localhost:9621/api/version
```
#### GET /api/tags
Get available Ollama models.
```bash
curl http://localhost:9621/api/tags
```
#### POST /api/chat
Handle chat completion requests. Routes user queries through LightRAG by selecting query mode based on query prefix. Detects and forwards OpenWebUI session-related requests (for metadata generation task) directly to the underlying LLM.
```shell
curl -N -X POST http://localhost:9621/api/chat -H "Content-Type: application/json" -d \
'{"model":"lightrag:latest","messages":[{"role":"user","content":"猪八戒是谁"}],"stream":true}'
```
> For more information about Ollama API, please visit: [Ollama API documentation](https://github.com/ollama/ollama/blob/main/docs/api.md)
#### POST /api/generate
Handle generate completion requests. For compatibility purposes, the request is not processed by LightRAG, and will be handled by the underlying LLM model.
### Utility Endpoints:
#### GET /health
Check server health and configuration.
```bash
curl "http://localhost:9621/health"
```
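A client may want to gate uploads on this health check. A minimal sketch follows; the assumption that the response is a JSON object with a `"status"` field equal to `"healthy"` is mine, not guaranteed by this README:

```python
import json

def is_healthy(health_body: str) -> bool:
    """Return True when the /health response body reports a healthy server.

    Assumes a JSON object with a "status" field (an assumption about the
    response shape); any parse failure is treated as unhealthy.
    """
    try:
        return json.loads(health_body).get("status") == "healthy"
    except (json.JSONDecodeError, AttributeError):
        return False
```
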