Raphaël MANSUY 2025-12-04 19:18:34 +08:00
parent f7fbe802ba
commit 5bdd741eed
4 changed files with 72 additions and 86 deletions


@@ -13,28 +13,20 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v4
with:
fetch-depth: 0 # Fetch all history for tags
# Build frontend WebUI
- name: Setup Bun
<<<<<<< HEAD
uses: oven-sh/setup-bun@v2
=======
uses: oven-sh/setup-bun@v1
>>>>>>> be9e6d16 (Exclude Frontend Build Artifacts from Git Repository)
with:
bun-version: latest
- name: Build Frontend WebUI
run: |
cd lightrag_webui
<<<<<<< HEAD
bun install --frozen-lockfile
=======
bun install
>>>>>>> be9e6d16 (Exclude Frontend Build Artifacts from Git Repository)
bun install --frozen-lockfile --production
bun run build
cd ..
@@ -48,11 +40,7 @@ jobs:
echo "Frontend files:"
ls -lh lightrag/api/webui/ | head -10
<<<<<<< HEAD
- uses: actions/setup-python@v6
=======
- uses: actions/setup-python@v5
>>>>>>> be9e6d16 (Exclude Frontend Build Artifacts from Git Repository)
with:
python-version: "3.x"
@@ -76,7 +64,7 @@ jobs:
python -m build
- name: Upload distributions
uses: actions/upload-artifact@v5
uses: actions/upload-artifact@v4
with:
name: release-dists
path: dist/
@@ -93,7 +81,7 @@ jobs:
steps:
- name: Retrieve release distributions
uses: actions/download-artifact@v6
uses: actions/download-artifact@v4
with:
name: release-dists
path: dist/


@@ -29,14 +29,14 @@ cd lightrag
# Create a Python virtual environment
uv venv --seed --python 3.12
source .venv/bin/activate
source .venv/bin/acivate
# Install in editable mode with API support
pip install -e ".[api]"
# Build front-end artifacts
cd lightrag_webui
bun install --frozen-lockfile
bun install --frozen-lockfile --production
bun run build
cd ..
```
@@ -118,10 +118,36 @@ lightrag-gunicorn --workers 4
### Launching the LightRAG Server with Docker
Using Docker Compose is the most convenient way to deploy and run the LightRAG Server.
- Create a project directory.
- Copy the `docker-compose.yml` file from the LightRAG repository into your project directory.
- Prepare the `.env` file: copy the sample file [`env.example`](env.example) to create a customized `.env` file, and configure the LLM and embedding parameters according to your specific requirements.
* Configure the `.env` file:
Create a personalized `.env` file by copying the sample file [`env.example`](env.example), and set the LLM and embedding parameters according to your actual needs.
* Create a file named `docker-compose.yml`:
```yaml
services:
lightrag:
container_name: lightrag
image: ghcr.io/hkuds/lightrag:latest
build:
context: .
dockerfile: Dockerfile
tags:
- ghcr.io/hkuds/lightrag:latest
ports:
- "${PORT:-9621}:9621"
volumes:
- ./data/rag_storage:/app/data/rag_storage
- ./data/inputs:/app/data/inputs
- ./data/tiktoken:/app/data/tiktoken
- ./config.ini:/app/config.ini
- ./.env:/app/.env
env_file:
- .env
environment:
- TIKTOKEN_CACHE_DIR=/app/data/tiktoken
restart: unless-stopped
extra_hosts:
- "host.docker.internal:host-gateway"
```
* Start the LightRAG Server with the following command:
@@ -129,11 +155,11 @@ lightrag-gunicorn --workers 4
docker compose up
# To keep the program running in the background after startup, add the -d parameter at the end of the command.
```
> You can get the official docker compose file here: [docker-compose.yml](https://raw.githubusercontent.com/HKUDS/LightRAG/refs/heads/main/docker-compose.yml). For historical versions of LightRAG docker images, visit: [LightRAG Docker Images](https://github.com/HKUDS/LightRAG/pkgs/container/lightrag). For more information about docker deployment, please refer to [DockerDeployment.md](./../../docs/DockerDeployment.md).
> You can get the official docker compose file here: [docker-compose.yml](https://raw.githubusercontent.com/HKUDS/LightRAG/refs/heads/main/docker-compose.yml). For historical versions of LightRAG docker images, visit: [LightRAG Docker Images](https://github.com/HKUDS/LightRAG/pkgs/container/lightrag)
### Offline Deployment
Official LightRAG Docker images are fully compatible with offline or air-gapped environments. To build your own offline deployment environment, please refer to the [Offline Deployment Guide](./../../docs/OfflineDeployment.md).
For offline or air-gapped environments, please refer to the [Offline Deployment Guide](./../../docs/OfflineDeployment.md) to learn how to pre-install all dependencies and cache files.
### Starting Multiple LightRAG Instances
@@ -408,10 +434,6 @@ LIGHTRAG_DOC_STATUS_STORAGE=PGDocStatusStorage
You cannot change the storage implementation selection after adding documents to LightRAG. Migrating from one storage implementation to another is not yet supported. For more configuration details, please read the sample `env.example` file.
### Migrating the LLM Cache Between Storage Types
When switching storage implementations in LightRAG, the LLM cache can be migrated from the old storage to the new one. When files are later re-uploaded to the new storage, the pre-existing LLM cache will significantly speed up file processing. For instructions on using the LLM cache migration tool, please refer to [README_MIGRATE_LLM_CACHE.md](../tools/README_MIGRATE_LLM_CACHE.md)
### LightRAG API Server Command Line Options
| Parameter | Default | Description |


@@ -29,14 +29,14 @@ cd lightrag
# Create a Python virtual environment
uv venv --seed --python 3.12
source .venv/bin/activate
source .venv/bin/acivate
# Install in editable mode with API support
pip install -e ".[api]"
# Build front-end artifacts
cd lightrag_webui
bun install --frozen-lockfile
bun install --frozen-lockfile --production
bun run build
cd ..
```
@@ -119,13 +119,37 @@ During startup, configurations in the `.env` file can be overridden by command-line arguments
### Launching LightRAG Server with Docker
Using Docker Compose is the most convenient way to deploy and run the LightRAG Server.
* Prepare the .env file:
Create a personalized .env file by copying the sample file [`env.example`](env.example). Configure the LLM and embedding parameters according to your requirements.
* Create a project directory.
* Create a file named `docker-compose.yml`:
* Copy the `docker-compose.yml` file from the LightRAG repository into your project directory.
* Prepare the `.env` file: Duplicate the sample file [`env.example`](env.example) to create a customized `.env` file, and configure the LLM and embedding parameters according to your specific requirements.
```yaml
services:
lightrag:
container_name: lightrag
image: ghcr.io/hkuds/lightrag:latest
build:
context: .
dockerfile: Dockerfile
tags:
- ghcr.io/hkuds/lightrag:latest
ports:
- "${PORT:-9621}:9621"
volumes:
- ./data/rag_storage:/app/data/rag_storage
- ./data/inputs:/app/data/inputs
- ./data/tiktoken:/app/data/tiktoken
- ./config.ini:/app/config.ini
- ./.env:/app/.env
env_file:
- .env
environment:
- TIKTOKEN_CACHE_DIR=/app/data/tiktoken
restart: unless-stopped
extra_hosts:
- "host.docker.internal:host-gateway"
```
* Start the LightRAG Server with the following command:
@@ -134,11 +158,11 @@ docker compose up
# If you want the program to run in the background after startup, add the -d parameter at the end of the command.
```
You can get the official docker compose file from here: [docker-compose.yml](https://raw.githubusercontent.com/HKUDS/LightRAG/refs/heads/main/docker-compose.yml). For historical versions of LightRAG docker images, visit this link: [LightRAG Docker Images](https://github.com/HKUDS/LightRAG/pkgs/container/lightrag). For more details about docker deployment, please refer to [DockerDeployment.md](./../../docs/DockerDeployment.md).
> You can get the official docker compose file from here: [docker-compose.yml](https://raw.githubusercontent.com/HKUDS/LightRAG/refs/heads/main/docker-compose.yml). For historical versions of LightRAG docker images, visit this link: [LightRAG Docker Images](https://github.com/HKUDS/LightRAG/pkgs/container/lightrag)
### Offline Deployment
Official LightRAG Docker images are fully compatible with offline or air-gapped environments. If you want to build your own offline environment, please refer to the [Offline Deployment Guide](./../../docs/OfflineDeployment.md).
For offline or air-gapped environments, see the [Offline Deployment Guide](./../../docs/OfflineDeployment.md) for instructions on pre-installing all dependencies and cache files.
### Starting Multiple LightRAG Instances
@@ -412,10 +436,6 @@ LIGHTRAG_DOC_STATUS_STORAGE=PGDocStatusStorage
You cannot change storage implementation selection after adding documents to LightRAG. Data migration from one storage implementation to another is not supported yet. For further information, please read the sample env file or config.ini file.
### LLM Cache Migration Between Storage Types
When switching the storage implementation in LightRAG, the LLM cache can be migrated from the existing storage to the new one. Subsequently, when re-uploading files to the new storage, the pre-existing LLM cache will significantly accelerate file processing. For detailed instructions on using the LLM cache migration tool, please refer to [README_MIGRATE_LLM_CACHE.md](../tools/README_MIGRATE_LLM_CACHE.md)
### LightRAG API Server Command Line Options
| Parameter | Default | Description |
@@ -472,50 +492,6 @@ The `/query` and `/query/stream` API endpoints include an `enable_rerank` parameter
RERANK_BY_DEFAULT=False
```
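The per-request override described above can be exercised from the command line. A minimal sketch, assuming a LightRAG server is running on its default port 9621:

```shell
# Write a /query request body that disables reranking for this call only,
# regardless of the RERANK_BY_DEFAULT setting in .env.
cat > rerank_query.json <<'EOF'
{
  "query": "What is LightRAG?",
  "mode": "mix",
  "enable_rerank": false
}
EOF
# Send it to a running LightRAG server (uncomment when the server is up):
# curl -s -X POST http://localhost:9621/query \
#   -H 'Content-Type: application/json' -d @rerank_query.json
```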
### Include Chunk Content in References
By default, the `/query` and `/query/stream` endpoints return references with only `reference_id` and `file_path`. For evaluation, debugging, or citation purposes, you can request the actual retrieved chunk content to be included in references.
The `include_chunk_content` parameter (default: `false`) controls whether the actual text content of retrieved chunks is included in the response references. This is particularly useful for:
- **RAG Evaluation**: Testing systems like RAGAS that need access to retrieved contexts
- **Debugging**: Verifying what content was actually used to generate the answer
- **Citation Display**: Showing users the exact text passages that support the response
- **Transparency**: Providing full visibility into the RAG retrieval process
**Example API Request:**
```json
{
"query": "What is LightRAG?",
"mode": "mix",
"include_references": true,
"include_chunk_content": true
}
```
**Example Response (with chunk content):**
```json
{
"response": "LightRAG is a graph-based RAG system...",
"references": [
{
"reference_id": "1",
"file_path": "/documents/intro.md",
"content": "LightRAG is a retrieval-augmented generation system that combines knowledge graphs with vector similarity search..."
},
{
"reference_id": "2",
"file_path": "/documents/features.md",
"content": "The system provides multiple query modes including local, global, hybrid, and mix modes..."
}
]
}
```
**Note**: This parameter only works when `include_references=true`. Setting `include_chunk_content=true` without including references has no effect.
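The request/response pair above can be reproduced from the command line. A minimal sketch, assuming the LightRAG server is running on its default port 9621:

```shell
# Build the /query request body shown in the example above.
cat > query_with_content.json <<'EOF'
{
  "query": "What is LightRAG?",
  "mode": "mix",
  "include_references": true,
  "include_chunk_content": true
}
EOF
# POST it to a running LightRAG server and pretty-print the response,
# chunk content included (uncomment when the server is up):
# curl -s -X POST http://localhost:9621/query \
#   -H 'Content-Type: application/json' -d @query_with_content.json \
#   | python -m json.tool
```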
### .env Examples
```bash


@@ -21,7 +21,7 @@ LightRAG WebUI is a React-based web interface for interacting with the LightRAG
Run the following command to build the project:
```bash
bun run build --emptyOutDir
bun run build
```
This command will bundle the project and output the built files to the `lightrag/api/webui` directory.
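To sanity-check the build output, you can list the emitted assets much like the release workflow does. A sketch, assuming you run it from the repository root:

```shell
# The bundle is written into the Python package so it ships with pip installs.
WEBUI_DIR="lightrag/api/webui"
if [ -d "$WEBUI_DIR" ]; then
  ls -lh "$WEBUI_DIR" | head -10
else
  echo "webui assets not found - run 'bun run build' in lightrag_webui first"
fi
```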