Merge pull request #2194 from danielaskdd/offline

Feat: Add Comprehensive Offline Deployment Solution
Daniel.y 2025-10-11 10:36:01 +08:00 committed by GitHub
commit 49326f2b14
11 changed files with 629 additions and 156 deletions

View file

@ -84,6 +84,8 @@
## Installation
> **📦 Offline Deployment**: For offline or air-gapped environments, see the [Offline Deployment Guide](./docs/OfflineDeployment.md) for instructions on pre-installing all dependencies and cache files.
### Install LightRAG Server
The LightRAG Server is designed to provide Web UI and API support. The Web UI facilitates document indexing, knowledge graph exploration, and a simple RAG query interface. The LightRAG Server also provides an Ollama-compatible interface, aiming to emulate LightRAG as an Ollama chat model, so that AI chat bots such as Open WebUI can access LightRAG easily.

docs/OfflineDeployment.md Normal file
View file

@ -0,0 +1,311 @@
# LightRAG Offline Deployment Guide
This guide provides comprehensive instructions for deploying LightRAG in offline environments where internet access is limited or unavailable.
## Table of Contents
- [Overview](#overview)
- [Quick Start](#quick-start)
- [Layered Dependencies](#layered-dependencies)
- [Tiktoken Cache Management](#tiktoken-cache-management)
- [Complete Offline Deployment Workflow](#complete-offline-deployment-workflow)
- [Troubleshooting](#troubleshooting)
## Overview
LightRAG uses dynamic package installation (`pipmaster`) for optional features based on file types and configurations. In offline environments, these dynamic installations will fail. This guide shows you how to pre-install all necessary dependencies and cache files.
### What Gets Dynamically Installed?
LightRAG dynamically installs packages for:
- **Document Processing**: `docling`, `pypdf2`, `python-docx`, `python-pptx`, `openpyxl`
- **Storage Backends**: `redis`, `neo4j`, `pymilvus`, `pymongo`, `asyncpg`, `qdrant-client`
- **LLM Providers**: `openai`, `anthropic`, `ollama`, `zhipuai`, `aioboto3`, `voyageai`, `llama-index`, `lmdeploy`, `transformers`, `torch`
- **Tiktoken Models**: BPE encoding models downloaded from OpenAI CDN
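For context, these on-demand installs follow a check-then-install pattern roughly like the sketch below (assuming pipmaster's `is_installed`/`install` helpers; the exact call sites inside LightRAG may differ). With no route to PyPI the install step fails, which is why everything listed above has to be pre-installed.

```python
# A minimal sketch of the lazy-install pattern (assumed pipmaster API:
# is_installed() / install()); not LightRAG's exact code.
import pipmaster as pm

if not pm.is_installed("openai"):
    pm.install("openai")  # reaches out to PyPI -- this is what fails offline

import openai  # only succeeds if the package was pre-installed (or just installed)
```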
## Quick Start
### Option 1: Using pip with Offline Extras
```bash
# Online environment: Install all offline dependencies
pip install lightrag-hku[offline]
# Download tiktoken cache
lightrag-download-cache
# Create offline package
pip download lightrag-hku[offline] -d ./offline-packages
tar -czf lightrag-offline.tar.gz ./offline-packages ~/.tiktoken_cache
# Transfer to offline server
scp lightrag-offline.tar.gz user@offline-server:/path/to/
# Offline environment: Install
tar -xzf lightrag-offline.tar.gz
pip install --no-index --find-links=./offline-packages lightrag-hku[offline]
export TIKTOKEN_CACHE_DIR=~/.tiktoken_cache
```
### Option 2: Using Requirements Files
```bash
# Online environment: Download packages
pip download -r requirements-offline.txt -d ./packages
# Transfer to offline server
tar -czf packages.tar.gz ./packages
scp packages.tar.gz user@offline-server:/path/to/
# Offline environment: Install
tar -xzf packages.tar.gz
pip install --no-index --find-links=./packages -r requirements-offline.txt
```
## Layered Dependencies
LightRAG provides flexible dependency groups for different use cases:
### Available Dependency Groups
| Group | Description | Use Case |
|-------|-------------|----------|
| `offline-docs` | Document processing | PDF, DOCX, PPTX, XLSX files |
| `offline-storage` | Storage backends | Redis, Neo4j, MongoDB, PostgreSQL, etc. |
| `offline-llm` | LLM providers | OpenAI, Anthropic, Ollama, etc. |
| `offline` | All of the above | Complete offline deployment |
### Installation Examples
```bash
# Install only document processing dependencies
pip install lightrag-hku[offline-docs]
# Install document processing and storage backends
pip install lightrag-hku[offline-docs,offline-storage]
# Install all offline dependencies
pip install lightrag-hku[offline]
```
### Using Individual Requirements Files
```bash
# Document processing only
pip install -r requirements-offline-docs.txt
# Storage backends only
pip install -r requirements-offline-storage.txt
# LLM providers only
pip install -r requirements-offline-llm.txt
# All offline dependencies
pip install -r requirements-offline.txt
```
## Tiktoken Cache Management
Tiktoken downloads BPE encoding models on first use. In offline environments, you must pre-download these models.
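Tiktoken consults the `TIKTOKEN_CACHE_DIR` environment variable when it loads an encoding, so the variable must be set before the first encoding is created. A minimal Python check is sketched below (run it once online to populate the cache, then again offline to confirm the cache is being used; the directory path is just the convention used in this guide):

```python
import os
from pathlib import Path

# Point tiktoken at a local cache directory *before* loading any encoding.
cache_dir = Path.home() / ".tiktoken_cache"
cache_dir.mkdir(parents=True, exist_ok=True)
os.environ["TIKTOKEN_CACHE_DIR"] = str(cache_dir)

import tiktoken

# Online: downloads the BPE file into the cache. Offline: must hit the cache.
enc = tiktoken.encoding_for_model("gpt-4o-mini")
assert enc.decode(enc.encode("hello")) == "hello"

# The cached BPE files are stored under hashed names; a non-empty directory
# is what the offline environment needs.
print([p.name for p in cache_dir.iterdir()])
```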
### Using the CLI Command
After installing LightRAG, use the built-in command:
```bash
# Download to default location (~/.tiktoken_cache)
lightrag-download-cache
# Download to specific directory
lightrag-download-cache --cache-dir ./tiktoken_cache
# Download specific models only
lightrag-download-cache --models gpt-4o-mini gpt-4
```
### Default Models Downloaded
- `gpt-4o-mini` (LightRAG default)
- `gpt-4o`
- `gpt-4`
- `gpt-3.5-turbo`
- `text-embedding-ada-002`
- `text-embedding-3-small`
- `text-embedding-3-large`
### Setting Cache Location in Offline Environment
```bash
# Option 1: Environment variable (temporary)
export TIKTOKEN_CACHE_DIR=/path/to/tiktoken_cache
# Option 2: Add to ~/.bashrc or ~/.zshrc (persistent)
echo 'export TIKTOKEN_CACHE_DIR=~/.tiktoken_cache' >> ~/.bashrc
source ~/.bashrc
# Option 3: Copy to default location
mkdir -p ~/.tiktoken_cache && cp -r /path/to/tiktoken_cache/* ~/.tiktoken_cache/
```
## Complete Offline Deployment Workflow
### Step 1: Prepare in Online Environment
```bash
# 1. Install LightRAG with offline dependencies
pip install lightrag-hku[offline]
# 2. Download tiktoken cache
lightrag-download-cache --cache-dir ./offline_cache/tiktoken
# 3. Download all Python packages
pip download lightrag-hku[offline] -d ./offline_cache/packages
# 4. Create archive for transfer
tar -czf lightrag-offline-complete.tar.gz ./offline_cache
# 5. Verify contents
tar -tzf lightrag-offline-complete.tar.gz | head -20
```
### Step 2: Transfer to Offline Environment
```bash
# Using scp
scp lightrag-offline-complete.tar.gz user@offline-server:/tmp/
# Or using USB/physical media
# Copy lightrag-offline-complete.tar.gz to USB drive
```
### Step 3: Install in Offline Environment
```bash
# 1. Extract archive
cd /tmp
tar -xzf lightrag-offline-complete.tar.gz
# 2. Install Python packages
pip install --no-index \
--find-links=/tmp/offline_cache/packages \
lightrag-hku[offline]
# 3. Set up tiktoken cache
mkdir -p ~/.tiktoken_cache
cp -r /tmp/offline_cache/tiktoken/* ~/.tiktoken_cache/
export TIKTOKEN_CACHE_DIR=~/.tiktoken_cache
# 4. Add to shell profile for persistence
echo 'export TIKTOKEN_CACHE_DIR=~/.tiktoken_cache' >> ~/.bashrc
```
### Step 4: Verify Installation
```bash
# Test Python import
python -c "from lightrag import LightRAG; print('✓ LightRAG imported')"
# Test tiktoken
python -c "from lightrag.utils import TiktokenTokenizer; t = TiktokenTokenizer(); print('✓ Tiktoken working')"
# Test optional dependencies (if installed)
python -c "import docling; print('✓ Docling available')"
python -c "import redis; print('✓ Redis available')"
```
## Troubleshooting
### Issue: Tiktoken fails with network error
**Problem**: `Unable to load tokenizer for model gpt-4o-mini`
**Solution**:
```bash
# Ensure TIKTOKEN_CACHE_DIR is set
echo $TIKTOKEN_CACHE_DIR
# Verify cache files exist
ls -la ~/.tiktoken_cache/
# If empty, you need to download cache in online environment first
```
### Issue: Dynamic package installation fails
**Problem**: `Error installing package xxx`
**Solution**:
```bash
# Pre-install the specific package you need
# For document processing:
pip install lightrag-hku[offline-docs]
# For storage backends:
pip install lightrag-hku[offline-storage]
# For LLM providers:
pip install lightrag-hku[offline-llm]
```
### Issue: Missing dependencies at runtime
**Problem**: `ModuleNotFoundError: No module named 'xxx'`
**Solution**:
```bash
# Check what you have installed
pip list | grep -i xxx
# Install missing component
pip install lightrag-hku[offline] # Install all offline deps
```
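To see at a glance which optional modules are actually importable, a small audit script like this sketch can help (the module list is illustrative; adjust it to the extras you installed, and note that import names can differ from pip package names, e.g. `python-docx` imports as `docx`):

```python
# Quick audit sketch: report which optional modules are importable.
import importlib.util

optional_modules = [
    "docling", "docx", "pptx", "openpyxl",            # document processing
    "redis", "neo4j", "pymongo", "asyncpg",           # storage backends
    "openai", "anthropic", "ollama", "transformers",  # LLM providers
]

for name in optional_modules:
    status = "ok" if importlib.util.find_spec(name) else "MISSING"
    print(f"{name:15s} {status}")
```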
### Issue: Permission denied on tiktoken cache
**Problem**: `PermissionError: [Errno 13] Permission denied`
**Solution**:
```bash
# Ensure cache directory has correct permissions
chmod 755 ~/.tiktoken_cache
chmod 644 ~/.tiktoken_cache/*
# Or use a user-writable directory
export TIKTOKEN_CACHE_DIR=~/my_tiktoken_cache
mkdir -p ~/my_tiktoken_cache
```
## Best Practices
1. **Test in Online Environment First**: Always test your complete setup in an online environment before going offline.
2. **Keep Cache Updated**: Periodically update your offline cache when new models are released.
3. **Document Your Setup**: Keep notes on which optional dependencies you actually need.
4. **Version Pinning**: Consider pinning specific versions in production:
```bash
pip freeze > requirements-production.txt
```
5. **Minimal Installation**: Only install what you need:
```bash
# If you only process PDFs with OpenAI
pip install lightrag-hku[offline-docs]
# Then manually add: pip install openai
```
## Additional Resources
- [LightRAG GitHub Repository](https://github.com/HKUDS/LightRAG)
- [Docker Deployment Guide](./DockerDeployment.md)
- [API Documentation](../lightrag/api/README.md)
## Support
If you encounter issues not covered in this guide:
1. Check the [GitHub Issues](https://github.com/HKUDS/LightRAG/issues)
2. Review the [project documentation](../README.md)
3. Create a new issue with your offline deployment details

View file

@ -118,15 +118,23 @@ services:
  lightrag:
    container_name: lightrag
    image: ghcr.io/hkuds/lightrag:latest
    build:
      context: .
      dockerfile: Dockerfile
      tags:
        - ghcr.io/hkuds/lightrag:latest
    ports:
      - "${PORT:-9621}:9621"
    volumes:
      - ./data/rag_storage:/app/data/rag_storage
      - ./data/inputs:/app/data/inputs
      - ./data/tiktoken:/app/data/tiktoken
      - ./config.ini:/app/config.ini
      - ./.env:/app/.env
    env_file:
      - .env
    environment:
      - TIKTOKEN_CACHE_DIR=/app/data/tiktoken
    restart: unless-stopped
    extra_hosts:
      - "host.docker.internal:host-gateway"
@ -140,6 +148,10 @@ docker compose up
```
> You can get the official docker compose file from here: [docker-compose.yml](https://raw.githubusercontent.com/HKUDS/LightRAG/refs/heads/main/docker-compose.yml). For historical versions of LightRAG docker images, visit: [LightRAG Docker Images](https://github.com/HKUDS/LightRAG/pkgs/container/lightrag)
### Offline Deployment
For offline or air-gapped environments, see the [Offline Deployment Guide](./../../docs/OfflineDeployment.md) for instructions on pre-installing all dependencies and cache files.
### Starting Multiple LightRAG Instances
There are two ways to start multiple LightRAG instances. The first is to configure a completely independent working environment for each instance: create a separate working directory for each instance and place a dedicated `.env` configuration file in it. The server listening ports in the different instances' configuration files must not clash. Then start the service by running `lightrag-server` in each working directory.

View file

@ -120,15 +120,23 @@ services:
  lightrag:
    container_name: lightrag
    image: ghcr.io/hkuds/lightrag:latest
    build:
      context: .
      dockerfile: Dockerfile
      tags:
        - ghcr.io/hkuds/lightrag:latest
    ports:
      - "${PORT:-9621}:9621"
    volumes:
      - ./data/rag_storage:/app/data/rag_storage
      - ./data/inputs:/app/data/inputs
      - ./data/tiktoken:/app/data/tiktoken
      - ./config.ini:/app/config.ini
      - ./.env:/app/.env
    env_file:
      - .env
    environment:
      - TIKTOKEN_CACHE_DIR=/app/data/tiktoken
    restart: unless-stopped
    extra_hosts:
      - "host.docker.internal:host-gateway"
@ -143,6 +151,10 @@ docker compose up
> You can get the official docker compose file from here: [docker-compose.yml](https://raw.githubusercontent.com/HKUDS/LightRAG/refs/heads/main/docker-compose.yml). For historical versions of LightRAG docker images, visit this link: [LightRAG Docker Images](https://github.com/HKUDS/LightRAG/pkgs/container/lightrag)
### Offline Deployment
For offline or air-gapped environments, see the [Offline Deployment Guide](./../../docs/OfflineDeployment.md) for instructions on pre-installing all dependencies and cache files.
### Starting Multiple LightRAG Instances
There are two ways to start multiple LightRAG instances. The first way is to configure a completely independent working environment for each instance. This requires creating a separate working directory for each instance and placing a dedicated `.env` configuration file in that directory. The server listening ports in the configuration files of different instances cannot be the same. Then, you can start the service by running `lightrag-server` in the working directory.

View file

@ -1,156 +0,0 @@
1. **LlamaIndex** (`llm/llama_index.py`):
   - Provides integration with OpenAI and other providers through LlamaIndex
   - Supports both direct API access and proxy services like LiteLLM
   - Handles embeddings and completions with consistent interfaces
   - See example implementations:
     - [Direct OpenAI Usage](../../examples/lightrag_llamaindex_direct_demo.py)
     - [LiteLLM Proxy Usage](../../examples/lightrag_llamaindex_litellm_demo.py)
<details>
<summary> <b>Using LlamaIndex</b> </summary>
LightRAG supports LlamaIndex for embeddings and completions in two ways: direct OpenAI usage or through LiteLLM proxy.
### Setup
First, install the required dependencies:
```bash
pip install llama-index-llms-litellm llama-index-embeddings-litellm
```
### Standard OpenAI Usage
```python
from lightrag import LightRAG
from lightrag.llm.llama_index_impl import llama_index_complete_if_cache, llama_index_embed
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from lightrag.utils import EmbeddingFunc

# Initialize with direct OpenAI access
async def llm_model_func(prompt, system_prompt=None, history_messages=[], **kwargs):
    try:
        # Initialize OpenAI if not in kwargs
        if 'llm_instance' not in kwargs:
            llm_instance = OpenAI(
                model="gpt-4",
                api_key="your-openai-key",
            )
            kwargs['llm_instance'] = llm_instance

        response = await llama_index_complete_if_cache(
            kwargs['llm_instance'],
            prompt,
            system_prompt=system_prompt,
            history_messages=history_messages,
            **kwargs,
        )
        return response
    except Exception as e:
        logger.error(f"LLM request failed: {str(e)}")
        raise

# Initialize LightRAG with OpenAI
rag = LightRAG(
    working_dir="your/path",
    llm_model_func=llm_model_func,
    embedding_func=EmbeddingFunc(
        embedding_dim=1536,
        func=lambda texts: llama_index_embed(
            texts,
            embed_model=OpenAIEmbedding(
                model="text-embedding-3-large",
                api_key="your-openai-key"
            )
        ),
    ),
)
```
### Using LiteLLM Proxy
1. Use any LLM provider through LiteLLM
2. Leverage LlamaIndex's embedding and completion capabilities
3. Maintain consistent configuration across services
```python
from lightrag import LightRAG
from lightrag.llm.llama_index_impl import llama_index_complete_if_cache, llama_index_embed
from llama_index.llms.litellm import LiteLLM
from llama_index.embeddings.litellm import LiteLLMEmbedding
from lightrag.utils import EmbeddingFunc

# Initialize with LiteLLM proxy
async def llm_model_func(prompt, system_prompt=None, history_messages=[], **kwargs):
    try:
        # Initialize LiteLLM if not in kwargs
        if 'llm_instance' not in kwargs:
            llm_instance = LiteLLM(
                model=f"openai/{settings.LLM_MODEL}",  # Format: "provider/model_name"
                api_base=settings.LITELLM_URL,
                api_key=settings.LITELLM_KEY,
            )
            kwargs['llm_instance'] = llm_instance

        response = await llama_index_complete_if_cache(
            kwargs['llm_instance'],
            prompt,
            system_prompt=system_prompt,
            history_messages=history_messages,
            **kwargs,
        )
        return response
    except Exception as e:
        logger.error(f"LLM request failed: {str(e)}")
        raise

# Initialize LightRAG with LiteLLM
rag = LightRAG(
    working_dir="your/path",
    llm_model_func=llm_model_func,
    embedding_func=EmbeddingFunc(
        embedding_dim=1536,
        func=lambda texts: llama_index_embed(
            texts,
            embed_model=LiteLLMEmbedding(
                model_name=f"openai/{settings.EMBEDDING_MODEL}",
                api_base=settings.LITELLM_URL,
                api_key=settings.LITELLM_KEY,
            )
        ),
    ),
)
```
### Environment Variables
For OpenAI direct usage:
```bash
OPENAI_API_KEY=your-openai-key
```
For LiteLLM proxy:
```bash
# LiteLLM Configuration
LITELLM_URL=http://litellm:4000
LITELLM_KEY=your-litellm-key
# Model Configuration
LLM_MODEL=gpt-4
EMBEDDING_MODEL=text-embedding-3-large
```
### Key Differences
1. **Direct OpenAI**:
   - Simpler setup
   - Direct API access
   - Requires OpenAI API key

2. **LiteLLM Proxy**:
   - Model provider agnostic
   - Centralized API key management
   - Support for multiple providers
   - Better cost control and monitoring
</details>

View file

@ -0,0 +1,179 @@
"""
Download all necessary cache files for offline deployment.
This module provides a CLI command to download tiktoken model cache files
for offline environments where internet access is not available.
"""
import os
import sys
from pathlib import Path
def download_tiktoken_cache(cache_dir: str = None, models: list = None):
"""Download tiktoken models to local cache
Args:
cache_dir: Directory to store the cache files. If None, uses default location.
models: List of model names to download. If None, downloads common models.
Returns:
Tuple of (success_count, failed_models)
"""
try:
import tiktoken
except ImportError:
print("Error: tiktoken is not installed.")
print("Install with: pip install tiktoken")
sys.exit(1)
# Set cache directory if provided
if cache_dir:
cache_dir = os.path.abspath(cache_dir)
os.environ["TIKTOKEN_CACHE_DIR"] = cache_dir
cache_path = Path(cache_dir)
cache_path.mkdir(parents=True, exist_ok=True)
print(f"Using cache directory: {cache_dir}")
else:
cache_dir = os.environ.get(
"TIKTOKEN_CACHE_DIR", str(Path.home() / ".tiktoken_cache")
)
print(f"Using default cache directory: {cache_dir}")
# Common models used by LightRAG and OpenAI
if models is None:
models = [
"gpt-4o-mini", # Default model for LightRAG
"gpt-4o", # GPT-4 Omni
"gpt-4", # GPT-4
"gpt-3.5-turbo", # GPT-3.5 Turbo
"text-embedding-ada-002", # Legacy embedding model
"text-embedding-3-small", # Small embedding model
"text-embedding-3-large", # Large embedding model
]
print(f"\nDownloading {len(models)} tiktoken models...")
print("=" * 70)
success_count = 0
failed_models = []
for i, model in enumerate(models, 1):
try:
print(f"[{i}/{len(models)}] Downloading {model}...", end=" ", flush=True)
encoding = tiktoken.encoding_for_model(model)
# Trigger download by encoding a test string
encoding.encode("test")
print("✓ Done")
success_count += 1
except KeyError as e:
print(f"✗ Failed: Unknown model '{model}'")
failed_models.append((model, str(e)))
except Exception as e:
print(f"✗ Failed: {e}")
failed_models.append((model, str(e)))
print("=" * 70)
print(f"\n✓ Successfully cached {success_count}/{len(models)} models")
if failed_models:
print(f"\n✗ Failed to download {len(failed_models)} models:")
for model, error in failed_models:
print(f" - {model}: {error}")
print(f"\nCache location: {cache_dir}")
print("\nFor offline deployment:")
print(" 1. Copy directory to offline server:")
print(f" tar -czf tiktoken_cache.tar.gz {cache_dir}")
print(" scp tiktoken_cache.tar.gz user@offline-server:/path/to/")
print("")
print(" 2. On offline server, extract and set environment variable:")
print(" tar -xzf tiktoken_cache.tar.gz")
print(" export TIKTOKEN_CACHE_DIR=/path/to/tiktoken_cache")
print("")
print(" 3. Or copy to default location:")
print(f" cp -r {cache_dir} ~/.tiktoken_cache/")
return success_count, failed_models
def main():
"""Main entry point for the CLI command"""
import argparse
parser = argparse.ArgumentParser(
prog="lightrag-download-cache",
description="Download cache files for LightRAG offline deployment",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Download to default location (~/.tiktoken_cache)
lightrag-download-cache
# Download to specific directory
lightrag-download-cache --cache-dir ./offline_cache/tiktoken
# Download specific models only
lightrag-download-cache --models gpt-4o-mini gpt-4
For more information, visit: https://github.com/HKUDS/LightRAG
""",
)
parser.add_argument(
"--cache-dir",
help="Cache directory path (default: ~/.tiktoken_cache)",
default=None,
)
parser.add_argument(
"--models",
nargs="+",
help="Specific models to download (default: common models)",
default=None,
)
parser.add_argument(
"--version", action="version", version="%(prog)s (LightRAG cache downloader)"
)
args = parser.parse_args()
print("=" * 70)
print("LightRAG Offline Cache Downloader")
print("=" * 70)
try:
success_count, failed_models = download_tiktoken_cache(
args.cache_dir, args.models
)
print("\n" + "=" * 70)
print("Download Complete")
print("=" * 70)
# Exit with error code if all downloads failed
if success_count == 0:
print("\n✗ All downloads failed. Please check your internet connection.")
sys.exit(1)
# Exit with warning code if some downloads failed
elif failed_models:
print(
f"\n⚠ Some downloads failed ({len(failed_models)}/{success_count + len(failed_models)})"
)
sys.exit(2)
else:
print("\n✓ All cache files downloaded successfully!")
sys.exit(0)
except KeyboardInterrupt:
print("\n\n✗ Download interrupted by user")
sys.exit(130)
except Exception as e:
print(f"\n\n✗ Error: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
main()
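Besides the CLI entry point, the same helper can be called from Python, for example inside a provisioning script. A minimal sketch based on the function above:

```python
from lightrag.tools.download_cache import download_tiktoken_cache

# Equivalent to: lightrag-download-cache --cache-dir ./offline_cache/tiktoken --models ...
success_count, failed = download_tiktoken_cache(
    cache_dir="./offline_cache/tiktoken",
    models=["gpt-4o-mini", "text-embedding-3-large"],
)
if failed:
    raise SystemExit(f"tiktoken cache incomplete: {failed}")
```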

View file

@ -79,9 +79,48 @@ api = [
"uvicorn",
]
# Offline deployment dependencies (layered design for flexibility)
offline-docs = [
# Document processing dependencies
"docling>=1.0.0",
"pypdf2>=3.0.0",
"python-docx>=0.8.11",
"python-pptx>=0.6.21",
"openpyxl>=3.0.0",
]
offline-storage = [
# Storage backend dependencies
"redis>=5.0.0",
"neo4j>=5.0.0",
"pymilvus==2.5.2",
"pymongo>=4.0.0",
"asyncpg>=0.29.0",
"qdrant-client>=1.7.0",
]
offline-llm = [
# LLM provider dependencies
"openai>=1.0.0",
"anthropic>=0.18.0",
"ollama>=0.1.0",
"zhipuai>=2.0.0",
"aioboto3>=12.0.0",
"voyageai>=0.2.0",
"llama-index>=0.9.0",
"transformers>=4.30.0",
"torch>=2.0.0",
]
offline = [
# Complete offline package (includes all offline dependencies)
"lightrag-hku[offline-docs,offline-storage,offline-llm]",
]
[project.scripts]
lightrag-server = "lightrag.api.lightrag_server:main"
lightrag-gunicorn = "lightrag.api.run_with_gunicorn:main"
lightrag-download-cache = "lightrag.tools.download_cache:main"
[project.urls]
Homepage = "https://github.com/HKUDS/LightRAG"

View file

@ -0,0 +1,12 @@
# LightRAG Offline Dependencies - Document Processing
# Install with: pip install -r requirements-offline-docs.txt
# For offline installation:
# pip download -r requirements-offline-docs.txt -d ./packages
# pip install --no-index --find-links=./packages -r requirements-offline-docs.txt
# Document processing dependencies
docling>=1.0.0
openpyxl>=3.0.0
pypdf2>=3.0.0
python-docx>=0.8.11
python-pptx>=0.6.21

View file

@ -0,0 +1,16 @@
# LightRAG Offline Dependencies - LLM Providers
# Install with: pip install -r requirements-offline-llm.txt
# For offline installation:
# pip download -r requirements-offline-llm.txt -d ./packages
# pip install --no-index --find-links=./packages -r requirements-offline-llm.txt
# LLM provider dependencies
aioboto3>=12.0.0
anthropic>=0.18.0
llama-index>=0.9.0
ollama>=0.1.0
openai>=1.0.0
torch>=2.0.0
transformers>=4.30.0
voyageai>=0.2.0
zhipuai>=2.0.0

View file

@ -0,0 +1,13 @@
# LightRAG Offline Dependencies - Storage Backends
# Install with: pip install -r requirements-offline-storage.txt
# For offline installation:
# pip download -r requirements-offline-storage.txt -d ./packages
# pip install --no-index --find-links=./packages -r requirements-offline-storage.txt
# Storage backend dependencies
asyncpg>=0.29.0
neo4j>=5.0.0
pymilvus==2.5.2
pymongo>=4.0.0
qdrant-client>=1.7.0
redis>=5.0.0

requirements-offline.txt Normal file
View file

@ -0,0 +1,33 @@
# LightRAG Complete Offline Dependencies
# Install with: pip install -r requirements-offline.txt
# For offline installation:
# pip download -r requirements-offline.txt -d ./packages
# pip install --no-index --find-links=./packages -r requirements-offline.txt
#
# Or use pip install lightrag-hku[offline] for the same effect
# Document processing dependencies
docling>=1.0.0
openpyxl>=3.0.0
pypdf2>=3.0.0
python-docx>=0.8.11
python-pptx>=0.6.21

# Storage backend dependencies
asyncpg>=0.29.0
neo4j>=5.0.0
pymilvus==2.5.2
pymongo>=4.0.0
qdrant-client>=1.7.0
redis>=5.0.0

# LLM provider dependencies
aioboto3>=12.0.0
anthropic>=0.18.0
llama-index>=0.9.0
ollama>=0.1.0
openai>=1.0.0
torch>=2.0.0
transformers>=4.30.0
voyageai>=0.2.0
zhipuai>=2.0.0