
# LightRAG

A lightweight Knowledge Graph Retrieval-Augmented Generation system with multiple LLM backend support.

## 🚀 Installation

### Prerequisites

- Python 3.10+
- Git
- Docker (optional, for Docker deployment)

### Native Installation

1. Clone the repository (same command on all platforms):

   ```bash
   git clone https://github.com/HKUDS/LightRAG.git
   cd LightRAG
   ```
2. Configure your environment:

   ```bash
   # Linux/MacOS
   cp .env.example .env
   # Edit .env with your preferred configuration
   ```

   ```powershell
   # Windows PowerShell
   Copy-Item .env.example .env
   # Edit .env with your preferred configuration
   ```
3. Create and activate a virtual environment:

   ```bash
   # Linux/MacOS
   python -m venv venv
   source venv/bin/activate
   ```

   ```powershell
   # Windows PowerShell
   python -m venv venv
   .\venv\Scripts\Activate
   ```
4. Install dependencies:

   ```bash
   # Both platforms
   pip install -r requirements.txt
   ```

## 🐳 Docker Deployment

Docker instructions work the same on all platforms with Docker Desktop installed.

1. Build and start the container:

   ```bash
   docker-compose up -d
   ```

## Configuration Options

LightRAG can be configured using environment variables in the `.env` file:

### Server Configuration

- `HOST`: Server host (default: `0.0.0.0`)
- `PORT`: Server port (default: `9621`)
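These are plain environment variables, so the defaults behave like the following Python sketch (illustrative only, not LightRAG's actual startup code; the helper name is made up):

```python
import os

def server_address() -> tuple[str, int]:
    """Resolve the bind address, falling back to the documented defaults."""
    host = os.environ.get("HOST", "0.0.0.0")
    port = int(os.environ.get("PORT", "9621"))
    return host, port
```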

### LLM Configuration

- `LLM_BINDING`: LLM backend to use (`lollms`/`ollama`/`openai`)
- `LLM_BINDING_HOST`: LLM server host URL
- `LLM_MODEL`: Model name to use

### Embedding Configuration

- `EMBEDDING_BINDING`: Embedding backend (`lollms`/`ollama`/`openai`)
- `EMBEDDING_BINDING_HOST`: Embedding server host URL
- `EMBEDDING_MODEL`: Embedding model name

### RAG Configuration

- `MAX_ASYNC`: Maximum number of concurrent async operations
- `MAX_TOKENS`: Maximum number of tokens
- `EMBEDDING_DIM`: Embedding dimensions

### Security

- `LIGHTRAG_API_KEY`: API key for authentication

## Data Storage Paths

The system uses the following paths for data storage:

```
data/
├── rag_storage/    # RAG data persistence
└── inputs/         # Input documents
```
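When running under Docker, these directories are typically bind-mounted so that data survives container restarts. A minimal sketch of the relevant `docker-compose.yml` fragment — the service name `lightrag` and the in-container paths are assumptions, so check your actual compose file:

```yaml
services:
  lightrag:
    # ... image/build configuration ...
    ports:
      - "9621:9621"
    volumes:
      - ./data/rag_storage:/app/data/rag_storage   # RAG data persistence
      - ./data/inputs:/app/data/inputs             # Input documents
```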

## Example Deployments

1. Using with Ollama:

   ```
   LLM_BINDING=ollama
   LLM_BINDING_HOST=http://host.docker.internal:11434
   LLM_MODEL=mistral
   EMBEDDING_BINDING=ollama
   EMBEDDING_BINDING_HOST=http://host.docker.internal:11434
   EMBEDDING_MODEL=bge-m3
   ```

You can't reach services on the host via `localhost` from inside a container; use `host.docker.internal` instead, which is defined in the docker compose file and resolves to the host machine.
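On Linux, `host.docker.internal` is not available by default; it is usually supplied by an entry like the following in `docker-compose.yml` (a sketch — the service name `lightrag` is an assumption):

```yaml
services:
  lightrag:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

The `host-gateway` keyword tells Docker to map the name to the host's gateway IP.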

2. Using with OpenAI:

   ```
   LLM_BINDING=openai
   LLM_MODEL=gpt-3.5-turbo
   EMBEDDING_BINDING=openai
   EMBEDDING_MODEL=text-embedding-ada-002
   OPENAI_API_KEY=your-api-key
   ```

## API Usage

Once deployed, you can interact with the API at `http://localhost:9621`.

Example query using PowerShell:

```powershell
$headers = @{
    "X-API-Key" = "your-api-key"
    "Content-Type" = "application/json"
}
$body = @{
    query = "your question here"
} | ConvertTo-Json

Invoke-RestMethod -Uri "http://localhost:9621/query" -Method Post -Headers $headers -Body $body
```

Example query using curl:

```bash
curl -X POST "http://localhost:9621/query" \
     -H "X-API-Key: your-api-key" \
     -H "Content-Type: application/json" \
     -d '{"query": "your question here"}'
```
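The same request can be issued from Python using only the standard library. A minimal sketch — the `/query` endpoint and headers mirror the examples above; the helper name is illustrative:

```python
import json
import urllib.request

def build_query_request(base_url: str, api_key: str, question: str) -> urllib.request.Request:
    """Build a POST request for the /query endpoint."""
    body = json.dumps({"query": question}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/query",
        data=body,
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request("http://localhost:9621", "your-api-key", "your question here")
# With the server running: response = json.load(urllib.request.urlopen(req))
```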

## 🔒 Security

Remember to:

  1. Set a strong API key in production
  2. Use SSL in production environments
  3. Configure proper network security
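One common way to satisfy points 2 and 3 is to terminate TLS at a reverse proxy in front of the container. A minimal nginx sketch — the hostname and certificate paths are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name rag.example.com;

    ssl_certificate     /etc/ssl/certs/rag.example.com.pem;
    ssl_certificate_key /etc/ssl/private/rag.example.com.key;

    location / {
        # Forward to the LightRAG server on its default port
        proxy_pass http://127.0.0.1:9621;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this in place, the LightRAG port itself can be firewalled off from external access.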

## 📦 Updates

To update the Docker container:

```bash
docker-compose pull
docker-compose up -d --build
```

To update a native installation:

```bash
# Linux/MacOS
git pull
source venv/bin/activate
pip install -r requirements.txt
```

```powershell
# Windows PowerShell
git pull
.\venv\Scripts\Activate
pip install -r requirements.txt
```