* feat: Implement multi-tenant architecture with tenant and knowledge base models
  - Added data models for tenants, knowledge bases, and related configurations.
  - Introduced role and permission management for users in the multi-tenant system.
  - Created a service layer for managing tenants and knowledge bases, including CRUD operations.
  - Developed a tenant-aware instance manager for LightRAG with caching and isolation features.
  - Added a migration script to transition existing workspace-based deployments to the new multi-tenant architecture.

* chore: ignore lightrag/api/webui/assets/ directory

* chore: stop tracking lightrag/api/webui/assets (ignore in .gitignore)

* feat: Initialize LightRAG Multi-Tenant Stack with PostgreSQL
  - Added README.md for project overview, setup instructions, and architecture details.
  - Created docker-compose.yml to define services: PostgreSQL, Redis, LightRAG API, and Web UI.
  - Introduced env.example for environment variable configuration.
  - Implemented init-postgres.sql for PostgreSQL schema initialization with multi-tenant support.
  - Added reproduce_issue.py for testing default tenant access via API.

* feat: Enhance TenantSelector and update related components for improved multi-tenant support

* feat: Enhance testing capabilities and update documentation
  - Updated Makefile to include new test commands for various modes (compatibility, isolation, multi-tenant, security, coverage, and dry-run).
  - Modified API health check endpoint in Makefile to reflect new port configuration.
  - Updated QUICK_START.md and README.md to reflect changes in service URLs and ports.
  - Added environment variables for testing modes in env.example.
  - Introduced run_all_tests.sh script to automate testing across different modes.
  - Created conftest.py for pytest configuration, including database fixtures and mock services.
  - Implemented database helper functions for streamlined database operations in tests.
  - Added test collection hooks to skip tests based on the current MULTITENANT_MODE.

* feat: Implement multi-tenant support with demo mode enabled by default
  - Added multi-tenant configuration to the environment and Docker setup.
  - Created pre-configured demo tenants (acme-corp and techstart) for testing.
  - Updated API endpoints to support tenant-specific data access.
  - Enhanced Makefile commands for better service management and database operations.
  - Introduced user-tenant membership system with role-based access control.
  - Added comprehensive documentation for multi-tenant setup and usage.
  - Fixed issues with document visibility in multi-tenant environments.
  - Implemented necessary database migrations for user memberships and legacy support.

* feat(audit): Add final audit report for multi-tenant implementation
  - Documented overall assessment, architecture overview, test results, security findings, and recommendations.
  - Included detailed findings on critical security issues and architectural concerns.

  fix(security): Implement security fixes based on audit findings
  - Removed global RAG fallback and enforced strict tenant context.
  - Configured super-admin access and required user authentication for tenant access.
  - Cleared localStorage on logout and improved error handling in WebUI.

  chore(logs): Create task logs for audit and security fixes implementation
  - Documented actions, decisions, and next steps for both audit and security fixes.
  - Summarized test results and remaining recommendations.

  chore(scripts): Enhance development stack management scripts
  - Added scripts for cleaning, starting, and stopping the development stack.
  - Improved output messages and ensured graceful shutdown of services.

  feat(starter): Initialize PostgreSQL with AGE extension support
  - Created initialization scripts for PostgreSQL extensions including uuid-ossp, vector, and AGE.
  - Ensured successful installation and verification of extensions.

* feat: Implement auto-select for first tenant and KB on initial load in WebUI
  - Removed WEBUI_INITIAL_STATE_FIX.md as the issue is resolved.
  - Added useTenantInitialization hook to automatically select the first available tenant and KB on app load.
  - Integrated the new hook into the Root component of the WebUI.
  - Updated RetrievalTesting component to ensure a KB is selected before allowing user interaction.
  - Created end-to-end tests for multi-tenant isolation and real service interactions.
  - Added scripts for starting, stopping, and cleaning the development stack.
  - Enhanced API and tenant routes to support tenant-specific pipeline status initialization.
  - Updated constants for backend URL to reflect the correct port.
  - Improved error handling and logging in various components.

* feat: Add multi-tenant support with enhanced E2E testing scripts and client functionality

* update client

* Add integration and unit tests for multi-tenant API, models, security, and storage
  - Implement integration tests for tenant and knowledge base management endpoints in `test_tenant_api_routes.py`.
  - Create unit tests for tenant isolation, model validation, and role permissions in `test_tenant_models.py`.
  - Add security tests to enforce role-based permissions and context validation in `test_tenant_security.py`.
  - Develop tests for tenant-aware storage operations and context isolation in `test_tenant_storage_phase3.py`.

* feat(e2e): Implement OpenAI model support and database reset functionality

* Add comprehensive test suite for gpt-5-nano compatibility
  - Introduced tests for parameter normalization, embeddings, and entity extraction.
  - Implemented direct API testing for gpt-5-nano.
  - Validated .env configuration loading and OpenAI API connectivity.
  - Analyzed reasoning token overhead with various token limits.
  - Documented test procedures and expected outcomes in README files.
  - Ensured all tests pass for production readiness.

* kg(postgres_impl): ensure AGE extension is loaded in session and configure graph initialization

* dev: add hybrid dev helper scripts, Makefile, docker-compose.dev-db and local development docs

* feat(dev): add dev helper scripts and local development documentation for hybrid setup

* feat(multi-tenant): add detailed specifications and logs for multi-tenant improvements, including UX, backend handling, and ingestion pipeline

* feat(migration): add generated tenant/kb columns, indexes, triggers; drop unused tables; update schema and docs

* test(backward-compat): adapt tests to new StorageNameSpace/TenantService APIs (use concrete dummy storages)

* chore: multi-tenant and UX updates - docs, webui, storage, tenant service adjustments

* tests: stabilize integration tests + skip external services; fix multi-tenant API behavior and idempotency
  - gpt5_nano_compatibility: add pytest-asyncio markers, skip when OPENAI key missing, prevent module-level asyncio.run collection, add conftest
  - Ollama tests: add server availability check and skip markers; avoid pytest collection warnings by renaming helper classes
  - Graph storage tests: rename interactive test functions to avoid pytest collection
  - Document & Tenant routes: support external_ids for idempotency; ensure HTTPExceptions are re-raised
  - LightRAG core: support external_ids in apipeline_enqueue_documents and idempotent logic
  - Tests updated to match API changes (tenant routes & document routes)
  - Add logs and scripts for inspection and audit
### This is a sample .env file for the LightRAG server

###############################################################################
### ⚡️ QUICK START: OpenAI Configuration (Recommended)
###############################################################################
### To get started with OpenAI, you only need to:
### 1. Set your OpenAI API key (get one from https://platform.openai.com/api-keys)
### export OPENAI_API_KEY="sk-your-actual-api-key"
### 2. Then start the server with the default OpenAI configuration:
### lightrag-server
###
### The default configuration will use:
### - LLM: gpt-4o-mini (entity/relation extraction, graph merging, query answering)
### - Embedding: text-embedding-3-small (vector embeddings)
### No additional configuration needed!
###
### See the LLM and Embedding Configuration sections below to customize models
###############################################################################
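### Getting started (example): the commands below assume the pip package name
### lightrag-hku with the API extra and that this sample file sits next to where
### you launch the server; adjust paths and extras to your installation.
###   pip install "lightrag-hku[api]"
###   cp env.example .env        # then edit the values below
###   lightrag-server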
###########################
### Server Configuration
###########################
HOST=0.0.0.0
PORT=9621
WEBUI_TITLE='My Graph KB'
WEBUI_DESCRIPTION="Simple and Fast Graph Based RAG System"
# WORKERS=2
### Gunicorn worker timeout (also used as the default LLM request timeout if LLM_TIMEOUT is not set)
# TIMEOUT=150
# CORS_ORIGINS=http://localhost:3000,http://localhost:8080
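### Quick check (example): once the server is running it should answer on the
### host/port configured above; /health is the path whitelisted further below.
### Adjust the URL if your deployment uses a different port or path.
###   curl http://localhost:9621/health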
### Optional SSL Configuration
# SSL=true
# SSL_CERTFILE=/path/to/cert.pem
# SSL_KEYFILE=/path/to/key.pem

### Directory Configuration (defaults to the current working directory)
### Default values are ./inputs and ./rag_storage
# INPUT_DIR=<absolute_path_for_doc_input_dir>
# WORKING_DIR=<absolute_path_for_working_dir>

### Tiktoken cache directory (store cached files in this folder for offline deployment)
# TIKTOKEN_CACHE_DIR=./temp/tiktoken

### Ollama Emulating Model and Tag (used only when EMBEDDING_BINDING=ollama)
# OLLAMA_EMULATING_MODEL_NAME=lightrag
OLLAMA_EMULATING_MODEL_TAG=latest

### Maximum number of nodes returned from graph retrieval in the WebUI
# MAX_GRAPH_NODES=1000

### Logging level
# LOG_LEVEL=INFO
# VERBOSE=False
# LOG_MAX_BYTES=10485760
# LOG_BACKUP_COUNT=5
### Logfile location (defaults to the current working directory)
# LOG_DIR=/path/to/log/directory
#####################################
### Login and API-Key Configuration
#####################################
# AUTH_ACCOUNTS='admin:admin123,user1:pass456'
# TOKEN_SECRET=Your-Key-For-LightRAG-API-Server
# TOKEN_EXPIRE_HOURS=48
# GUEST_TOKEN_EXPIRE_HOURS=24
# JWT_ALGORITHM=HS256

### API-Key to access LightRAG Server API
# LIGHTRAG_API_KEY=your-secure-api-key-here
# WHITELIST_PATHS=/health,/api/*
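### Example (hedged): generate a strong random TOKEN_SECRET and pass the API key
### on requests. The X-API-Key header name is an assumption here; check your
### server's authentication docs if requests are rejected.
###   openssl rand -hex 32        # use the output as TOKEN_SECRET
###   curl -H "X-API-Key: your-secure-api-key-here" http://localhost:9621/<protected-endpoint>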
###############################################################################
### OpenAI API Key (Required for OpenAI LLM and Embedding)
### Get your key from: https://platform.openai.com/api-keys
### This is the PRIMARY way to configure OpenAI (environment variable takes precedence)
###############################################################################
# OPENAI_API_KEY=sk-your-actual-openai-api-key-here
######################################################################################
### Query Configuration
###
### How to control the context length sent to the LLM:
### MAX_ENTITY_TOKENS + MAX_RELATION_TOKENS < MAX_TOTAL_TOKENS
### Chunk_Tokens = MAX_TOTAL_TOKENS - Actual_Entity_Tokens - Actual_Relation_Tokens
######################################################################################
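### Worked example (illustrative numbers only): with MAX_TOTAL_TOKENS=30000,
### MAX_ENTITY_TOKENS=6000 and MAX_RELATION_TOKENS=8000, the caps satisfy
### 6000 + 8000 = 14000 < 30000. If a query actually consumes 5000 entity tokens
### and 7000 relation tokens, the chunk budget is 30000 - 5000 - 7000 = 18000 tokens.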
# LLM response cache for queries (not valid for streaming responses)
ENABLE_LLM_CACHE=true
# COSINE_THRESHOLD=0.2
### Number of entities or relations retrieved from the KG
# TOP_K=40
### Maximum number of chunks for naive vector search
# CHUNK_TOP_K=20
### Controls the actual entities sent to the LLM
# MAX_ENTITY_TOKENS=6000
### Controls the actual relations sent to the LLM
# MAX_RELATION_TOKENS=8000
### Controls the maximum tokens sent to the LLM (including entities, relations and chunks)
# MAX_TOTAL_TOKENS=30000

### Maximum number of related chunks per source entity or relation
### The chunk picker uses this value to determine the total number of chunks selected from the KG (knowledge graph)
### Higher values increase re-ranking time
# RELATED_CHUNK_NUMBER=5

### Chunk selection strategies
### VECTOR: pick KG chunks by vector similarity; the chunks delivered to the LLM align more closely with naive retrieval
### WEIGHT: pick KG chunks by entity and chunk weight; delivers chunks more strongly tied to the KG to the LLM
### If reranking is enabled, the impact of the chunk selection strategy is diminished.
# KG_CHUNK_PICK_METHOD=VECTOR
#########################################################
### Reranking configuration
### RERANK_BINDING type: null, cohere, jina, aliyun
### For a rerank model deployed with vLLM, use the cohere binding
#########################################################
RERANK_BINDING=null
### Enable rerank by default in query params when RERANK_BINDING is not null
# RERANK_BY_DEFAULT=True
### Rerank score chunk filter (set to 0.0 to keep all chunks, 0.6 or above if the LLM is not strong enough)
# MIN_RERANK_SCORE=0.0

### For local deployment with vLLM
# RERANK_MODEL=BAAI/bge-reranker-v2-m3
# RERANK_BINDING_HOST=http://localhost:8000/v1/rerank
# RERANK_BINDING_API_KEY=your_rerank_api_key_here

### Default values for Cohere AI
# RERANK_MODEL=rerank-v3.5
# RERANK_BINDING_HOST=https://api.cohere.com/v2/rerank
# RERANK_BINDING_API_KEY=your_rerank_api_key_here

### Default values for Jina AI
# RERANK_MODEL=jina-reranker-v2-base-multilingual
# RERANK_BINDING_HOST=https://api.jina.ai/v1/rerank
# RERANK_BINDING_API_KEY=your_rerank_api_key_here

### Default values for Aliyun
# RERANK_MODEL=gte-rerank-v2
# RERANK_BINDING_HOST=https://dashscope.aliyuncs.com/api/v1/services/rerank/text-rerank/text-rerank
# RERANK_BINDING_API_KEY=your_rerank_api_key_here
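### Smoke test (hedged example): if you serve a reranker behind the vLLM-style
### endpoint above, a Cohere-style request such as the one below should return
### relevance scores. The exact request/response schema depends on your
### provider, so treat this as a sketch rather than a guaranteed contract.
###   curl http://localhost:8000/v1/rerank -H "Content-Type: application/json" -d '{"model": "BAAI/bge-reranker-v2-m3", "query": "what is LightRAG?", "documents": ["graph based retrieval augmented generation", "unrelated text"]}'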
########################################
### Document processing configuration
########################################
ENABLE_LLM_CACHE_FOR_EXTRACT=true

### Document processing output language: English, Chinese, French, German ...
SUMMARY_LANGUAGE=English

### Entity types that the LLM will attempt to recognize
# ENTITY_TYPES='["Person", "Creature", "Organization", "Location", "Event", "Concept", "Method", "Content", "Data", "Artifact", "NaturalObject"]'

### Chunk size for document splitting, 500~1500 is recommended
# CHUNK_SIZE=1200
# CHUNK_OVERLAP_SIZE=100
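### Example: with CHUNK_SIZE=1200 and CHUNK_OVERLAP_SIZE=100, consecutive chunks
### share roughly 100 tokens of context, so each additional chunk contributes
### about 1100 new tokens of document text (illustrative arithmetic, not a hard rule).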
### Number of summary segments or tokens that triggers an LLM summary on entity/relation merge (at least 3 is recommended)
# FORCE_LLM_SUMMARY_ON_MERGE=8
### Max description token size that triggers an LLM summary
# SUMMARY_MAX_TOKENS=1200
### Recommended LLM summary output length in tokens
# SUMMARY_LENGTH_RECOMMENDED=600
### Maximum context size sent to the LLM for description summary
# SUMMARY_CONTEXT_SIZE=12000
###############################
### Concurrency Configuration
###############################
### Max concurrent LLM requests (for both query and document processing)
MAX_ASYNC=4
### Number of documents processed in parallel (between 2 and 10; MAX_ASYNC/3 is recommended)
MAX_PARALLEL_INSERT=2
### Max concurrent embedding requests
# EMBEDDING_FUNC_MAX_ASYNC=8
### Number of chunks sent to the embedding service in a single request
# EMBEDDING_BATCH_NUM=10
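### Worked example: with MAX_ASYNC=4, the MAX_ASYNC/3 recommendation gives about
### 1.3, so the lower bound of 2 applies (MAX_PARALLEL_INSERT=2 above). If you
### raised MAX_ASYNC to 12, MAX_PARALLEL_INSERT=4 would follow the same rule.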
###########################################################
### LLM Configuration
### LLM_BINDING type: openai, ollama, lollms, azure_openai, aws_bedrock
###########################################################
### LLM request timeout setting for all LLMs (0 means no timeout for Ollama)
# LLM_TIMEOUT=180

# PRIMARY CONFIGURATION: OpenAI (Recommended for production)
LLM_BINDING=openai
LLM_MODEL=gpt-4o-mini
LLM_BINDING_HOST=https://api.openai.com/v1
LLM_BINDING_API_KEY=your_api_key
# Note: By default, the OPENAI_API_KEY environment variable is used

### ALTERNATIVE: Using gpt-4o for higher quality (higher cost)
# LLM_BINDING=openai
# LLM_MODEL=gpt-4o
# LLM_BINDING_HOST=https://api.openai.com/v1

### Optional for Azure
# AZURE_OPENAI_API_VERSION=2024-08-01-preview
# AZURE_OPENAI_DEPLOYMENT=gpt-4o

### OpenRouter example
# LLM_MODEL=google/gemini-2.5-flash
# LLM_BINDING_HOST=https://openrouter.ai/api/v1
# LLM_BINDING_API_KEY=your_api_key
# LLM_BINDING=openai
### OpenAI-Compatible API Specific Parameters
### Increased temperature values may mitigate infinite inference loops in certain LLMs, such as Qwen3-30B.
# OPENAI_LLM_TEMPERATURE=0.9
### Set max_tokens to mitigate endless output from some LLMs (keep it below LLM_TIMEOUT * llm_output_tokens/second, e.g. 9000 = 180s * 50 tokens/s)
### Typically, max_tokens does not include prompt content, though some models, such as the Gemini models, are exceptions
### For vLLM/SGLang-deployed models, or most OpenAI-compatible API providers
# OPENAI_LLM_MAX_TOKENS=9000
### For OpenAI o1-mini or newer models
OPENAI_LLM_MAX_COMPLETION_TOKENS=9000

#### OpenAI's newer API uses max_completion_tokens instead of max_tokens
# OPENAI_LLM_MAX_COMPLETION_TOKENS=9000
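### Sizing example: the rule above is output_tokens < LLM_TIMEOUT * tokens/second.
### For a slower model generating ~30 tokens/s with LLM_TIMEOUT=180, that gives
### 180 * 30 = 5400, so a max_tokens value around 5000 would be a safer choice
### (illustrative numbers; measure your model's real throughput).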
### Use the following command to see all supported options for OpenAI, azure_openai or OpenRouter:
### lightrag-server --llm-binding openai --help
### OpenAI Specific Parameters
# OPENAI_LLM_REASONING_EFFORT=minimal
### OpenRouter Specific Parameters
# OPENAI_LLM_EXTRA_BODY='{"reasoning": {"enabled": false}}'
### Qwen3-specific parameters when deployed with vLLM
# OPENAI_LLM_EXTRA_BODY='{"chat_template_kwargs": {"enable_thinking": false}}'
### Use the following command to see all supported options for the Ollama LLM binding:
### lightrag-server --llm-binding ollama --help
### Ollama Server Specific Parameters
### OLLAMA_LLM_NUM_CTX must be provided and should be at least MAX_TOTAL_TOKENS + 2000
OLLAMA_LLM_NUM_CTX=32768
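### Check (example): with MAX_TOTAL_TOKENS=30000, the context window must be at
### least 30000 + 2000 = 32000, so OLLAMA_LLM_NUM_CTX=32768 above satisfies the rule.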
### Set the max output tokens to mitigate endless output from some LLMs (keep it below LLM_TIMEOUT * llm_output_tokens/second, e.g. 9000 = 180s * 50 tokens/s)
# OLLAMA_LLM_NUM_PREDICT=9000
### Stop sequences for the Ollama LLM
# OLLAMA_LLM_STOP='["</s>", "<|EOT|>"]'

### Bedrock Specific Parameters
# BEDROCK_LLM_TEMPERATURE=1.0
####################################################################################
### Embedding Configuration (should not be changed after the first file is processed)
### EMBEDDING_BINDING: openai, ollama, azure_openai, jina, lollms, aws_bedrock
### PRIMARY CONFIGURATION: OpenAI (Recommended)
####################################################################################
# EMBEDDING_TIMEOUT=30
EMBEDDING_BINDING=openai
EMBEDDING_MODEL=text-embedding-3-small
EMBEDDING_DIM=1536
EMBEDDING_BINDING_HOST=https://api.openai.com/v1
# EMBEDDING_BINDING_API_KEY=your_openai_api_key (uses the OPENAI_API_KEY env var by default)
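### Note (example): if you switch embedding models, keep EMBEDDING_DIM in sync
### (e.g. 3072 for text-embedding-3-large, 1024 for bge-m3, 2048 for
### jina-embeddings-v4 below) and re-ingest your documents, since vectors from
### different models are not comparable.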
### ALTERNATIVE: text-embedding-3-large (higher quality, higher cost)
# EMBEDDING_BINDING=openai
# EMBEDDING_MODEL=text-embedding-3-large
# EMBEDDING_DIM=3072
# EMBEDDING_BINDING_HOST=https://api.openai.com/v1

### ALTERNATIVE: Local Ollama embedding (no API key required, requires an Ollama service)
# EMBEDDING_BINDING=ollama
# EMBEDDING_MODEL=bge-m3:latest
# EMBEDDING_DIM=1024
# EMBEDDING_BINDING_HOST=http://localhost:11434
# EMBEDDING_BINDING_API_KEY=your_api_key
# If the embedding service is deployed within the same Docker stack, use host.docker.internal instead of localhost

### ALTERNATIVE: Azure OpenAI embedding
# EMBEDDING_BINDING=azure_openai
# AZURE_EMBEDDING_DEPLOYMENT=text-embedding-3-small
# AZURE_EMBEDDING_API_VERSION=2023-05-15
# AZURE_EMBEDDING_ENDPOINT=your_endpoint
# AZURE_EMBEDDING_API_KEY=your_api_key

### ALTERNATIVE: Jina AI embedding
# EMBEDDING_BINDING=jina
# EMBEDDING_BINDING_HOST=https://api.jina.ai/v1/embeddings
# EMBEDDING_MODEL=jina-embeddings-v4
# EMBEDDING_DIM=2048
# EMBEDDING_BINDING_API_KEY=your_api_key

### Ollama embedding options (only used when EMBEDDING_BINDING=ollama)
OLLAMA_EMBEDDING_NUM_CTX=8192
### Use the following command to see all supported options for Ollama embedding:
### lightrag-server --embedding-binding ollama --help
####################################################################
### WORKSPACE sets the workspace name for all storage types,
### isolating data between LightRAG instances.
### Valid workspace name characters: a-z, A-Z, 0-9, and _
####################################################################
# WORKSPACE=space1
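### Example: two LightRAG instances can safely share the same databases by
### giving each its own workspace, e.g. WORKSPACE=tenant_a for one instance and
### WORKSPACE=tenant_b for the other (names here are only illustrative).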
############################
### Data storage selection
############################
### Default storage (Recommended for small scale deployment)
# LIGHTRAG_KV_STORAGE=JsonKVStorage
# LIGHTRAG_DOC_STATUS_STORAGE=JsonDocStatusStorage
# LIGHTRAG_GRAPH_STORAGE=NetworkXStorage
# LIGHTRAG_VECTOR_STORAGE=NanoVectorDBStorage

### Redis Storage (Recommended for production deployment)
# LIGHTRAG_KV_STORAGE=RedisKVStorage
# LIGHTRAG_DOC_STATUS_STORAGE=RedisDocStatusStorage

### Vector Storage (Recommended for production deployment)
# LIGHTRAG_VECTOR_STORAGE=MilvusVectorDBStorage
# LIGHTRAG_VECTOR_STORAGE=QdrantVectorDBStorage
# LIGHTRAG_VECTOR_STORAGE=FaissVectorDBStorage

### Graph Storage (Recommended for production deployment)
# LIGHTRAG_GRAPH_STORAGE=Neo4JStorage
# LIGHTRAG_GRAPH_STORAGE=MemgraphStorage

### PostgreSQL
# LIGHTRAG_KV_STORAGE=PGKVStorage
# LIGHTRAG_DOC_STATUS_STORAGE=PGDocStatusStorage
# LIGHTRAG_GRAPH_STORAGE=PGGraphStorage
# LIGHTRAG_VECTOR_STORAGE=PGVectorStorage

### MongoDB (Vector storage only available on Atlas Cloud)
# LIGHTRAG_KV_STORAGE=MongoKVStorage
# LIGHTRAG_DOC_STATUS_STORAGE=MongoDocStatusStorage
# LIGHTRAG_GRAPH_STORAGE=MongoGraphStorage
# LIGHTRAG_VECTOR_STORAGE=MongoVectorDBStorage
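### Example: for an all-PostgreSQL deployment, uncomment the four PG* lines above
### and fill in the PostgreSQL Configuration section below. Mixing backends
### (e.g. Redis KV + Neo4j graph + Milvus vectors) works the same way: one
### LIGHTRAG_*_STORAGE line per store, plus that backend's connection settings.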
### PostgreSQL Configuration
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=your_username
POSTGRES_PASSWORD='your_password'
POSTGRES_DATABASE=your_database
POSTGRES_MAX_CONNECTIONS=12
# POSTGRES_WORKSPACE=forced_workspace_name
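### Connectivity check (example, assumes the psql client is installed locally):
###   PGPASSWORD='your_password' psql -h localhost -p 5432 -U your_username -d your_database -c 'SELECT 1;'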
### PostgreSQL Vector Storage Configuration
### Vector index type: HNSW, IVFFlat
POSTGRES_VECTOR_INDEX_TYPE=HNSW
POSTGRES_HNSW_M=16
POSTGRES_HNSW_EF=200
POSTGRES_IVFFLAT_LISTS=100

### PostgreSQL SSL Configuration (Optional)
# POSTGRES_SSL_MODE=require
# POSTGRES_SSL_CERT=/path/to/client-cert.pem
# POSTGRES_SSL_KEY=/path/to/client-key.pem
# POSTGRES_SSL_ROOT_CERT=/path/to/ca-cert.pem
# POSTGRES_SSL_CRL=/path/to/crl.pem
### Neo4j Configuration
NEO4J_URI=neo4j+s://xxxxxxxx.databases.neo4j.io
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD='your_password'
NEO4J_DATABASE=neo4j
NEO4J_MAX_CONNECTION_POOL_SIZE=100
NEO4J_CONNECTION_TIMEOUT=30
NEO4J_CONNECTION_ACQUISITION_TIMEOUT=30
NEO4J_MAX_TRANSACTION_RETRY_TIME=30
NEO4J_MAX_CONNECTION_LIFETIME=300
NEO4J_LIVENESS_CHECK_TIMEOUT=30
NEO4J_KEEP_ALIVE=true
# NEO4J_WORKSPACE=forced_workspace_name
### MongoDB Configuration
MONGO_URI=mongodb://root:root@localhost:27017/
# MONGO_URI=mongodb+srv://xxxx
MONGO_DATABASE=LightRAG
# MONGODB_WORKSPACE=forced_workspace_name
### Milvus Configuration
MILVUS_URI=http://localhost:19530
MILVUS_DB_NAME=lightrag
# MILVUS_USER=root
# MILVUS_PASSWORD=your_password
# MILVUS_TOKEN=your_token
# MILVUS_WORKSPACE=forced_workspace_name
### Qdrant Configuration
QDRANT_URL=http://localhost:6333
# QDRANT_API_KEY=your-api-key
# QDRANT_WORKSPACE=forced_workspace_name
### Redis Configuration
REDIS_URI=redis://localhost:6379
REDIS_SOCKET_TIMEOUT=30
REDIS_CONNECT_TIMEOUT=10
REDIS_MAX_CONNECTIONS=100
REDIS_RETRY_ATTEMPTS=3
# REDIS_WORKSPACE=forced_workspace_name
### Memgraph Configuration
MEMGRAPH_URI=bolt://localhost:7687
MEMGRAPH_USERNAME=
MEMGRAPH_PASSWORD=
MEMGRAPH_DATABASE=memgraph
# MEMGRAPH_WORKSPACE=forced_workspace_name