* feat: Implement multi-tenant architecture with tenant and knowledge base models
  - Added data models for tenants, knowledge bases, and related configurations.
  - Introduced role and permission management for users in the multi-tenant system.
  - Created a service layer for managing tenants and knowledge bases, including CRUD operations.
  - Developed a tenant-aware instance manager for LightRAG with caching and isolation features.
  - Added a migration script to transition existing workspace-based deployments to the new multi-tenant architecture.
* chore: ignore lightrag/api/webui/assets/ directory
* chore: stop tracking lightrag/api/webui/assets (ignore in .gitignore)
* feat: Initialize LightRAG Multi-Tenant Stack with PostgreSQL
  - Added README.md for project overview, setup instructions, and architecture details.
  - Created docker-compose.yml to define services: PostgreSQL, Redis, LightRAG API, and Web UI.
  - Introduced env.example for environment variable configuration.
  - Implemented init-postgres.sql for PostgreSQL schema initialization with multi-tenant support.
  - Added reproduce_issue.py for testing default tenant access via API.
* feat: Enhance TenantSelector and update related components for improved multi-tenant support
* feat: Enhance testing capabilities and update documentation
  - Updated Makefile to include new test commands for various modes (compatibility, isolation, multi-tenant, security, coverage, and dry-run).
  - Modified API health check endpoint in Makefile to reflect new port configuration.
  - Updated QUICK_START.md and README.md to reflect changes in service URLs and ports.
  - Added environment variables for testing modes in env.example.
  - Introduced run_all_tests.sh script to automate testing across different modes.
  - Created conftest.py for pytest configuration, including database fixtures and mock services.
  - Implemented database helper functions for streamlined database operations in tests.
  - Added test collection hooks to skip tests based on the current MULTITENANT_MODE.
* feat: Implement multi-tenant support with demo mode enabled by default
  - Added multi-tenant configuration to the environment and Docker setup.
  - Created pre-configured demo tenants (acme-corp and techstart) for testing.
  - Updated API endpoints to support tenant-specific data access.
  - Enhanced Makefile commands for better service management and database operations.
  - Introduced user-tenant membership system with role-based access control.
  - Added comprehensive documentation for multi-tenant setup and usage.
  - Fixed issues with document visibility in multi-tenant environments.
  - Implemented necessary database migrations for user memberships and legacy support.
* feat(audit): Add final audit report for multi-tenant implementation
  - Documented overall assessment, architecture overview, test results, security findings, and recommendations.
  - Included detailed findings on critical security issues and architectural concerns.
* fix(security): Implement security fixes based on audit findings
  - Removed global RAG fallback and enforced strict tenant context.
  - Configured super-admin access and required user authentication for tenant access.
  - Cleared localStorage on logout and improved error handling in WebUI.
* chore(logs): Create task logs for audit and security fixes implementation
  - Documented actions, decisions, and next steps for both audit and security fixes.
  - Summarized test results and remaining recommendations.
* chore(scripts): Enhance development stack management scripts
  - Added scripts for cleaning, starting, and stopping the development stack.
  - Improved output messages and ensured graceful shutdown of services.
* feat(starter): Initialize PostgreSQL with AGE extension support
  - Created initialization scripts for PostgreSQL extensions including uuid-ossp, vector, and AGE.
  - Ensured successful installation and verification of extensions.
* feat: Implement auto-select for first tenant and KB on initial load in WebUI
  - Removed WEBUI_INITIAL_STATE_FIX.md as the issue is resolved.
  - Added useTenantInitialization hook to automatically select the first available tenant and KB on app load.
  - Integrated the new hook into the Root component of the WebUI.
  - Updated RetrievalTesting component to ensure a KB is selected before allowing user interaction.
  - Created end-to-end tests for multi-tenant isolation and real service interactions.
  - Added scripts for starting, stopping, and cleaning the development stack.
  - Enhanced API and tenant routes to support tenant-specific pipeline status initialization.
  - Updated constants for backend URL to reflect the correct port.
  - Improved error handling and logging in various components.
* feat: Add multi-tenant support with enhanced E2E testing scripts and client functionality
* update client
* Add integration and unit tests for multi-tenant API, models, security, and storage
  - Implement integration tests for tenant and knowledge base management endpoints in `test_tenant_api_routes.py`.
  - Create unit tests for tenant isolation, model validation, and role permissions in `test_tenant_models.py`.
  - Add security tests to enforce role-based permissions and context validation in `test_tenant_security.py`.
  - Develop tests for tenant-aware storage operations and context isolation in `test_tenant_storage_phase3.py`.
* feat(e2e): Implement OpenAI model support and database reset functionality
* Add comprehensive test suite for gpt-5-nano compatibility
  - Introduced tests for parameter normalization, embeddings, and entity extraction.
  - Implemented direct API testing for gpt-5-nano.
  - Validated .env configuration loading and OpenAI API connectivity.
  - Analyzed reasoning token overhead with various token limits.
  - Documented test procedures and expected outcomes in README files.
  - Ensured all tests pass for production readiness.
* kg(postgres_impl): ensure AGE extension is loaded in session and configure graph initialization
* dev: add hybrid dev helper scripts, Makefile, docker-compose.dev-db and local development docs
* feat(dev): add dev helper scripts and local development documentation for hybrid setup
* feat(multi-tenant): add detailed specifications and logs for multi-tenant improvements, including UX, backend handling, and ingestion pipeline
* feat(migration): add generated tenant/kb columns, indexes, triggers; drop unused tables; update schema and docs
* test(backward-compat): adapt tests to new StorageNameSpace/TenantService APIs (use concrete dummy storages)
* chore: multi-tenant and UX updates (docs, webui, storage, tenant service adjustments)
* tests: stabilize integration tests + skip external services; fix multi-tenant API behavior and idempotency
  - gpt5_nano_compatibility: add pytest-asyncio markers, skip when OPENAI key missing, prevent module-level asyncio.run collection, add conftest
  - Ollama tests: add server availability check and skip markers; avoid pytest collection warnings by renaming helper classes
  - Graph storage tests: rename interactive test functions to avoid pytest collection
  - Document & Tenant routes: support external_ids for idempotency; ensure HTTPExceptions are re-raised
  - LightRAG core: support external_ids in apipeline_enqueue_documents and idempotent logic
  - Tests updated to match API changes (tenant routes & document routes)
  - Add logs and scripts for inspection and audit
447 lines
16 KiB
Python
"""
|
|
Configs for the LightRAG API.
|
|
"""
|
|
|
|
import os
|
|
import argparse
|
|
import logging
|
|
from dotenv import load_dotenv
|
|
from lightrag.utils import get_env_value
|
|
from lightrag.llm.binding_options import (
|
|
OllamaEmbeddingOptions,
|
|
OllamaLLMOptions,
|
|
OpenAILLMOptions,
|
|
)
|
|
from lightrag.base import OllamaServerInfos
|
|
import sys
|
|
|
|
from lightrag.constants import (
|
|
DEFAULT_WOKERS,
|
|
DEFAULT_TIMEOUT,
|
|
DEFAULT_TOP_K,
|
|
DEFAULT_CHUNK_TOP_K,
|
|
DEFAULT_HISTORY_TURNS,
|
|
DEFAULT_MAX_ENTITY_TOKENS,
|
|
DEFAULT_MAX_RELATION_TOKENS,
|
|
DEFAULT_MAX_TOTAL_TOKENS,
|
|
DEFAULT_COSINE_THRESHOLD,
|
|
DEFAULT_RELATED_CHUNK_NUMBER,
|
|
DEFAULT_MIN_RERANK_SCORE,
|
|
DEFAULT_FORCE_LLM_SUMMARY_ON_MERGE,
|
|
DEFAULT_MAX_ASYNC,
|
|
DEFAULT_SUMMARY_MAX_TOKENS,
|
|
DEFAULT_SUMMARY_LENGTH_RECOMMENDED,
|
|
DEFAULT_SUMMARY_CONTEXT_SIZE,
|
|
DEFAULT_SUMMARY_LANGUAGE,
|
|
DEFAULT_EMBEDDING_FUNC_MAX_ASYNC,
|
|
DEFAULT_EMBEDDING_BATCH_NUM,
|
|
DEFAULT_OLLAMA_MODEL_NAME,
|
|
DEFAULT_OLLAMA_MODEL_TAG,
|
|
DEFAULT_RERANK_BINDING,
|
|
DEFAULT_ENTITY_TYPES,
|
|
)
|
|
|
|
# Multi-tenant security configuration
|
|
# When True, enforces tenant context on all data endpoints (SEC-001 fix)
|
|
MULTI_TENANT_STRICT_MODE = get_env_value("LIGHTRAG_MULTI_TENANT_STRICT", True, bool)
|
|
# When True, requires user authentication for tenant access (SEC-003 fix)
|
|
REQUIRE_USER_AUTH = get_env_value("LIGHTRAG_REQUIRE_USER_AUTH", True, bool)
|
|
# Comma-separated list of super-admin usernames (SEC-002 fix)
|
|
SUPER_ADMIN_USERS = get_env_value("LIGHTRAG_SUPER_ADMIN_USERS", "admin")
|
|
|
|
# use the .env that is inside the current folder
|
|
# allows to use different .env file for each lightrag instance
|
|
# the OS environment variables take precedence over the .env file
|
|
load_dotenv(dotenv_path=".env", override=False)
|
|
|
|
|
|
ollama_server_infos = OllamaServerInfos()
|
|
|
|
|
|
class DefaultRAGStorageConfig:
    KV_STORAGE = "JsonKVStorage"
    VECTOR_STORAGE = "NanoVectorDBStorage"
    GRAPH_STORAGE = "NetworkXStorage"
    DOC_STATUS_STORAGE = "JsonDocStatusStorage"


def get_default_host(binding_type: str) -> str:
    default_hosts = {
        "ollama": os.getenv("LLM_BINDING_HOST", "http://localhost:11434"),
        "lollms": os.getenv("LLM_BINDING_HOST", "http://localhost:9600"),
        "azure_openai": os.getenv("AZURE_OPENAI_ENDPOINT", "https://api.openai.com/v1"),
        "openai": os.getenv("LLM_BINDING_HOST", "https://api.openai.com/v1"),
    }
    return default_hosts.get(
        binding_type, os.getenv("LLM_BINDING_HOST", "http://localhost:11434")
    )  # fallback to ollama if unknown

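The resolution order in `get_default_host` is: explicit environment variable, then the per-binding default, then the Ollama default for unknown bindings. A standalone sketch of the same dict-lookup-with-fallback pattern (binding names and URLs copied from the table above; `resolve_host` is illustrative, not the module's function):

```python
import os


def resolve_host(binding_type: str) -> str:
    """Dict lookup with env-var override and an Ollama fallback, as in get_default_host."""
    defaults = {
        "ollama": "http://localhost:11434",
        "lollms": "http://localhost:9600",
        "openai": "https://api.openai.com/v1",
    }
    # LLM_BINDING_HOST wins if set; otherwise the per-binding default;
    # otherwise fall back to the Ollama default for unknown bindings.
    return os.getenv("LLM_BINDING_HOST", defaults.get(binding_type, defaults["ollama"]))


os.environ.pop("LLM_BINDING_HOST", None)  # make the demo deterministic
print(resolve_host("lollms"))   # http://localhost:9600
print(resolve_host("unknown"))  # http://localhost:11434
```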
def _is_running_under_test() -> bool:
    """Check if we're running under pytest or other test framework."""
    return (
        "pytest" in sys.modules
        or "py.test" in sys.modules
        or "_pytest" in sys.modules
        or os.getenv("PYTEST_CURRENT_TEST") is not None
        or any("pytest" in arg for arg in sys.argv)
    )

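The check combines several pytest fingerprints; `PYTEST_CURRENT_TEST` is the environment variable pytest sets for the currently running test. A self-contained sketch of the same detection (the name `running_under_pytest` is illustrative):

```python
import os
import sys


def running_under_pytest() -> bool:
    # Any of these signals indicates a pytest-driven process.
    return (
        "pytest" in sys.modules
        or os.getenv("PYTEST_CURRENT_TEST") is not None
        or any("pytest" in arg for arg in sys.argv)
    )


# Simulate pytest having set its per-test marker variable.
os.environ["PYTEST_CURRENT_TEST"] = "tests/test_config.py::test_parse (call)"
print(running_under_pytest())  # True
```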
def parse_args() -> argparse.Namespace:
    """
    Parse command line arguments with environment variable fallback.

    Returns:
        argparse.Namespace: Parsed arguments
    """
    # When running under pytest, use parse_known_args to avoid conflicts
    use_known_args = _is_running_under_test()

    parser = argparse.ArgumentParser(description="LightRAG API Server")

    # Server configuration
    parser.add_argument(
        "--host",
        default=get_env_value("HOST", "0.0.0.0"),
        help="Server host (default: from env or 0.0.0.0)",
    )
    parser.add_argument(
        "--port",
        type=int,
        default=get_env_value("PORT", 9621, int),
        help="Server port (default: from env or 9621)",
    )

    # Directory configuration
    parser.add_argument(
        "--working-dir",
        default=get_env_value("WORKING_DIR", "./rag_storage"),
        help="Working directory for RAG storage (default: from env or ./rag_storage)",
    )
    parser.add_argument(
        "--input-dir",
        default=get_env_value("INPUT_DIR", "./inputs"),
        help="Directory containing input documents (default: from env or ./inputs)",
    )

    parser.add_argument(
        "--timeout",
        default=get_env_value("TIMEOUT", DEFAULT_TIMEOUT, int, special_none=True),
        type=int,
        help="Timeout in seconds (useful when using slow AI). Use None for infinite timeout",
    )

    # RAG configuration
    parser.add_argument(
        "--max-async",
        type=int,
        default=get_env_value("MAX_ASYNC", DEFAULT_MAX_ASYNC, int),
        help=f"Maximum async operations (default: from env or {DEFAULT_MAX_ASYNC})",
    )
    parser.add_argument(
        "--summary-max-tokens",
        type=int,
        default=get_env_value("SUMMARY_MAX_TOKENS", DEFAULT_SUMMARY_MAX_TOKENS, int),
        help=f"Maximum token size for entity/relation summary (default: from env or {DEFAULT_SUMMARY_MAX_TOKENS})",
    )
    parser.add_argument(
        "--summary-context-size",
        type=int,
        default=get_env_value(
            "SUMMARY_CONTEXT_SIZE", DEFAULT_SUMMARY_CONTEXT_SIZE, int
        ),
        help=f"LLM summary context size (default: from env or {DEFAULT_SUMMARY_CONTEXT_SIZE})",
    )
    parser.add_argument(
        "--summary-length-recommended",
        type=int,
        default=get_env_value(
            "SUMMARY_LENGTH_RECOMMENDED", DEFAULT_SUMMARY_LENGTH_RECOMMENDED, int
        ),
        help=f"Recommended LLM summary length (default: from env or {DEFAULT_SUMMARY_LENGTH_RECOMMENDED})",
    )

    # Logging configuration
    parser.add_argument(
        "--log-level",
        default=get_env_value("LOG_LEVEL", "INFO"),
        choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"],
        help="Logging level (default: from env or INFO)",
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        default=get_env_value("VERBOSE", False, bool),
        help="Enable verbose debug output (only valid for DEBUG log-level)",
    )

    parser.add_argument(
        "--key",
        type=str,
        default=get_env_value("LIGHTRAG_API_KEY", None),
        help="API key for authentication. This protects the LightRAG server against unauthorized access",
    )

    # Optional https parameters
    parser.add_argument(
        "--ssl",
        action="store_true",
        default=get_env_value("SSL", False, bool),
        help="Enable HTTPS (default: from env or False)",
    )
    parser.add_argument(
        "--ssl-certfile",
        default=get_env_value("SSL_CERTFILE", None),
        help="Path to SSL certificate file (required if --ssl is enabled)",
    )
    parser.add_argument(
        "--ssl-keyfile",
        default=get_env_value("SSL_KEYFILE", None),
        help="Path to SSL private key file (required if --ssl is enabled)",
    )

    # Ollama model configuration
    parser.add_argument(
        "--simulated-model-name",
        type=str,
        default=get_env_value("OLLAMA_EMULATING_MODEL_NAME", DEFAULT_OLLAMA_MODEL_NAME),
        help="Name for the simulated Ollama model (default: from env or lightrag)",
    )
    parser.add_argument(
        "--simulated-model-tag",
        type=str,
        default=get_env_value("OLLAMA_EMULATING_MODEL_TAG", DEFAULT_OLLAMA_MODEL_TAG),
        help="Tag for the simulated Ollama model (default: from env or latest)",
    )

    # Namespace
    parser.add_argument(
        "--workspace",
        type=str,
        default=get_env_value("WORKSPACE", ""),
        help="Default workspace for all storage",
    )

    # Server workers configuration
    parser.add_argument(
        "--workers",
        type=int,
        default=get_env_value("WORKERS", DEFAULT_WOKERS, int),
        help="Number of worker processes (default: from env or 1)",
    )

    # LLM and embedding bindings
    parser.add_argument(
        "--llm-binding",
        type=str,
        default=get_env_value("LLM_BINDING", "openai"),
        choices=[
            "lollms",
            "ollama",
            "openai",
            "openai-ollama",
            "azure_openai",
            "aws_bedrock",
        ],
        help="LLM binding type (default: from env or openai)",
    )
    parser.add_argument(
        "--embedding-binding",
        type=str,
        default=get_env_value("EMBEDDING_BINDING", "openai"),
        choices=["lollms", "ollama", "openai", "azure_openai", "aws_bedrock", "jina"],
        help="Embedding binding type (default: from env or openai)",
    )
    parser.add_argument(
        "--rerank-binding",
        type=str,
        default=get_env_value("RERANK_BINDING", DEFAULT_RERANK_BINDING),
        choices=["null", "cohere", "jina", "aliyun"],
        help=f"Rerank binding type (default: from env or {DEFAULT_RERANK_BINDING})",
    )

    # Conditionally add binding options defined in the binding_options module.
    # This adds command line arguments for all binding options
    # (e.g., --ollama-embedding-num_ctx) and corresponding environment
    # variables (e.g., OLLAMA_EMBEDDING_NUM_CTX).
    if "--llm-binding" in sys.argv:
        try:
            idx = sys.argv.index("--llm-binding")
            if idx + 1 < len(sys.argv) and sys.argv[idx + 1] == "ollama":
                OllamaLLMOptions.add_args(parser)
        except IndexError:
            pass
    elif os.environ.get("LLM_BINDING") == "ollama":
        OllamaLLMOptions.add_args(parser)

    if "--embedding-binding" in sys.argv:
        try:
            idx = sys.argv.index("--embedding-binding")
            if idx + 1 < len(sys.argv) and sys.argv[idx + 1] == "ollama":
                OllamaEmbeddingOptions.add_args(parser)
        except IndexError:
            pass
    elif os.environ.get("EMBEDDING_BINDING") == "ollama":
        OllamaEmbeddingOptions.add_args(parser)

    # Add OpenAI LLM options when llm-binding is openai or azure_openai
    if "--llm-binding" in sys.argv:
        try:
            idx = sys.argv.index("--llm-binding")
            if idx + 1 < len(sys.argv) and sys.argv[idx + 1] in [
                "openai",
                "azure_openai",
            ]:
                OpenAILLMOptions.add_args(parser)
        except IndexError:
            pass
    elif os.environ.get("LLM_BINDING") in ["openai", "azure_openai"]:
        OpenAILLMOptions.add_args(parser)

    # Use parse_known_args under test frameworks to avoid argument conflicts
    if use_known_args:
        args, _ = parser.parse_known_args()
    else:
        args = parser.parse_args()

    # Convert relative paths to absolute paths
    args.working_dir = os.path.abspath(args.working_dir)
    args.input_dir = os.path.abspath(args.input_dir)

    # Inject storage configuration from environment variables
    args.kv_storage = get_env_value(
        "LIGHTRAG_KV_STORAGE", DefaultRAGStorageConfig.KV_STORAGE
    )
    args.doc_status_storage = get_env_value(
        "LIGHTRAG_DOC_STATUS_STORAGE", DefaultRAGStorageConfig.DOC_STATUS_STORAGE
    )
    args.graph_storage = get_env_value(
        "LIGHTRAG_GRAPH_STORAGE", DefaultRAGStorageConfig.GRAPH_STORAGE
    )
    args.vector_storage = get_env_value(
        "LIGHTRAG_VECTOR_STORAGE", DefaultRAGStorageConfig.VECTOR_STORAGE
    )

    # Get MAX_PARALLEL_INSERT from environment
    args.max_parallel_insert = get_env_value("MAX_PARALLEL_INSERT", 2, int)

    # Get MAX_GRAPH_NODES from environment
    args.max_graph_nodes = get_env_value("MAX_GRAPH_NODES", 1000, int)

    # Handle openai-ollama special case
    if args.llm_binding == "openai-ollama":
        args.llm_binding = "openai"
        args.embedding_binding = "ollama"

    # Ollama ctx_num
    args.ollama_num_ctx = get_env_value("OLLAMA_NUM_CTX", 32768, int)

    args.llm_binding_host = get_env_value(
        "LLM_BINDING_HOST", get_default_host(args.llm_binding)
    )
    args.embedding_binding_host = get_env_value(
        "EMBEDDING_BINDING_HOST", get_default_host(args.embedding_binding)
    )
    args.llm_binding_api_key = get_env_value("LLM_BINDING_API_KEY", None)
    args.embedding_binding_api_key = get_env_value("EMBEDDING_BINDING_API_KEY", "")

    # Inject model configuration
    args.llm_model = get_env_value("LLM_MODEL", "mistral-nemo:latest")
    args.embedding_model = get_env_value("EMBEDDING_MODEL", "bge-m3:latest")
    args.embedding_dim = get_env_value("EMBEDDING_DIM", 1024, int)

    # Inject chunk configuration
    args.chunk_size = get_env_value("CHUNK_SIZE", 1200, int)
    args.chunk_overlap_size = get_env_value("CHUNK_OVERLAP_SIZE", 100, int)

    # Inject LLM cache configuration
    args.enable_llm_cache_for_extract = get_env_value(
        "ENABLE_LLM_CACHE_FOR_EXTRACT", True, bool
    )
    args.enable_llm_cache = get_env_value("ENABLE_LLM_CACHE", True, bool)

    # Select document loading tool (DOCLING, DEFAULT)
    args.document_loading_engine = get_env_value("DOCUMENT_LOADING_ENGINE", "DEFAULT")

    # Add environment variables that were previously read directly
    args.cors_origins = get_env_value("CORS_ORIGINS", "*")
    args.summary_language = get_env_value("SUMMARY_LANGUAGE", DEFAULT_SUMMARY_LANGUAGE)
    args.entity_types = get_env_value("ENTITY_TYPES", DEFAULT_ENTITY_TYPES, list)
    args.whitelist_paths = get_env_value("WHITELIST_PATHS", "/health,/api/*")

    # For JWT Auth
    args.auth_accounts = get_env_value("AUTH_ACCOUNTS", "")
    if not args.auth_accounts:
        auth_user = get_env_value("AUTH_USER", "")
        auth_pass = get_env_value("AUTH_PASS", "")
        if auth_user and auth_pass:
            args.auth_accounts = f"{auth_user}:{auth_pass}"
    args.token_secret = get_env_value("TOKEN_SECRET", "lightrag-jwt-default-secret")
    args.token_expire_hours = get_env_value("TOKEN_EXPIRE_HOURS", 48, int)
    args.guest_token_expire_hours = get_env_value("GUEST_TOKEN_EXPIRE_HOURS", 24, int)
    args.jwt_algorithm = get_env_value("JWT_ALGORITHM", "HS256")

    # Rerank model configuration
    args.rerank_model = get_env_value("RERANK_MODEL", None)
    args.rerank_binding_host = get_env_value("RERANK_BINDING_HOST", None)
    args.rerank_binding_api_key = get_env_value("RERANK_BINDING_API_KEY", None)
    # Note: rerank_binding is already set by argparse, no need to override from env

    # Min rerank score configuration
    args.min_rerank_score = get_env_value(
        "MIN_RERANK_SCORE", DEFAULT_MIN_RERANK_SCORE, float
    )

    # Query configuration
    args.history_turns = get_env_value("HISTORY_TURNS", DEFAULT_HISTORY_TURNS, int)
    args.top_k = get_env_value("TOP_K", DEFAULT_TOP_K, int)
    args.chunk_top_k = get_env_value("CHUNK_TOP_K", DEFAULT_CHUNK_TOP_K, int)
    args.max_entity_tokens = get_env_value(
        "MAX_ENTITY_TOKENS", DEFAULT_MAX_ENTITY_TOKENS, int
    )
    args.max_relation_tokens = get_env_value(
        "MAX_RELATION_TOKENS", DEFAULT_MAX_RELATION_TOKENS, int
    )
    args.max_total_tokens = get_env_value(
        "MAX_TOTAL_TOKENS", DEFAULT_MAX_TOTAL_TOKENS, int
    )
    args.cosine_threshold = get_env_value(
        "COSINE_THRESHOLD", DEFAULT_COSINE_THRESHOLD, float
    )
    args.related_chunk_number = get_env_value(
        "RELATED_CHUNK_NUMBER", DEFAULT_RELATED_CHUNK_NUMBER, int
    )

    # Add missing environment variables for health endpoint
    args.force_llm_summary_on_merge = get_env_value(
        "FORCE_LLM_SUMMARY_ON_MERGE", DEFAULT_FORCE_LLM_SUMMARY_ON_MERGE, int
    )
    args.embedding_func_max_async = get_env_value(
        "EMBEDDING_FUNC_MAX_ASYNC", DEFAULT_EMBEDDING_FUNC_MAX_ASYNC, int
    )
    args.embedding_batch_num = get_env_value(
        "EMBEDDING_BATCH_NUM", DEFAULT_EMBEDDING_BATCH_NUM, int
    )

    ollama_server_infos.LIGHTRAG_NAME = args.simulated_model_name
    ollama_server_infos.LIGHTRAG_TAG = args.simulated_model_tag

    return args

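The pattern used throughout `parse_args` is "CLI flag, else environment variable, else built-in default": the environment is consulted when the parser is built, so an explicit flag always overrides it. A minimal standalone sketch of that precedence (the `build_parser` helper and `PORT` handling here are illustrative, not LightRAG's code):

```python
import argparse
import os


def build_parser(default_port: int = 9621) -> argparse.ArgumentParser:
    """Each option's default is resolved from the environment at parser-build time."""
    parser = argparse.ArgumentParser(description="env-fallback demo")
    parser.add_argument(
        "--port",
        type=int,
        default=int(os.environ.get("PORT", default_port)),
        help=f"Server port (default: from env or {default_port})",
    )
    return parser


os.environ.pop("PORT", None)
print(build_parser().parse_args([]).port)                  # 9621: built-in default

os.environ["PORT"] = "8080"
print(build_parser().parse_args([]).port)                  # 8080: env var beats default
print(build_parser().parse_args(["--port", "9000"]).port)  # 9000: flag beats env
```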
def update_uvicorn_mode_config():
    # If in uvicorn mode and workers > 1, force it to 1 and log a warning
    if global_args.workers > 1:
        original_workers = global_args.workers
        global_args.workers = 1
        # Log warning directly here
        logging.warning(
            f">> Forcing workers=1 in uvicorn mode (ignoring workers={original_workers})"
        )


global_args = parse_args()
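`update_uvicorn_mode_config` mutates the module-level `global_args` in place, clamping the worker count because a single uvicorn process cannot manage multiple workers itself. The effect can be sketched on a plain `Namespace` (the `force_single_worker` name is illustrative):

```python
import argparse
import logging


def force_single_worker(args: argparse.Namespace) -> argparse.Namespace:
    """Mirror of update_uvicorn_mode_config, but operating on an explicit namespace."""
    if args.workers > 1:
        original = args.workers
        args.workers = 1
        logging.warning(
            ">> Forcing workers=1 in uvicorn mode (ignoring workers=%s)", original
        )
    return args


cfg = argparse.Namespace(workers=4)
print(force_single_worker(cfg).workers)  # 1
```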