* feat: Implement multi-tenant architecture with tenant and knowledge base models
  - Added data models for tenants, knowledge bases, and related configurations.
  - Introduced role and permission management for users in the multi-tenant system.
  - Created a service layer for managing tenants and knowledge bases, including CRUD operations.
  - Developed a tenant-aware instance manager for LightRAG with caching and isolation features (see the sketch after this commit list).
  - Added a migration script to transition existing workspace-based deployments to the new multi-tenant architecture.

* chore: ignore lightrag/api/webui/assets/ directory

* chore: stop tracking lightrag/api/webui/assets (ignore in .gitignore)

* feat: Initialize LightRAG Multi-Tenant Stack with PostgreSQL
  - Added README.md for project overview, setup instructions, and architecture details.
  - Created docker-compose.yml to define services: PostgreSQL, Redis, LightRAG API, and Web UI.
  - Introduced env.example for environment variable configuration.
  - Implemented init-postgres.sql for PostgreSQL schema initialization with multi-tenant support.
  - Added reproduce_issue.py for testing default tenant access via the API.

* feat: Enhance TenantSelector and update related components for improved multi-tenant support

* feat: Enhance testing capabilities and update documentation
  - Updated the Makefile to include new test commands for various modes (compatibility, isolation, multi-tenant, security, coverage, and dry-run).
  - Modified the API health check endpoint in the Makefile to reflect the new port configuration.
  - Updated QUICK_START.md and README.md to reflect changes in service URLs and ports.
  - Added environment variables for testing modes in env.example.
  - Introduced the run_all_tests.sh script to automate testing across the different modes.
  - Created conftest.py for pytest configuration, including database fixtures and mock services.
  - Implemented database helper functions to streamline database operations in tests.
  - Added test collection hooks to skip tests based on the current MULTITENANT_MODE.

* feat: Implement multi-tenant support with demo mode enabled by default
  - Added multi-tenant configuration to the environment and Docker setup.
  - Created pre-configured demo tenants (acme-corp and techstart) for testing.
  - Updated API endpoints to support tenant-specific data access.
  - Enhanced Makefile commands for better service management and database operations.
  - Introduced a user-tenant membership system with role-based access control.
  - Added comprehensive documentation for multi-tenant setup and usage.
  - Fixed issues with document visibility in multi-tenant environments.
  - Implemented the necessary database migrations for user memberships and legacy support.

* feat(audit): Add final audit report for multi-tenant implementation
  - Documented the overall assessment, architecture overview, test results, security findings, and recommendations.
  - Included detailed findings on critical security issues and architectural concerns.

  fix(security): Implement security fixes based on audit findings
  - Removed the global RAG fallback and enforced a strict tenant context.
  - Configured super-admin access and required user authentication for tenant access.
  - Cleared localStorage on logout and improved error handling in the WebUI.

  chore(logs): Create task logs for the audit and security fixes implementation
  - Documented actions, decisions, and next steps for both the audit and the security fixes.
  - Summarized test results and remaining recommendations.
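The tenant-aware instance manager itself is not part of this file view. As a rough illustration of the caching-and-isolation idea named above, such a manager might keep one LightRAG instance per (tenant_id, kb_id) pair behind a lock. Everything below (the class name, the _build_rag factory, the cache key) is a hypothetical sketch, not the repository's actual implementation.

import asyncio

# Hypothetical sketch: cache one isolated LightRAG instance per (tenant_id, kb_id).
class TenantInstanceManager:
    def __init__(self, build_rag):
        self._build_rag = build_rag  # factory: (tenant_id, kb_id) -> new instance
        self._instances = {}         # (tenant_id, kb_id) -> cached instance
        self._lock = asyncio.Lock()

    async def get(self, tenant_id: str, kb_id: str):
        key = (tenant_id, kb_id)
        async with self._lock:  # serialize creation so each pair is built once
            if key not in self._instances:
                # Each tenant/KB pair gets its own instance (and its own storage
                # paths), which is what provides the isolation.
                self._instances[key] = await self._build_rag(tenant_id, kb_id)
            return self._instances[key]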
* chore(scripts): Enhance development stack management scripts
  - Added scripts for cleaning, starting, and stopping the development stack.
  - Improved output messages and ensured graceful shutdown of services.

  feat(starter): Initialize PostgreSQL with AGE extension support
  - Created initialization scripts for PostgreSQL extensions, including uuid-ossp, vector, and AGE.
  - Ensured successful installation and verification of the extensions.

* feat: Implement auto-select for the first tenant and KB on initial load in the WebUI
  - Removed WEBUI_INITIAL_STATE_FIX.md, as the issue is resolved.
  - Added a useTenantInitialization hook to automatically select the first available tenant and KB on app load.
  - Integrated the new hook into the Root component of the WebUI.
  - Updated the RetrievalTesting component to ensure a KB is selected before allowing user interaction.
  - Created end-to-end tests for multi-tenant isolation and real service interactions.
  - Added scripts for starting, stopping, and cleaning the development stack.
  - Enhanced API and tenant routes to support tenant-specific pipeline status initialization.
  - Updated constants for the backend URL to reflect the correct port.
  - Improved error handling and logging in various components.

* feat: Add multi-tenant support with enhanced E2E testing scripts and client functionality

* update client

* Add integration and unit tests for the multi-tenant API, models, security, and storage
  - Implement integration tests for tenant and knowledge base management endpoints in `test_tenant_api_routes.py`.
  - Create unit tests for tenant isolation, model validation, and role permissions in `test_tenant_models.py`.
  - Add security tests to enforce role-based permissions and context validation in `test_tenant_security.py`.
  - Develop tests for tenant-aware storage operations and context isolation in `test_tenant_storage_phase3.py`.

* feat(e2e): Implement OpenAI model support and database reset functionality

* Add comprehensive test suite for gpt-5-nano compatibility
  - Introduced tests for parameter normalization, embeddings, and entity extraction (a normalization sketch follows this list).
  - Implemented direct API testing for gpt-5-nano.
  - Validated .env configuration loading and OpenAI API connectivity.
  - Analyzed reasoning-token overhead with various token limits.
  - Documented test procedures and expected outcomes in README files.
  - Ensured all tests pass for production readiness.
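On the parameter-normalization tests: OpenAI's reasoning-style models (including the gpt-5 family) take max_completion_tokens on the chat completions endpoint instead of the legacy max_tokens. A minimal sketch of that kind of normalization follows; the function name and the model-prefix check are assumptions, not the repository's actual logic.

def normalize_openai_params(model: str, params: dict) -> dict:
    """Rename max_tokens to max_completion_tokens for reasoning-style models.

    Hypothetical helper: the prefix list and exact rules are assumptions.
    """
    params = dict(params)  # work on a copy; do not mutate the caller's dict
    if model.startswith(("gpt-5", "o1", "o3")) and "max_tokens" in params:
        params["max_completion_tokens"] = params.pop("max_tokens")
    return params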
* kg(postgres_impl): ensure the AGE extension is loaded in the session and configure graph initialization (see the session-setup sketch below)

* dev: add hybrid dev helper scripts, Makefile, docker-compose.dev-db, and local development docs

* feat(dev): add dev helper scripts and local development documentation for the hybrid setup

* feat(multi-tenant): add detailed specifications and logs for multi-tenant improvements, including UX, backend handling, and the ingestion pipeline

* feat(migration): add generated tenant/kb columns, indexes, and triggers; drop unused tables; update schema and docs

* test(backward-compat): adapt tests to the new StorageNameSpace/TenantService APIs (use concrete dummy storages)

* chore: multi-tenant and UX updates (docs, webui, storage, tenant service adjustments)

* tests: stabilize integration tests and skip external services; fix multi-tenant API behavior and idempotency
  - gpt5_nano_compatibility: add pytest-asyncio markers, skip when the OPENAI key is missing, prevent module-level asyncio.run collection, add conftest
  - Ollama tests: add a server-availability check and skip markers; avoid pytest collection warnings by renaming helper classes
  - Graph storage tests: rename interactive test functions to avoid pytest collection
  - Document & Tenant routes: support external_ids for idempotency; ensure HTTPExceptions are re-raised
  - LightRAG core: support external_ids in apipeline_enqueue_documents and the idempotency logic
  - Update tests to match the API changes (tenant routes & document routes)
  - Add logs and scripts for inspection and audit
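On the AGE session fix: Apache AGE is created once per database but must be loaded and placed on the search_path in each session, so the per-connection setup typically looks like the sketch below (assuming an asyncpg connection; the driver and the function name are assumptions, not the repository's code).

async def ensure_age_loaded(conn) -> None:
    # Create the extension once per database (no-op if it already exists),
    # then load it and expose ag_catalog in every new session.
    await conn.execute("CREATE EXTENSION IF NOT EXISTS age;")
    await conn.execute("LOAD 'age';")
    await conn.execute('SET search_path = ag_catalog, "$user", public;')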
291 lines · 11 KiB · Python
import os
import time
from dataclasses import dataclass
from typing import Any, final

from lightrag.base import (
    BaseKVStorage,
)
from lightrag.utils import (
    load_json,
    logger,
    write_json,
)
from lightrag.exceptions import StorageNotInitializedError
from .shared_storage import (
    get_namespace_data,
    get_storage_lock,
    get_data_init_lock,
    get_update_flag,
    set_all_update_flags,
    clear_all_update_flags,
    try_initialize_namespace,
)


@final
@dataclass
class JsonKVStorage(BaseKVStorage):
    def __post_init__(self):
        working_dir = self.global_config["working_dir"]

        # Get composite workspace (supports multi-tenant isolation)
        composite_workspace = self._get_composite_workspace()

        if composite_workspace and composite_workspace != "_":
            # Include the composite workspace in the file path for data isolation.
            # Multi-tenant: tenant_id:kb_id:workspace; single-tenant: just workspace.
            workspace_dir = os.path.join(working_dir, composite_workspace)
            self.final_namespace = f"{composite_workspace}_{self.namespace}"
        else:
            # Default behavior when the workspace is empty
            workspace_dir = working_dir
            self.final_namespace = self.namespace
            composite_workspace = "_"

        os.makedirs(workspace_dir, exist_ok=True)
        self._file_name = os.path.join(workspace_dir, f"kv_store_{self.namespace}.json")

        self._data = None
        self._storage_lock = None
        self.storage_updated = None

    async def initialize(self):
        """Initialize storage data"""
        self._storage_lock = get_storage_lock()
        self.storage_updated = await get_update_flag(self.final_namespace)
        async with get_data_init_lock():
            # The need_init check must happen before get_namespace_data
            need_init = await try_initialize_namespace(self.final_namespace)
            self._data = await get_namespace_data(self.final_namespace)
            if need_init:
                loaded_data = load_json(self._file_name) or {}
                async with self._storage_lock:
                    # Migrate legacy cache structure if needed
                    if self.namespace.endswith("_cache"):
                        loaded_data = await self._migrate_legacy_cache_structure(
                            loaded_data
                        )

                    self._data.update(loaded_data)
                    data_count = len(loaded_data)

                    logger.info(
                        f"[{self.workspace}] Process {os.getpid()} KV load {self.namespace} with {data_count} records"
                    )

    async def index_done_callback(self) -> None:
        async with self._storage_lock:
            if self.storage_updated.value:
                data_dict = (
                    dict(self._data) if hasattr(self._data, "_getvalue") else self._data
                )

                # Calculate the data count - all data is now flattened
                data_count = len(data_dict)

                logger.debug(
                    f"[{self.workspace}] Process {os.getpid()} KV writing {data_count} records to {self.namespace}"
                )
                write_json(data_dict, self._file_name)
                await clear_all_update_flags(self.final_namespace)

    async def get_all(self) -> dict[str, Any]:
        """Get all data from storage

        Returns:
            Dictionary containing all stored data
        """
        async with self._storage_lock:
            result = {}
            for key, value in self._data.items():
                if value:
                    # Create a copy to avoid modifying the original data
                    data = dict(value)
                    # Ensure time fields are present; provide defaults for old data
                    data.setdefault("create_time", 0)
                    data.setdefault("update_time", 0)
                    result[key] = data
                else:
                    result[key] = value
            return result

    async def get_by_id(self, id: str) -> dict[str, Any] | None:
        async with self._storage_lock:
            result = self._data.get(id)
            if result:
                # Create a copy to avoid modifying the original data
                result = dict(result)
                # Ensure time fields are present; provide defaults for old data
                result.setdefault("create_time", 0)
                result.setdefault("update_time", 0)
                # Ensure the _id field contains the clean ID
                result["_id"] = id
            return result

    async def get_by_ids(self, ids: list[str]) -> list[dict[str, Any]]:
        async with self._storage_lock:
            results = []
            for id in ids:
                data = self._data.get(id, None)
                if data:
                    # Create a copy to avoid modifying the original data
                    result = dict(data)
                    # Ensure time fields are present; provide defaults for old data
                    result.setdefault("create_time", 0)
                    result.setdefault("update_time", 0)
                    # Ensure the _id field contains the clean ID
                    result["_id"] = id
                    results.append(result)
                else:
                    results.append(None)
            return results

    async def filter_keys(self, keys: set[str]) -> set[str]:
        async with self._storage_lock:
            return set(keys) - set(self._data.keys())

    async def upsert(self, data: dict[str, dict[str, Any]]) -> None:
        """
        Important notes for in-memory storage:
        1. Changes will be persisted to disk during the next index_done_callback
        2. Update flags notify other processes that data persistence is needed
        """
        if not data:
            return

        current_time = int(time.time())  # Current Unix timestamp

        logger.debug(
            f"[{self.workspace}] Inserting {len(data)} records to {self.namespace}"
        )
        if self._storage_lock is None:
            raise StorageNotInitializedError("JsonKVStorage")
        async with self._storage_lock:
            for k, v in data.items():
                # For the text_chunks namespace, ensure the llm_cache_list field exists
                if self.namespace.endswith("text_chunks"):
                    if "llm_cache_list" not in v:
                        v["llm_cache_list"] = []

                # Add timestamps based on whether the key already exists
                if k in self._data:  # Existing key: only update update_time
                    v["update_time"] = current_time
                else:  # New key: set both create_time and update_time
                    v["create_time"] = current_time
                    v["update_time"] = current_time

                v["_id"] = k

            self._data.update(data)
            await set_all_update_flags(self.final_namespace)

    async def delete(self, ids: list[str]) -> None:
        """Delete specific records from storage by their IDs

        Important notes for in-memory storage:
        1. Changes will be persisted to disk during the next index_done_callback
        2. Update flags notify other processes that data persistence is needed

        Args:
            ids (list[str]): List of document IDs to be deleted from storage

        Returns:
            None
        """
        async with self._storage_lock:
            any_deleted = False
            for doc_id in ids:
                result = self._data.pop(doc_id, None)
                if result is not None:
                    any_deleted = True

            if any_deleted:
                await set_all_update_flags(self.final_namespace)

    async def drop(self) -> dict[str, str]:
        """Drop all data from storage and clean up resources
        This action persists the (now empty) data to disk immediately.

        This method will:
        1. Clear all data from memory
        2. Update flags to notify other processes
        3. Trigger index_done_callback to save the empty state

        Returns:
            dict[str, str]: Operation status and message
            - On success: {"status": "success", "message": "data dropped"}
            - On failure: {"status": "error", "message": "<error details>"}
        """
        try:
            async with self._storage_lock:
                self._data.clear()
                await set_all_update_flags(self.final_namespace)

            await self.index_done_callback()
            logger.info(
                f"[{self.workspace}] Process {os.getpid()} drop {self.namespace}"
            )
            return {"status": "success", "message": "data dropped"}
        except Exception as e:
            logger.error(f"[{self.workspace}] Error dropping {self.namespace}: {e}")
            return {"status": "error", "message": str(e)}

    async def _migrate_legacy_cache_structure(self, data: dict) -> dict:
        """Migrate the legacy nested cache structure to the flattened structure

        Args:
            data: Original data dictionary that may contain the legacy structure

        Returns:
            Migrated data dictionary with flattened cache keys
        """
        from lightrag.utils import generate_cache_key

        # Early return if data is empty
        if not data:
            return data

        # Check the first entry to see whether it is already in the new format
        first_key = next(iter(data.keys()))
        if ":" in first_key and len(first_key.split(":")) == 3:
            # Already in flattened format; return as-is
            return data

        migrated_data = {}
        migration_count = 0

        for key, value in data.items():
            # Check whether this is a legacy nested cache structure
            if isinstance(value, dict) and all(
                isinstance(v, dict) and "return" in v for v in value.values()
            ):
                # This looks like a legacy cache mode with a nested structure
                mode = key
                for cache_hash, cache_entry in value.items():
                    cache_type = cache_entry.get("cache_type", "extract")
                    flattened_key = generate_cache_key(mode, cache_type, cache_hash)
                    migrated_data[flattened_key] = cache_entry
                    migration_count += 1
            else:
                # Keep non-cache data and already-flattened cache data as-is
                migrated_data[key] = value

        if migration_count > 0:
            logger.info(
                f"[{self.workspace}] Migrated {migration_count} legacy cache entries to flattened structure"
            )
            # Persist the migrated data immediately
            write_json(migrated_data, self._file_name)

        return migrated_data

    async def finalize(self):
        """Finalize storage resources

        Persist cache data to disk before exiting
        """
        if self.namespace.endswith("_cache"):
            await self.index_done_callback()
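For orientation, a minimal usage sketch of JsonKVStorage's lifecycle. The constructor fields (namespace, workspace, global_config, embedding_func) and the module path are assumed from BaseKVStorage's dataclass and may differ in this branch; the shared storage layer (locks and namespace data) must also be set up by the host application before initialize() is called.

import asyncio

from lightrag.kg.json_kv_impl import JsonKVStorage  # module path is an assumption


async def main():
    kv = JsonKVStorage(
        namespace="full_docs",
        workspace="demo",
        global_config={"working_dir": "./rag_storage"},
        embedding_func=None,  # unused by the KV code paths shown above
    )
    await kv.initialize()
    await kv.upsert({"doc-1": {"content": "hello world"}})
    print(await kv.get_by_id("doc-1"))  # copy with create_time/update_time/_id
    await kv.index_done_callback()      # persists to kv_store_full_docs.json
    await kv.finalize()


asyncio.run(main())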