* feat: Implement multi-tenant architecture with tenant and knowledge base models
  - Added data models for tenants, knowledge bases, and related configurations.
  - Introduced role and permission management for users in the multi-tenant system.
  - Created a service layer for managing tenants and knowledge bases, including CRUD operations.
  - Developed a tenant-aware instance manager for LightRAG with caching and isolation features.
  - Added a migration script to transition existing workspace-based deployments to the new multi-tenant architecture.

* chore: ignore lightrag/api/webui/assets/ directory

* chore: stop tracking lightrag/api/webui/assets (ignore in .gitignore)

* feat: Initialize LightRAG Multi-Tenant Stack with PostgreSQL
  - Added README.md for project overview, setup instructions, and architecture details.
  - Created docker-compose.yml to define services: PostgreSQL, Redis, LightRAG API, and Web UI.
  - Introduced env.example for environment variable configuration.
  - Implemented init-postgres.sql for PostgreSQL schema initialization with multi-tenant support.
  - Added reproduce_issue.py for testing default tenant access via the API.

* feat: Enhance TenantSelector and update related components for improved multi-tenant support

* feat: Enhance testing capabilities and update documentation
  - Updated the Makefile to include new test commands for various modes (compatibility, isolation, multi-tenant, security, coverage, and dry-run).
  - Modified the API health check endpoint in the Makefile to reflect the new port configuration.
  - Updated QUICK_START.md and README.md to reflect changes in service URLs and ports.
  - Added environment variables for testing modes in env.example.
  - Introduced the run_all_tests.sh script to automate testing across different modes.
  - Created conftest.py for pytest configuration, including database fixtures and mock services.
  - Implemented database helper functions to streamline database operations in tests.
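The tenant-aware instance manager with caching mentioned above can be sketched as a keyed cache that hands out one engine instance per (tenant, KB) pair. Everything here — the class name, the factory callable, and the FIFO eviction policy — is an illustrative assumption, not LightRAG's actual implementation:

```python
import threading


class TenantInstanceManager:
    """Cache one instance per (tenant_id, kb_id) pair so tenants stay isolated."""

    def __init__(self, factory, max_instances=64):
        self._factory = factory          # callable building a new instance for a pair
        self._cache = {}                 # (tenant_id, kb_id) -> instance
        self._lock = threading.Lock()
        self._max_instances = max_instances

    def get(self, tenant_id, kb_id):
        key = (tenant_id, kb_id)
        with self._lock:
            if key not in self._cache:
                if len(self._cache) >= self._max_instances:
                    # Simple eviction: drop the oldest entry (FIFO via dict order)
                    self._cache.pop(next(iter(self._cache)))
                self._cache[key] = self._factory(tenant_id, kb_id)
            return self._cache[key]
```

Because instances are keyed by both tenant and KB, a request carrying one tenant's context can never be served another tenant's cached instance, which is the isolation property the commit describes.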
  - Added test collection hooks to skip tests based on the current MULTITENANT_MODE.

* feat: Implement multi-tenant support with demo mode enabled by default
  - Added multi-tenant configuration to the environment and Docker setup.
  - Created pre-configured demo tenants (acme-corp and techstart) for testing.
  - Updated API endpoints to support tenant-specific data access.
  - Enhanced Makefile commands for better service management and database operations.
  - Introduced a user-tenant membership system with role-based access control.
  - Added comprehensive documentation for multi-tenant setup and usage.
  - Fixed issues with document visibility in multi-tenant environments.
  - Implemented the necessary database migrations for user memberships and legacy support.

* feat(audit): Add final audit report for multi-tenant implementation
  - Documented the overall assessment, architecture overview, test results, security findings, and recommendations.
  - Included detailed findings on critical security issues and architectural concerns.

* fix(security): Implement security fixes based on audit findings
  - Removed the global RAG fallback and enforced strict tenant context.
  - Configured super-admin access and required user authentication for tenant access.
  - Cleared localStorage on logout and improved error handling in the WebUI.

* chore(logs): Create task logs for audit and security fixes implementation
  - Documented actions, decisions, and next steps for both the audit and the security fixes.
  - Summarized test results and remaining recommendations.

* chore(scripts): Enhance development stack management scripts
  - Added scripts for cleaning, starting, and stopping the development stack.
  - Improved output messages and ensured graceful shutdown of services.

* feat(starter): Initialize PostgreSQL with AGE extension support
  - Created initialization scripts for PostgreSQL extensions, including uuid-ossp, vector, and AGE.
  - Ensured successful installation and verification of the extensions.
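The user-tenant membership system with role-based access control described above boils down to two lookups: the user's role within a tenant, and the minimum role an action requires. A minimal sketch — roles, the permission table, and the membership data are all hypothetical examples, not the project's real schema:

```python
from enum import Enum


class Role(Enum):
    VIEWER = 1
    MEMBER = 2
    ADMIN = 3


# Hypothetical permission table: minimum role required for each action
REQUIRED_ROLE = {
    "query": Role.VIEWER,
    "ingest": Role.MEMBER,
    "manage_kb": Role.ADMIN,
}

# Hypothetical memberships: user -> {tenant_id: role}
MEMBERSHIPS = {
    "alice": {"acme-corp": Role.ADMIN},
    "bob": {"acme-corp": Role.VIEWER, "techstart": Role.MEMBER},
}


def is_allowed(user, tenant_id, action):
    """Strict tenant context: no membership in the tenant means no access,
    and there is deliberately no global fallback."""
    role = MEMBERSHIPS.get(user, {}).get(tenant_id)
    if role is None:
        return False
    return role.value >= REQUIRED_ROLE[action].value
```

Note that denying access when no membership exists (rather than falling back to a default tenant) mirrors the "removed global RAG fallback" security fix in the same changeset.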
* feat: Implement auto-select for first tenant and KB on initial load in WebUI
  - Removed WEBUI_INITIAL_STATE_FIX.md, as the issue is resolved.
  - Added a useTenantInitialization hook to automatically select the first available tenant and KB on app load.
  - Integrated the new hook into the Root component of the WebUI.
  - Updated the RetrievalTesting component to ensure a KB is selected before allowing user interaction.
  - Created end-to-end tests for multi-tenant isolation and real service interactions.
  - Added scripts for starting, stopping, and cleaning the development stack.
  - Enhanced API and tenant routes to support tenant-specific pipeline status initialization.
  - Updated the backend URL constants to reflect the correct port.
  - Improved error handling and logging in various components.

* feat: Add multi-tenant support with enhanced E2E testing scripts and client functionality

* update client

* Add integration and unit tests for multi-tenant API, models, security, and storage
  - Implement integration tests for tenant and knowledge base management endpoints in `test_tenant_api_routes.py`.
  - Create unit tests for tenant isolation, model validation, and role permissions in `test_tenant_models.py`.
  - Add security tests to enforce role-based permissions and context validation in `test_tenant_security.py`.
  - Develop tests for tenant-aware storage operations and context isolation in `test_tenant_storage_phase3.py`.

* feat(e2e): Implement OpenAI model support and database reset functionality

* Add comprehensive test suite for gpt-5-nano compatibility
  - Introduced tests for parameter normalization, embeddings, and entity extraction.
  - Implemented direct API testing for gpt-5-nano.
  - Validated .env configuration loading and OpenAI API connectivity.
  - Analyzed reasoning token overhead with various token limits.
  - Documented test procedures and expected outcomes in README files.
  - Ensured all tests pass for production readiness.
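The useTenantInitialization hook is React code, but its selection logic can be sketched language-neutrally in Python: pick the first tenant from the tenant list, then the first KB within it, and report "nothing selected" when either list is empty (which is what lets RetrievalTesting block interaction). The callables are injected here to keep the sketch self-contained; in the WebUI they would wrap the tenant and knowledge-base list endpoints:

```python
def auto_select_initial_context(list_tenants, list_kbs):
    """Mirror of the WebUI's initial auto-select: first tenant, then its first KB.

    list_tenants() -> list of {"tenant_id": ...}
    list_kbs(tenant_id) -> list of {"kb_id": ...}
    Returns (tenant_id, kb_id); either may be None if nothing is available.
    """
    tenants = list_tenants()
    if not tenants:
        # No tenant to select; the UI should keep interaction disabled.
        return None, None
    tenant_id = tenants[0]["tenant_id"]
    kbs = list_kbs(tenant_id)
    kb_id = kbs[0]["kb_id"] if kbs else None
    return tenant_id, kb_id
```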
* kg(postgres_impl): ensure AGE extension is loaded in the session and configure graph initialization

* dev: add hybrid dev helper scripts, Makefile, docker-compose.dev-db, and local development docs

* feat(dev): add dev helper scripts and local development documentation for the hybrid setup

* feat(multi-tenant): add detailed specifications and logs for multi-tenant improvements, including UX, backend handling, and the ingestion pipeline

* feat(migration): add generated tenant/kb columns, indexes, and triggers; drop unused tables; update schema and docs

* test(backward-compat): adapt tests to the new StorageNameSpace/TenantService APIs (use concrete dummy storages)

* chore: multi-tenant and UX updates (docs, webui, storage, tenant service adjustments)

* tests: stabilize integration tests and skip external services; fix multi-tenant API behavior and idempotency
  - gpt5_nano_compatibility: add pytest-asyncio markers, skip when the OPENAI key is missing, prevent module-level asyncio.run collection, and add a conftest.
  - Ollama tests: add a server availability check and skip markers; avoid pytest collection warnings by renaming helper classes.
  - Graph storage tests: rename interactive test functions to avoid pytest collection.
  - Document & Tenant routes: support external_ids for idempotency; ensure HTTPExceptions are re-raised.
  - LightRAG core: support external_ids in apipeline_enqueue_documents and the idempotency logic.
  - Update tests to match the API changes (tenant routes & document routes).
  - Add logs and scripts for inspection and audit.
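The external_ids idempotency mentioned for apipeline_enqueue_documents can be sketched as a set-membership check before enqueueing: a document re-submitted with an already-seen external ID becomes a no-op. The function name and data shapes here are illustrative assumptions, not LightRAG's actual signature:

```python
def enqueue_documents(queue, seen_external_ids, texts, external_ids=None):
    """Idempotent enqueue sketch: skip any document whose external_id was seen before.

    queue: list to append work items to; seen_external_ids: mutable set acting as
    the dedup store (a DB unique index in a real system). Returns the number enqueued.
    """
    external_ids = external_ids or [None] * len(texts)
    enqueued = 0
    for text, ext_id in zip(texts, external_ids):
        if ext_id is not None and ext_id in seen_external_ids:
            continue  # already ingested; re-submitting is a no-op
        if ext_id is not None:
            seen_external_ids.add(ext_id)
        queue.append({"text": text, "external_id": ext_id})
        enqueued += 1
    return enqueued
```

Documents submitted without an external ID are always enqueued, so idempotency is opt-in per document, which matches how client retries would use it.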
281 lines
10 KiB
Python
import requests
import time
import os
import json
import sys

# Colors for output
GREEN = "\033[92m"
RED = "\033[91m"
YELLOW = "\033[93m"
BLUE = "\033[94m"
CYAN = "\033[96m"
DIM = "\033[2m"
RESET = "\033[0m"
BOLD = "\033[1m"


def print_success(msg):
    print(f"{GREEN}✅ {msg}{RESET}")


def print_error(msg):
    print(f"{RED}❌ {msg}{RESET}")


def print_warning(msg):
    print(f"{YELLOW}⚠️ {msg}{RESET}")


def print_info(msg):
    print(f"{BLUE}ℹ️ {msg}{RESET}")


def print_step(msg):
    print(f"\n{BOLD}👉 {msg}{RESET}")


class LightRAGClient:
    def __init__(self, base_url, username, password):
        self.base_url = base_url
        self.username = username
        self.password = password
        self.token = None
        self.session = requests.Session()

    def login(self):
        print_step("Authenticating...")
        try:
            # Try form data first (FastAPI OAuth2PasswordRequestForm)
            response = self.session.post(
                f"{self.base_url}/login",
                data={"username": self.username, "password": self.password},
            )
            if response.status_code != 200:
                # Fall back to a JSON body if form data fails
                response = self.session.post(
                    f"{self.base_url}/login",
                    json={"username": self.username, "password": self.password},
                )

            if response.status_code != 200:
                raise Exception(f"Login failed: {response.text}")

            data = response.json()
            self.token = data.get("access_token")
            self.session.headers.update({"Authorization": f"Bearer {self.token}"})
            print_success(f"Authenticated as {self.username}")
        except Exception as e:
            print_error(f"Authentication failed: {e}")
            sys.exit(1)

    def create_tenant(self, name, description):
        print_step(f"Creating Tenant: {name}")
        response = self.session.post(
            f"{self.base_url}/api/v1/tenants",
            json={"name": name, "description": description},
        )
        if response.status_code not in [200, 201]:
            # If the tenant already exists, fall through and resolve its ID from the list
            if response.status_code == 409 or "already exists" in response.text:
                print(f"Tenant {name} might already exist, fetching list...")
            else:
                raise Exception(f"Failed to create tenant: {response.text}")

        list_resp = self.session.get(f"{self.base_url}/api/v1/tenants")
        tenants = list_resp.json()
        items = tenants.get("items", tenants) if isinstance(tenants, dict) else tenants

        for t in items:
            if t.get("name") == name:
                print_success(f"Tenant '{name}' ID: {t['tenant_id']}")
                return t["tenant_id"]

        if response.status_code in [200, 201]:
            data = response.json()
            print_success(f"Tenant '{name}' created with ID: {data['tenant_id']}")
            return data["tenant_id"]

        raise Exception(f"Could not resolve Tenant ID for {name}")

    def create_kb(self, tenant_id, name, description):
        print_step(f"Creating KB '{name}' for Tenant '{tenant_id}'")
        headers = {"X-Tenant-ID": tenant_id}
        response = self.session.post(
            f"{self.base_url}/api/v1/knowledge-bases",
            json={"name": name, "description": description},
            headers=headers,
        )

        # Resolve the KB ID from the list, whether it was just created or already existed
        list_resp = self.session.get(
            f"{self.base_url}/api/v1/knowledge-bases", headers=headers
        )
        kbs = list_resp.json()
        items = kbs.get("items", kbs) if isinstance(kbs, dict) else kbs

        for kb in items:
            if kb.get("name") == name:
                print_success(f"KB '{name}' ID: {kb['kb_id']}")
                return kb["kb_id"]

        if response.status_code in [200, 201]:
            data = response.json()
            return data["kb_id"]

        raise Exception(f"Could not resolve KB ID for {name}")

    def ingest_text(self, tenant_id, kb_id, text):
        print_step(f"Ingesting text into Tenant: {tenant_id}, KB: {kb_id}")
        headers = {"X-Tenant-ID": tenant_id, "X-KB-ID": kb_id}
        response = self.session.post(
            f"{self.base_url}/documents/text",
            json={"text": text},
            headers=headers,
        )
        if response.status_code != 200:
            raise Exception(f"Ingestion failed: {response.text}")
        print_success("Text ingested successfully")

    def wait_for_indexing(self, tenant_id, kb_id, timeout=300):
        print_step(f"Waiting for indexing in Tenant: {tenant_id}, KB: {kb_id}...")
        headers = {"X-Tenant-ID": tenant_id, "X-KB-ID": kb_id}
        start_time = time.time()
        last_status = ""
        poll_count = 0

        while time.time() - start_time < timeout:
            poll_count += 1
            elapsed = int(time.time() - start_time)

            response = self.session.get(f"{self.base_url}/documents", headers=headers)
            if response.status_code != 200:
                print(f"  [{elapsed}s] Error checking documents: {response.text}")
                time.sleep(2)
                continue

            data = response.json()

            # Normalize the document list across the possible response shapes
            docs = []
            if "statuses" in data:
                for status_key, status_list in data["statuses"].items():
                    docs.extend(status_list)
            elif "items" in data:
                docs = data["items"]
            elif isinstance(data, list):
                docs = data

            if not docs:
                if poll_count % 5 == 1:  # Print roughly every 10 seconds
                    print(f"  [{elapsed}s] No documents found yet, waiting...")
                time.sleep(2)
                continue

            # Count documents per status
            status_counts = {}
            for doc in docs:
                if isinstance(doc, str):
                    status = "pending"
                else:
                    status = doc.get("status", "unknown")
                status_counts[status] = status_counts.get(status, 0) + 1

            current_status = ", ".join(
                f"{k}: {v}" for k, v in sorted(status_counts.items())
            )

            # Only print when the status changed, or roughly every 10 seconds
            if current_status != last_status or poll_count % 5 == 1:
                print(f"  [{elapsed}s] Documents: {current_status}")
                last_status = current_status

            all_processed = all(
                isinstance(doc, dict) and doc.get("status") == "processed"
                for doc in docs
            )

            if all_processed and len(docs) > 0:
                print_success(f"All {len(docs)} document(s) processed in {elapsed}s")
                return

            time.sleep(2)

        raise Exception(
            f"Timeout ({timeout}s) waiting for indexing. Last status: {last_status}"
        )

    def clear_cache(self, tenant_id, kb_id):
        print_step(f"Clearing cache in Tenant: {tenant_id}, KB: {kb_id}")
        headers = {"X-Tenant-ID": tenant_id, "X-KB-ID": kb_id}
        response = self.session.post(
            f"{self.base_url}/documents/clear_cache", json={}, headers=headers
        )
        if response.status_code != 200:
            raise Exception(f"Clear cache failed: {response.text}")
        print_success("Cache cleared")

    def wait_for_pipeline(self, tenant_id, kb_id, timeout=60):
        print_step(f"Waiting for pipeline to be idle in Tenant: {tenant_id}, KB: {kb_id}...")
        headers = {"X-Tenant-ID": tenant_id, "X-KB-ID": kb_id}
        start_time = time.time()
        while time.time() - start_time < timeout:
            response = self.session.get(
                f"{self.base_url}/documents/pipeline_status", headers=headers
            )
            if response.status_code != 200:
                print(f"Error checking pipeline status: {response.text}")
                time.sleep(2)
                continue

            data = response.json()
            if not data.get("busy", False):
                print_success("Pipeline is idle")
                return

            time.sleep(1)

        raise Exception("Timeout waiting for pipeline to be idle")

    def query(self, tenant_id, kb_id, query_text, verbose=True):
        if verbose:
            print_step(f"Querying '{query_text}' in Tenant: {tenant_id}, KB: {kb_id}")
        headers = {"X-Tenant-ID": tenant_id, "X-KB-ID": kb_id}

        start_time = time.time()
        response = self.session.post(
            f"{self.base_url}/query",
            json={"query": query_text, "mode": "global"},
            headers=headers,
        )
        elapsed = time.time() - start_time

        if response.status_code != 200:
            raise Exception(f"Query failed: {response.text}")

        result = response.json()
        response_text = result.get("response", "")
        if verbose:
            print(
                f"  Response ({elapsed:.1f}s): {response_text[:150]}"
                f"{'...' if len(response_text) > 150 else ''}"
            )
        return response_text

    def delete_document(self, tenant_id, kb_id, doc_id):
        print_step(f"Deleting document '{doc_id}' in Tenant: {tenant_id}, KB: {kb_id}")
        headers = {"X-Tenant-ID": tenant_id, "X-KB-ID": kb_id}
        # requests.Session has no .delete(json=...) shortcut pre-2.x, so use .request
        response = self.session.request(
            "DELETE",
            f"{self.base_url}/documents/delete_document",
            json={"doc_ids": [doc_id]},
            headers=headers,
        )
        if response.status_code != 200:
            raise Exception(f"Deletion failed: {response.text}")
        print_success(f"Document {doc_id} deleted successfully")

    def get_documents(self, tenant_id, kb_id):
        headers = {"X-Tenant-ID": tenant_id, "X-KB-ID": kb_id}
        response = self.session.get(f"{self.base_url}/documents", headers=headers)
        if response.status_code != 200:
            raise Exception(f"Failed to get documents: {response.text}")

        data = response.json()

        # Normalize the document list across the possible response shapes
        docs = []
        if "statuses" in data:
            for status_key, status_list in data["statuses"].items():
                docs.extend(status_list)
        elif "items" in data:
            docs = data["items"]
        elif isinstance(data, list):
            docs = data

        print(f"DEBUG: get_documents returned {len(docs)} docs")
        for d in docs:
            print(f"DEBUG: Doc: {d}")

        return docs
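Both wait_for_indexing and wait_for_pipeline above are instances of the same poll-until-condition pattern; if the script grows more wait methods, factoring it out avoids duplicating the timeout bookkeeping. A sketch with injectable clock and sleep (names are illustrative, not part of the script):

```python
import time


def poll_until(check, timeout=60, interval=2, clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns a truthy value or the timeout elapses.

    Returns the truthy result from check(); raises TimeoutError otherwise.
    clock and sleep are injectable so the loop can be tested without real waiting.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        result = check()
        if result:
            return result
        sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")
```

wait_for_pipeline would then reduce to one call, e.g. `poll_until(lambda: not self._pipeline_busy(tenant_id, kb_id), timeout=60)`, with the HTTP status check living inside the hypothetical `_pipeline_busy` helper.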