* feat: Implement multi-tenant architecture with tenant and knowledge base models
  - Added data models for tenants, knowledge bases, and related configurations.
  - Introduced role and permission management for users in the multi-tenant system.
  - Created a service layer for managing tenants and knowledge bases, including CRUD operations.
  - Developed a tenant-aware instance manager for LightRAG with caching and isolation features.
  - Added a migration script to transition existing workspace-based deployments to the new multi-tenant architecture.
* chore: ignore lightrag/api/webui/assets/ directory
* chore: stop tracking lightrag/api/webui/assets (ignore in .gitignore)
* feat: Initialize LightRAG Multi-Tenant Stack with PostgreSQL
  - Added README.md for project overview, setup instructions, and architecture details.
  - Created docker-compose.yml to define services: PostgreSQL, Redis, LightRAG API, and Web UI.
  - Introduced env.example for environment variable configuration.
  - Implemented init-postgres.sql for PostgreSQL schema initialization with multi-tenant support.
  - Added reproduce_issue.py for testing default tenant access via API.
* feat: Enhance TenantSelector and update related components for improved multi-tenant support
* feat: Enhance testing capabilities and update documentation
  - Updated Makefile to include new test commands for various modes (compatibility, isolation, multi-tenant, security, coverage, and dry-run).
  - Modified API health check endpoint in Makefile to reflect new port configuration.
  - Updated QUICK_START.md and README.md to reflect changes in service URLs and ports.
  - Added environment variables for testing modes in env.example.
  - Introduced run_all_tests.sh script to automate testing across different modes.
  - Created conftest.py for pytest configuration, including database fixtures and mock services.
  - Implemented database helper functions for streamlined database operations in tests.
  - Added test collection hooks to skip tests based on the current MULTITENANT_MODE.
* feat: Implement multi-tenant support with demo mode enabled by default
  - Added multi-tenant configuration to the environment and Docker setup.
  - Created pre-configured demo tenants (acme-corp and techstart) for testing.
  - Updated API endpoints to support tenant-specific data access.
  - Enhanced Makefile commands for better service management and database operations.
  - Introduced user-tenant membership system with role-based access control.
  - Added comprehensive documentation for multi-tenant setup and usage.
  - Fixed issues with document visibility in multi-tenant environments.
  - Implemented necessary database migrations for user memberships and legacy support.
* feat(audit): Add final audit report for multi-tenant implementation
  - Documented overall assessment, architecture overview, test results, security findings, and recommendations.
  - Included detailed findings on critical security issues and architectural concerns.
* fix(security): Implement security fixes based on audit findings
  - Removed global RAG fallback and enforced strict tenant context.
  - Configured super-admin access and required user authentication for tenant access.
  - Cleared localStorage on logout and improved error handling in WebUI.
* chore(logs): Create task logs for audit and security fixes implementation
  - Documented actions, decisions, and next steps for both audit and security fixes.
  - Summarized test results and remaining recommendations.
* chore(scripts): Enhance development stack management scripts
  - Added scripts for cleaning, starting, and stopping the development stack.
  - Improved output messages and ensured graceful shutdown of services.
* feat(starter): Initialize PostgreSQL with AGE extension support
  - Created initialization scripts for PostgreSQL extensions including uuid-ossp, vector, and AGE.
  - Ensured successful installation and verification of extensions.
* feat: Implement auto-select for first tenant and KB on initial load in WebUI
  - Removed WEBUI_INITIAL_STATE_FIX.md as the issue is resolved.
  - Added useTenantInitialization hook to automatically select the first available tenant and KB on app load.
  - Integrated the new hook into the Root component of the WebUI.
  - Updated RetrievalTesting component to ensure a KB is selected before allowing user interaction.
  - Created end-to-end tests for multi-tenant isolation and real service interactions.
  - Added scripts for starting, stopping, and cleaning the development stack.
  - Enhanced API and tenant routes to support tenant-specific pipeline status initialization.
  - Updated constants for backend URL to reflect the correct port.
  - Improved error handling and logging in various components.
* feat: Add multi-tenant support with enhanced E2E testing scripts and client functionality
* update client
* Add integration and unit tests for multi-tenant API, models, security, and storage
  - Implement integration tests for tenant and knowledge base management endpoints in `test_tenant_api_routes.py`.
  - Create unit tests for tenant isolation, model validation, and role permissions in `test_tenant_models.py`.
  - Add security tests to enforce role-based permissions and context validation in `test_tenant_security.py`.
  - Develop tests for tenant-aware storage operations and context isolation in `test_tenant_storage_phase3.py`.
* feat(e2e): Implement OpenAI model support and database reset functionality
* Add comprehensive test suite for gpt-5-nano compatibility
  - Introduced tests for parameter normalization, embeddings, and entity extraction.
  - Implemented direct API testing for gpt-5-nano.
  - Validated .env configuration loading and OpenAI API connectivity.
  - Analyzed reasoning token overhead with various token limits.
  - Documented test procedures and expected outcomes in README files.
  - Ensured all tests pass for production readiness.
* kg(postgres_impl): ensure AGE extension is loaded in session and configure graph initialization
* dev: add hybrid dev helper scripts, Makefile, docker-compose.dev-db and local development docs
* feat(dev): add dev helper scripts and local development documentation for hybrid setup
* feat(multi-tenant): add detailed specifications and logs for multi-tenant improvements, including UX, backend handling, and ingestion pipeline
* feat(migration): add generated tenant/kb columns, indexes, triggers; drop unused tables; update schema and docs
* test(backward-compat): adapt tests to new StorageNameSpace/TenantService APIs (use concrete dummy storages)
* chore: multi-tenant and UX updates (docs, webui, storage, tenant service adjustments)
* tests: stabilize integration tests + skip external services; fix multi-tenant API behavior and idempotency
  - gpt5_nano_compatibility: add pytest-asyncio markers, skip when OPENAI key missing, prevent module-level asyncio.run collection, add conftest
  - Ollama tests: add server availability check and skip markers; avoid pytest collection warnings by renaming helper classes
  - Graph storage tests: rename interactive test functions to avoid pytest collection
  - Document & Tenant routes: support external_ids for idempotency; ensure HTTPExceptions are re-raised
  - LightRAG core: support external_ids in apipeline_enqueue_documents and idempotent logic
  - Tests updated to match API changes (tenant routes & document routes)
  - Add logs and scripts for inspection and audit
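The idempotency change noted above (documents enqueued with `external_ids` are not re-ingested) can be sketched as a minimal dedupe step. This is an illustrative sketch only: `enqueue_unique` and `seen_ids` are hypothetical names, not part of the actual LightRAG API.

```python
def enqueue_unique(docs, seen_ids):
    """Return only documents whose external_id has not been enqueued yet.

    `docs` is a list of {"external_id": ..., "content": ...} dicts and
    `seen_ids` is a set of ids already ingested. Both names are
    illustrative, not the real LightRAG pipeline types.
    """
    fresh = []
    for doc in docs:
        ext_id = doc["external_id"]
        if ext_id in seen_ids:
            continue  # already ingested: skip to keep enqueueing idempotent
        seen_ids.add(ext_id)
        fresh.append(doc)
    return fresh
```

Calling it twice with the same batch yields the documents once and then an empty list, which is the behavior the commit describes for repeated enqueue requests.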
230 lines
7.3 KiB
Python
import sys

if sys.version_info < (3, 9):
    from typing import AsyncIterator
else:
    from collections.abc import AsyncIterator

import pipmaster as pm  # Pipmaster for dynamic library install

# Install the ollama SDK on demand if it is missing
if not pm.is_installed("ollama"):
    pm.install("ollama")

import ollama

from tenacity import (
    retry,
    stop_after_attempt,
    wait_exponential,
    retry_if_exception_type,
)
from lightrag.exceptions import (
    APIConnectionError,
    RateLimitError,
    APITimeoutError,
)
from lightrag.api import __api_version__

import numpy as np
from typing import Union
from lightrag.utils import logger
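The `@retry` decorator below retries failed calls with exponential backoff bounded between 4 and 10 seconds. As a rough stdlib sketch of what that wait schedule looks like (an approximation of the configured policy, not tenacity's exact internals):

```python
def backoff_schedule(attempts, multiplier=1, min_wait=4, max_wait=10):
    """Approximate the bounded exponential waits of the retry policy below."""
    waits = []
    for attempt in range(1, attempts + 1):
        wait = multiplier * (2 ** (attempt - 1))  # 1, 2, 4, 8, ...
        waits.append(max(min_wait, min(wait, max_wait)))  # clamp to [4, 10]
    return waits
```

With the defaults above, four attempts wait roughly 4, 4, 4, then 8 seconds: the floor dominates early retries, and the cap prevents unbounded waits.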
@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=4, max=10),
    retry=retry_if_exception_type(
        (RateLimitError, APIConnectionError, APITimeoutError)
    ),
)
async def _ollama_model_if_cache(
    model,
    prompt,
    system_prompt=None,
    history_messages=None,
    enable_cot: bool = False,
    **kwargs,
) -> Union[str, AsyncIterator[str]]:
    # Avoid a mutable default argument for the history list
    if history_messages is None:
        history_messages = []
    if enable_cot:
        logger.debug("enable_cot=True is not supported for ollama and will be ignored.")
    stream = bool(kwargs.get("stream"))

    kwargs.pop("max_tokens", None)
    # kwargs.pop("response_format", None)  # allow json
    host = kwargs.pop("host", None)
    timeout = kwargs.pop("timeout", None)
    if timeout == 0:
        timeout = None
    kwargs.pop("hashing_kv", None)
    api_key = kwargs.pop("api_key", None)
    headers = {
        "Content-Type": "application/json",
        "User-Agent": f"LightRAG/{__api_version__}",
    }
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"

    ollama_client = ollama.AsyncClient(host=host, timeout=timeout, headers=headers)

    try:
        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})
        messages.extend(history_messages)
        messages.append({"role": "user", "content": prompt})

        response = await ollama_client.chat(model=model, messages=messages, **kwargs)
        if stream:
            # A streamed response cannot be cached, so forward chunks as they arrive

            async def inner():
                try:
                    async for chunk in response:
                        yield chunk["message"]["content"]
                except Exception as e:
                    logger.error(f"Error in stream response: {str(e)}")
                    raise
                finally:
                    try:
                        await ollama_client._client.aclose()
                        logger.debug("Successfully closed Ollama client for streaming")
                    except Exception as close_error:
                        logger.warning(f"Failed to close Ollama client: {close_error}")

            return inner()
        else:
            model_response = response["message"]["content"]
            # If the model wraps its thoughts in a specific tag, that text is
            # not needed for the final response and can simply be trimmed here.
            return model_response
    except Exception as e:
        try:
            await ollama_client._client.aclose()
            logger.debug("Successfully closed Ollama client after exception")
        except Exception as close_error:
            logger.warning(
                f"Failed to close Ollama client after exception: {close_error}"
            )
        raise e
    finally:
        if not stream:
            try:
                await ollama_client._client.aclose()
                logger.debug(
                    "Successfully closed Ollama client for non-streaming response"
                )
            except Exception as close_error:
                logger.warning(
                    f"Failed to close Ollama client in finally block: {close_error}"
                )
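The message assembly above (system prompt first, then history, then the user turn) can be isolated as a small pure function. `build_messages` is an illustrative name, not part of the LightRAG API:

```python
def build_messages(prompt, system_prompt=None, history_messages=()):
    """Assemble a chat message list in the order Ollama expects:
    optional system message, prior turns, then the current user prompt."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.extend(history_messages)
    messages.append({"role": "user", "content": prompt})
    return messages
```

Keeping this logic separate makes it trivial to unit-test without a running Ollama server.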
async def ollama_model_complete(
    prompt,
    system_prompt=None,
    history_messages=None,
    enable_cot: bool = False,
    keyword_extraction=False,
    **kwargs,
) -> Union[str, AsyncIterator[str]]:
    # Honor both the named parameter and a value passed via kwargs
    # (the kwargs value previously discarded the named parameter)
    keyword_extraction = kwargs.pop("keyword_extraction", keyword_extraction)
    if keyword_extraction:
        kwargs["format"] = "json"
    model_name = kwargs["hashing_kv"].global_config["llm_model_name"]
    return await _ollama_model_if_cache(
        model_name,
        prompt,
        system_prompt=system_prompt,
        history_messages=history_messages or [],
        enable_cot=enable_cot,
        **kwargs,
    )
async def ollama_embed(texts: list[str], embed_model, **kwargs) -> np.ndarray:
    """
    Generate embeddings using the Ollama API.

    Uses httpx directly instead of ollama.AsyncClient to work around a bug in
    ollama SDK v0.6.1 where the host parameter is not properly used for the
    embed endpoint.
    """
    import httpx

    api_key = kwargs.pop("api_key", None)
    headers = {
        "Content-Type": "application/json",
        "User-Agent": f"LightRAG/{__api_version__}",
    }
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"

    host = kwargs.pop("host", None)
    timeout = kwargs.pop("timeout", None)

    # Ensure the host has a scheme and fall back to the default local server
    if host and not host.startswith("http"):
        host = f"http://{host}"
    if not host:
        host = "http://localhost:11434"

    # Validate host format to catch any corruption
    if not isinstance(host, str) or not host.startswith("http"):
        logger.error(
            f"Invalid host format for Ollama embed: {host} (type: {type(host).__name__})"
        )
        raise ValueError(f"Invalid host format for Ollama: {host}")

    logger.info(f"Ollama embed called with host: {host}, model: {embed_model}")

    # Use httpx directly to avoid the ollama SDK bug with the embed endpoint
    async with httpx.AsyncClient(timeout=timeout if timeout else 120.0) as client:
        try:
            options = kwargs.pop("options", {})

            # Construct the embed API endpoint
            embed_url = f"{host}/api/embed"

            # Prepare the request payload
            payload = {
                "model": embed_model,
                "input": texts,
            }
            if options:
                payload["options"] = options

            logger.debug(f"Sending embed request to {embed_url}")

            response = await client.post(embed_url, json=payload, headers=headers)

            # Raise on HTTP error status codes
            response.raise_for_status()

            data = response.json()

            if "embeddings" not in data:
                raise ValueError(f"Invalid response from Ollama: {data}")

            return np.array(data["embeddings"])

        except httpx.HTTPStatusError as e:
            error_msg = (
                f"HTTP error from Ollama: {e.response.status_code} - {e.response.text}"
            )
            logger.error(error_msg)
            raise Exception(error_msg) from e
        except httpx.RequestError as e:
            error_msg = f"Connection error to Ollama at {host}: {str(e)}"
            logger.error(error_msg)
            raise Exception(error_msg) from e
        except Exception as e:
            logger.error(f"Error in ollama_embed: {str(e)}")
            raise
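The request and response shapes handled by `ollama_embed` can be exercised without a live server. The two helpers below mirror the payload construction and response validation above; `make_embed_payload` and `parse_embed_response` are hypothetical names introduced for illustration:

```python
def make_embed_payload(model, texts, options=None):
    """Build the JSON body posted to Ollama's /api/embed endpoint."""
    payload = {"model": model, "input": texts}
    if options:
        payload["options"] = options  # optional model options, e.g. num_ctx
    return payload


def parse_embed_response(data):
    """Extract the embedding matrix, rejecting malformed responses
    the same way ollama_embed does (missing "embeddings" key)."""
    if "embeddings" not in data:
        raise ValueError(f"Invalid response from Ollama: {data}")
    return data["embeddings"]  # list of vectors, one per input text
```

For example, a batch of two input texts should come back as a list of two vectors of equal dimension, which is what the `np.array(...)` call in `ollama_embed` relies on.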