* feat: Implement multi-tenant architecture with tenant and knowledge base models
  - Added data models for tenants, knowledge bases, and related configurations.
  - Introduced role and permission management for users in the multi-tenant system.
  - Created a service layer for managing tenants and knowledge bases, including CRUD operations.
  - Developed a tenant-aware instance manager for LightRAG with caching and isolation features.
  - Added a migration script to transition existing workspace-based deployments to the new multi-tenant architecture.

* chore: ignore lightrag/api/webui/assets/ directory

* chore: stop tracking lightrag/api/webui/assets (ignore in .gitignore)

* feat: Initialize LightRAG Multi-Tenant Stack with PostgreSQL
  - Added README.md for project overview, setup instructions, and architecture details.
  - Created docker-compose.yml to define services: PostgreSQL, Redis, LightRAG API, and Web UI.
  - Introduced env.example for environment variable configuration.
  - Implemented init-postgres.sql for PostgreSQL schema initialization with multi-tenant support.
  - Added reproduce_issue.py for testing default tenant access via the API.

* feat: Enhance TenantSelector and update related components for improved multi-tenant support

* feat: Enhance testing capabilities and update documentation
  - Updated Makefile to include new test commands for various modes (compatibility, isolation, multi-tenant, security, coverage, and dry-run).
  - Modified the API health check endpoint in the Makefile to reflect the new port configuration.
  - Updated QUICK_START.md and README.md to reflect changes in service URLs and ports.
  - Added environment variables for testing modes in env.example.
  - Introduced a run_all_tests.sh script to automate testing across different modes.
  - Created conftest.py for pytest configuration, including database fixtures and mock services.
  - Implemented database helper functions for streamlined database operations in tests.
  - Added test collection hooks to skip tests based on the current MULTITENANT_MODE.

* feat: Implement multi-tenant support with demo mode enabled by default
  - Added multi-tenant configuration to the environment and Docker setup.
  - Created pre-configured demo tenants (acme-corp and techstart) for testing.
  - Updated API endpoints to support tenant-specific data access.
  - Enhanced Makefile commands for better service management and database operations.
  - Introduced a user-tenant membership system with role-based access control.
  - Added comprehensive documentation for multi-tenant setup and usage.
  - Fixed issues with document visibility in multi-tenant environments.
  - Implemented necessary database migrations for user memberships and legacy support.

* feat(audit): Add final audit report for multi-tenant implementation
  - Documented overall assessment, architecture overview, test results, security findings, and recommendations.
  - Included detailed findings on critical security issues and architectural concerns.

* fix(security): Implement security fixes based on audit findings
  - Removed the global RAG fallback and enforced strict tenant context.
  - Configured super-admin access and required user authentication for tenant access.
  - Cleared localStorage on logout and improved error handling in the WebUI.

* chore(logs): Create task logs for audit and security fixes implementation
  - Documented actions, decisions, and next steps for both the audit and the security fixes.
  - Summarized test results and remaining recommendations.

* chore(scripts): Enhance development stack management scripts
  - Added scripts for cleaning, starting, and stopping the development stack.
  - Improved output messages and ensured graceful shutdown of services.

* feat(starter): Initialize PostgreSQL with AGE extension support
  - Created initialization scripts for PostgreSQL extensions including uuid-ossp, vector, and AGE.
  - Ensured successful installation and verification of the extensions.
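The tenant-aware instance manager with caching and isolation described above can be sketched roughly as follows. This is an illustrative pattern only, under assumed names; `TenantInstanceManager` and its `factory` callback are hypothetical and do not reflect LightRAG's actual API.

```python
import asyncio


class TenantInstanceManager:
    """Caches one engine instance per (tenant, knowledge-base) pair.

    Illustrative sketch: the real manager would build LightRAG instances;
    here the factory just returns a plain dict standing in for one.
    """

    def __init__(self, factory):
        self._factory = factory      # builds an instance for a tenant/KB pair
        self._instances = {}         # (tenant_id, kb_id) -> cached instance
        self._lock = asyncio.Lock()  # guard concurrent first-time creation

    async def get(self, tenant_id: str, kb_id: str):
        key = (tenant_id, kb_id)
        async with self._lock:
            if key not in self._instances:
                # Isolation: each tenant/KB pair gets its own instance
                self._instances[key] = self._factory(tenant_id, kb_id)
            return self._instances[key]


async def main():
    mgr = TenantInstanceManager(lambda t, k: {"tenant": t, "kb": k})
    a = await mgr.get("acme-corp", "kb-default")
    b = await mgr.get("acme-corp", "kb-default")
    c = await mgr.get("techstart", "kb-default")
    print(a is b)  # True: the cached instance is reused within a tenant/KB
    print(a is c)  # False: different tenants never share an instance


asyncio.run(main())
```

The per-key cache plus lock is what lets one API process serve many tenants without cross-tenant state leaking through a shared instance.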
* feat: Implement auto-select for first tenant and KB on initial load in WebUI
  - Removed WEBUI_INITIAL_STATE_FIX.md as the issue is resolved.
  - Added a useTenantInitialization hook to automatically select the first available tenant and KB on app load.
  - Integrated the new hook into the Root component of the WebUI.
  - Updated the RetrievalTesting component to ensure a KB is selected before allowing user interaction.
  - Created end-to-end tests for multi-tenant isolation and real service interactions.
  - Added scripts for starting, stopping, and cleaning the development stack.
  - Enhanced API and tenant routes to support tenant-specific pipeline status initialization.
  - Updated constants for the backend URL to reflect the correct port.
  - Improved error handling and logging in various components.

* feat: Add multi-tenant support with enhanced E2E testing scripts and client functionality

* update client

* Add integration and unit tests for multi-tenant API, models, security, and storage
  - Implement integration tests for tenant and knowledge base management endpoints in `test_tenant_api_routes.py`.
  - Create unit tests for tenant isolation, model validation, and role permissions in `test_tenant_models.py`.
  - Add security tests to enforce role-based permissions and context validation in `test_tenant_security.py`.
  - Develop tests for tenant-aware storage operations and context isolation in `test_tenant_storage_phase3.py`.

* feat(e2e): Implement OpenAI model support and database reset functionality

* Add comprehensive test suite for gpt-5-nano compatibility
  - Introduced tests for parameter normalization, embeddings, and entity extraction.
  - Implemented direct API testing for gpt-5-nano.
  - Validated .env configuration loading and OpenAI API connectivity.
  - Analyzed reasoning token overhead with various token limits.
  - Documented test procedures and expected outcomes in README files.
  - Ensured all tests pass for production readiness.
* kg(postgres_impl): ensure AGE extension is loaded in session and configure graph initialization

* dev: add hybrid dev helper scripts, Makefile, docker-compose.dev-db and local development docs

* feat(dev): add dev helper scripts and local development documentation for hybrid setup

* feat(multi-tenant): add detailed specifications and logs for multi-tenant improvements, including UX, backend handling, and ingestion pipeline

* feat(migration): add generated tenant/kb columns, indexes, triggers; drop unused tables; update schema and docs

* test(backward-compat): adapt tests to new StorageNameSpace/TenantService APIs (use concrete dummy storages)

* chore: multi-tenant and UX updates — docs, webui, storage, tenant service adjustments

* tests: stabilize integration tests + skip external services; fix multi-tenant API behavior and idempotency
  - gpt5_nano_compatibility: add pytest-asyncio markers, skip when the OpenAI key is missing, prevent module-level asyncio.run collection, add conftest
  - Ollama tests: add server availability check and skip markers; avoid pytest collection warnings by renaming helper classes
  - Graph storage tests: rename interactive test functions to avoid pytest collection
  - Document & Tenant routes: support external_ids for idempotency; ensure HTTPExceptions are re-raised
  - LightRAG core: support external_ids in apipeline_enqueue_documents and idempotent logic
  - Tests updated to match API changes (tenant routes & document routes)
  - Add logs and scripts for inspection and audit
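The parameter-normalization contract that the gpt-5-nano compatibility suite below exercises can be sketched as follows. This is a hypothetical re-implementation for illustration only; the real helper is `_normalize_openai_kwargs_for_model` in `lightrag.llm.openai`, and the buffer size here is an assumption, not the actual value.

```python
# Assumed headroom so hidden reasoning tokens don't starve the visible output;
# the real buffer used by LightRAG may differ.
REASONING_TOKEN_BUFFER = 1000


def normalize_kwargs_for_gpt5(model: str, kwargs: dict) -> None:
    """Mutate kwargs in place for gpt-5* models; leave other models untouched."""
    if not model.startswith("gpt-5"):
        return
    max_tokens = kwargs.pop("max_tokens", None)
    if max_tokens is not None and "max_completion_tokens" not in kwargs:
        # Convert max_tokens -> max_completion_tokens, buffered upward
        kwargs["max_completion_tokens"] = max_tokens + REASONING_TOKEN_BUFFER
    # gpt-5-nano rejects a non-default temperature; top_p passes through
    kwargs.pop("temperature", None)


kw = {"max_tokens": 500, "temperature": 0.7, "top_p": 0.9}
normalize_kwargs_for_gpt5("gpt-5-nano", kw)
print(kw)  # {'top_p': 0.9, 'max_completion_tokens': 1500}
```

If both `max_tokens` and `max_completion_tokens` are supplied, the explicit `max_completion_tokens` wins and `max_tokens` is simply dropped, which is exactly the edge case Test 1b below asserts.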
#!/usr/bin/env python3
"""
Test script to verify gpt-5-nano compatibility with LightRAG.

This script validates that:
1. gpt-5-nano parameter handling works correctly (max_completion_tokens conversion)
2. The temperature parameter is properly handled for gpt-5-nano
3. Embeddings work with the gpt-5-nano configuration
4. Entity extraction works with gpt-5-nano
5. The full pipeline works end-to-end

Requires:
- OPENAI_API_KEY environment variable set
- LLM_MODEL environment variable set to gpt-5-nano (defaults to gpt-5-nano if unset)

Run standalone: python test_gpt5_nano_compatibility.py
Run via pytest: pytest test_gpt5_nano_compatibility.py -v (some tests are skipped without an API key)
"""

import os
import sys
import asyncio
import logging

import pytest

# Set up logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# Add the repo root to sys.path so lightrag is importable when run standalone
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))

from lightrag.llm.openai import (
    openai_complete_if_cache,
    openai_embed,
    _normalize_openai_kwargs_for_model,
)


# Skip marker for tests that require a real OpenAI-compatible API key
NEEDS_API_KEY = pytest.mark.skipif(
    not os.getenv("OPENAI_API_KEY") and not os.getenv("LLM_BINDING_API_KEY"),
    reason="OPENAI_API_KEY or LLM_BINDING_API_KEY not set",
)

@pytest.mark.asyncio
async def test_parameter_normalization():
    """Test 1: Parameter normalization for gpt-5-nano"""
    logger.info("=" * 60)
    logger.info("TEST 1: Parameter normalization for gpt-5-nano")
    logger.info("=" * 60)

    try:
        # Test case 1a: max_tokens conversion to max_completion_tokens with buffer
        kwargs = {"max_tokens": 500, "temperature": 0.7, "top_p": 0.9}
        original_kwargs = kwargs.copy()
        _normalize_openai_kwargs_for_model("gpt-5-nano", kwargs)

        logger.info(f"Input kwargs: {original_kwargs}")
        logger.info(f"Output kwargs: {kwargs}")

        assert "max_completion_tokens" in kwargs, "max_tokens should be converted to max_completion_tokens"
        assert kwargs["max_completion_tokens"] >= 500, "max_completion_tokens should be at least the original value (500)"
        assert "max_tokens" not in kwargs, "max_tokens should be removed"
        assert "temperature" not in kwargs, "temperature should be removed for gpt-5-nano"
        assert "top_p" in kwargs, "top_p should be preserved"

        logger.info(f"✅ Test 1a passed: max_tokens → max_completion_tokens conversion works (buffered from 500 to {kwargs['max_completion_tokens']})")

        # Test case 1b: both max_tokens and max_completion_tokens (edge case)
        kwargs = {
            "max_tokens": 200,
            "max_completion_tokens": 300,
            "temperature": 0.5,
        }
        original_kwargs = kwargs.copy()
        _normalize_openai_kwargs_for_model("gpt-5-nano", kwargs)

        logger.info(f"Input kwargs (both max params): {original_kwargs}")
        logger.info(f"Output kwargs: {kwargs}")

        assert "max_tokens" not in kwargs, "max_tokens should be removed"
        assert "max_completion_tokens" in kwargs, "max_completion_tokens should be kept"
        assert "temperature" not in kwargs, "temperature should be removed"

        logger.info("✅ Test 1b passed: both max parameters handled correctly")

        # Test case 1c: non-gpt-5 models should be left unchanged
        kwargs = {"max_tokens": 500, "temperature": 0.7}
        original_kwargs = kwargs.copy()
        _normalize_openai_kwargs_for_model("gpt-4o-mini", kwargs)

        logger.info(f"Input kwargs (gpt-4o-mini): {original_kwargs}")
        logger.info(f"Output kwargs: {kwargs}")

        assert "max_tokens" in kwargs, "max_tokens should be preserved for gpt-4o-mini"
        assert kwargs["max_tokens"] == 500, "max_tokens value should be unchanged"
        assert "temperature" in kwargs, "temperature should be preserved for gpt-4o-mini"

        logger.info("✅ Test 1c passed: non-gpt-5 models are unchanged")

        logger.info("✅ TEST 1 PASSED: Parameter normalization works correctly\n")
        return True
    except Exception as e:
        logger.error(f"❌ TEST 1 FAILED: {e}")
        import traceback
        traceback.print_exc()
        return False

@pytest.mark.asyncio
@NEEDS_API_KEY
async def test_embeddings():
    """Test 2: Embeddings generation"""
    logger.info("=" * 60)
    logger.info("TEST 2: Embeddings generation")
    logger.info("=" * 60)

    try:
        texts = ["Hello world", "This is a test"]
        model = os.getenv("EMBEDDING_MODEL", "text-embedding-3-small")

        logger.info(f"Generating embeddings with model: {model}")
        embeddings = await openai_embed(texts, model=model)

        logger.info(f"Generated {len(embeddings)} embeddings")
        logger.info(f"First embedding dimension: {len(embeddings[0])}")

        assert len(embeddings) == len(texts), "Should get one embedding per text"
        assert len(embeddings[0]) > 0, "Embeddings should not be empty"

        logger.info("✅ TEST 2 PASSED: Embeddings generation works\n")
        return True
    except Exception as e:
        logger.error(f"❌ TEST 2 FAILED: {e}")
        import traceback
        traceback.print_exc()
        return False

@pytest.mark.asyncio
@NEEDS_API_KEY
async def test_simple_completion():
    """Test 3: Simple LLM completion with gpt-5-nano"""
    logger.info("=" * 60)
    logger.info("TEST 3: Simple LLM completion with gpt-5-nano")
    logger.info("=" * 60)

    try:
        model = os.getenv("LLM_MODEL", "gpt-5-nano")

        logger.info(f"Testing completion with model: {model}")

        # Enable verbose debug logging for the OpenAI client
        logging.getLogger("openai").setLevel(logging.DEBUG)

        # No custom temperature: gpt-5-nano only accepts the default
        response = await openai_complete_if_cache(
            model=model,
            prompt="Say hello in one word",
            system_prompt="You are a helpful assistant.",
            max_completion_tokens=20
        )

        logger.info(f"Response: {response}")
        assert len(response) > 0, "Response should not be empty"

        logger.info("✅ TEST 3 PASSED: Simple completion works\n")
        return True
    except Exception as e:
        logger.error(f"❌ TEST 3 FAILED: {e}")
        import traceback
        traceback.print_exc()
        return False

@pytest.mark.asyncio
@NEEDS_API_KEY
async def test_extraction_with_gpt5nano():
    """Test 4: Entity extraction style task"""
    logger.info("=" * 60)
    logger.info("TEST 4: Entity extraction style task")
    logger.info("=" * 60)

    try:
        model = os.getenv("LLM_MODEL", "gpt-5-nano")

        prompt = """Extract the entities from this text:

Apple Inc. was founded by Steve Jobs in 1976.

Return as JSON with keys: company, person, year."""

        logger.info(f"Testing extraction with model: {model}")

        response = await openai_complete_if_cache(
            model=model,
            prompt=prompt,
            system_prompt="You are an entity extraction assistant. Always respond in valid JSON.",
            max_completion_tokens=100
        )

        logger.info(f"Response: {response}")
        assert len(response) > 0, "Response should not be empty"
        assert "Apple" in response or "apple" in response, "Should mention Apple"

        logger.info("✅ TEST 4 PASSED: Entity extraction works\n")
        return True
    except Exception as e:
        logger.error(f"❌ TEST 4 FAILED: {e}")
        import traceback
        traceback.print_exc()
        return False

@pytest.mark.asyncio
async def test_config_loading():
    """Test 5: Configuration loading from .env"""
    logger.info("=" * 60)
    logger.info("TEST 5: Configuration loading from .env")
    logger.info("=" * 60)

    llm_model = os.getenv("LLM_MODEL", "not-set")
    llm_binding = os.getenv("LLM_BINDING", "not-set")
    embedding_model = os.getenv("EMBEDDING_MODEL", "not-set")
    embedding_binding = os.getenv("EMBEDDING_BINDING", "not-set")

    logger.info(f"LLM_MODEL: {llm_model}")
    logger.info(f"LLM_BINDING: {llm_binding}")
    logger.info(f"EMBEDDING_MODEL: {embedding_model}")
    logger.info(f"EMBEDDING_BINDING: {embedding_binding}")

    # Verify we're using OpenAI (or the binding is simply unset)
    assert embedding_binding in ("openai", "not-set"), "EMBEDDING_BINDING should be openai"
    assert llm_binding in ("openai", "not-set"), "LLM_BINDING should be openai"

    logger.info("✅ TEST 5 PASSED: Configuration loaded correctly\n")
    return True

async def _run_all_tests():
    """Run all tests (internal helper, not picked up by pytest)"""
    logger.info("\n" + "=" * 60)
    logger.info("GPT-5-NANO COMPATIBILITY TEST SUITE")
    logger.info("=" * 60 + "\n")

    # Check prerequisites
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        logger.error("❌ OPENAI_API_KEY environment variable not set")
        return False

    results = {
        "Parameter Normalization": await test_parameter_normalization(),
        "Configuration Loading": await test_config_loading(),
        "Embeddings": await test_embeddings(),
        "Simple Completion": await test_simple_completion(),
        "Entity Extraction": await test_extraction_with_gpt5nano(),
    }

    # Summary
    logger.info("=" * 60)
    logger.info("TEST SUMMARY")
    logger.info("=" * 60)

    for test_name, result in results.items():
        status = "✅ PASSED" if result else "❌ FAILED"
        logger.info(f"{test_name}: {status}")

    all_passed = all(results.values())

    if all_passed:
        logger.info("\n" + "=" * 60)
        logger.info("🎉 ALL TESTS PASSED")
        logger.info("=" * 60)
    else:
        logger.info("\n" + "=" * 60)
        logger.info("⚠️ SOME TESTS FAILED")
        logger.info("=" * 60)

    return all_passed

if __name__ == "__main__":
    # Load environment from .env file
    from dotenv import load_dotenv
    load_dotenv(dotenv_path=".env", override=False)

    # Run tests
    success = asyncio.run(_run_all_tests())
    sys.exit(0 if success else 1)