tests: stabilize integration tests + skip external services; fix multi-tenant API behavior and idempotency (#4)
* feat: Implement multi-tenant architecture with tenant and knowledge base models

- Added data models for tenants, knowledge bases, and related configurations.
- Introduced role and permission management for users in the multi-tenant system.
- Created a service layer for managing tenants and knowledge bases, including CRUD operations.
- Developed a tenant-aware instance manager for LightRAG with caching and isolation features.
- Added a migration script to transition existing workspace-based deployments to the new multi-tenant architecture.

* chore: ignore lightrag/api/webui/assets/ directory

* chore: stop tracking lightrag/api/webui/assets (ignore in .gitignore)

* feat: Initialize LightRAG Multi-Tenant Stack with PostgreSQL

- Added README.md for project overview, setup instructions, and architecture details.
- Created docker-compose.yml to define services: PostgreSQL, Redis, LightRAG API, and Web UI.
- Introduced env.example for environment variable configuration.
- Implemented init-postgres.sql for PostgreSQL schema initialization with multi-tenant support.
- Added reproduce_issue.py for testing default tenant access via API.

* feat: Enhance TenantSelector and update related components for improved multi-tenant support

* feat: Enhance testing capabilities and update documentation

- Updated Makefile to include new test commands for various modes (compatibility, isolation, multi-tenant, security, coverage, and dry-run).
- Modified API health check endpoint in Makefile to reflect new port configuration.
- Updated QUICK_START.md and README.md to reflect changes in service URLs and ports.
- Added environment variables for testing modes in env.example.
- Introduced run_all_tests.sh script to automate testing across different modes.
- Created conftest.py for pytest configuration, including database fixtures and mock services.
- Implemented database helper functions for streamlined database operations in tests.
- Added test collection hooks to skip tests based on the current MULTITENANT_MODE.

* feat: Implement multi-tenant support with demo mode enabled by default

- Added multi-tenant configuration to the environment and Docker setup.
- Created pre-configured demo tenants (acme-corp and techstart) for testing.
- Updated API endpoints to support tenant-specific data access.
- Enhanced Makefile commands for better service management and database operations.
- Introduced user-tenant membership system with role-based access control.
- Added comprehensive documentation for multi-tenant setup and usage.
- Fixed issues with document visibility in multi-tenant environments.
- Implemented necessary database migrations for user memberships and legacy support.

* feat(audit): Add final audit report for multi-tenant implementation

- Documented overall assessment, architecture overview, test results, security findings, and recommendations.
- Included detailed findings on critical security issues and architectural concerns.

fix(security): Implement security fixes based on audit findings

- Removed global RAG fallback and enforced strict tenant context.
- Configured super-admin access and required user authentication for tenant access.
- Cleared localStorage on logout and improved error handling in WebUI.

chore(logs): Create task logs for audit and security fixes implementation

- Documented actions, decisions, and next steps for both audit and security fixes.
- Summarized test results and remaining recommendations.

chore(scripts): Enhance development stack management scripts

- Added scripts for cleaning, starting, and stopping the development stack.
- Improved output messages and ensured graceful shutdown of services.

feat(starter): Initialize PostgreSQL with AGE extension support

- Created initialization scripts for PostgreSQL extensions including uuid-ossp, vector, and AGE.
- Ensured successful installation and verification of extensions.

* feat: Implement auto-select for first tenant and KB on initial load in WebUI

- Removed WEBUI_INITIAL_STATE_FIX.md as the issue is resolved.
- Added useTenantInitialization hook to automatically select the first available tenant and KB on app load.
- Integrated the new hook into the Root component of the WebUI.
- Updated RetrievalTesting component to ensure a KB is selected before allowing user interaction.
- Created end-to-end tests for multi-tenant isolation and real service interactions.
- Added scripts for starting, stopping, and cleaning the development stack.
- Enhanced API and tenant routes to support tenant-specific pipeline status initialization.
- Updated constants for backend URL to reflect the correct port.
- Improved error handling and logging in various components.

* feat: Add multi-tenant support with enhanced E2E testing scripts and client functionality

* update client

* Add integration and unit tests for multi-tenant API, models, security, and storage

- Implement integration tests for tenant and knowledge base management endpoints in `test_tenant_api_routes.py`.
- Create unit tests for tenant isolation, model validation, and role permissions in `test_tenant_models.py`.
- Add security tests to enforce role-based permissions and context validation in `test_tenant_security.py`.
- Develop tests for tenant-aware storage operations and context isolation in `test_tenant_storage_phase3.py`.

* feat(e2e): Implement OpenAI model support and database reset functionality

* Add comprehensive test suite for gpt-5-nano compatibility

- Introduced tests for parameter normalization, embeddings, and entity extraction.
- Implemented direct API testing for gpt-5-nano.
- Validated .env configuration loading and OpenAI API connectivity.
- Analyzed reasoning token overhead with various token limits.
- Documented test procedures and expected outcomes in README files.
- Ensured all tests pass for production readiness.

* kg(postgres_impl): ensure AGE extension is loaded in session and configure graph initialization

* dev: add hybrid dev helper scripts, Makefile, docker-compose.dev-db and local development docs

* feat(dev): add dev helper scripts and local development documentation for hybrid setup

* feat(multi-tenant): add detailed specifications and logs for multi-tenant improvements, including UX, backend handling, and ingestion pipeline

* feat(migration): add generated tenant/kb columns, indexes, triggers; drop unused tables; update schema and docs

* test(backward-compat): adapt tests to new StorageNameSpace/TenantService APIs (use concrete dummy storages)

* chore: multi-tenant and UX updates — docs, webui, storage, tenant service adjustments

* tests: stabilize integration tests + skip external services; fix multi-tenant API behavior and idempotency

- gpt5_nano_compatibility: add pytest-asyncio markers, skip when OPENAI key missing, prevent module-level asyncio.run collection, add conftest
- Ollama tests: add server availability check and skip markers; avoid pytest collection warnings by renaming helper classes
- Graph storage tests: rename interactive test functions to avoid pytest collection
- Document & Tenant routes: support external_ids for idempotency; ensure HTTPExceptions are re-raised
- LightRAG core: support external_ids in apipeline_enqueue_documents and idempotent logic
- Tests updated to match API changes (tenant routes & document routes)
- Add logs and scripts for inspection and audit

Files in this directory:

  • __init__.py
  • conftest.py
  • README.md
  • test_direct_gpt5nano.py
  • test_env_config.py
  • test_gpt5_nano_compatibility.py
  • test_gpt5_reasoning.py

GPT-5-Nano Compatibility Tests

This directory contains comprehensive tests that validate LightRAG's compatibility with OpenAI's gpt-5-nano model, including its specific API constraints and parameter requirements.

Overview

gpt-5-nano is a cost-optimized reasoning model that differs from traditional LLMs in important ways:

  • Uses max_completion_tokens instead of max_tokens
  • Does NOT support custom temperature parameter
  • Has built-in reasoning that consumes tokens from the completion budget
  • Requires token budget adjustments to account for reasoning overhead

These tests validate that LightRAG handles these constraints correctly.
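
For illustration, here is a minimal sketch of what these constraints mean for a raw Chat Completions call (assuming the official openai Python SDK; the non-gpt-5 model name is a placeholder):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A traditional model accepts max_tokens and a custom temperature:
client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder for any non-gpt-5 model
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=200,
    temperature=0.7,
)

# gpt-5-nano rejects both: it needs max_completion_tokens, the default
# temperature, and headroom for internal reasoning tokens:
client.chat.completions.create(
    model="gpt-5-nano",
    messages=[{"role": "user", "content": "Hello"}],
    max_completion_tokens=500,  # budget shared by reasoning and visible output
)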

Test Files

1. test_gpt5_nano_compatibility.py (Primary Test Suite)

Purpose: Comprehensive compatibility validation
Tests:

  • Test 1: Parameter normalization (max_tokens → max_completion_tokens conversion)
  • Test 2: Configuration loading from .env
  • Test 3: Embeddings generation with gpt-5-nano
  • Test 4: Simple LLM completion
  • Test 5: Entity extraction tasks

Run: python test_gpt5_nano_compatibility.py

Expected Output:

✅ Parameter Normalization: PASSED
✅ Configuration Loading: PASSED
✅ Embeddings: PASSED
✅ Simple Completion: PASSED
✅ Entity Extraction: PASSED
🎉 ALL TESTS PASSED

2. test_env_config.py

Purpose: Validate that the .env configuration is properly respected
Tests:

  • Part 1: .env file loading
  • Part 2: Config parser respects .env variables
  • Part 3: OpenAI API connectivity
  • Part 4: Embeddings generation with configured model
  • Part 5: LLM extraction with configured model
  • Part 6: Full RAG pipeline integration

Run: python test_env_config.py

Expected Output:

✅ .env Loading: PASSED
✅ Config Parser: PASSED
✅ OpenAI Connectivity: PASSED
✅ Embeddings: PASSED
✅ LLM Extraction: PASSED
✅ Full Integration: PASSED
OVERALL: 6/6 tests passed

3. test_direct_gpt5nano.py

Purpose: Direct API testing without the LightRAG abstraction
Validates: Raw gpt-5-nano API behavior with the proper parameters

Run: python test_direct_gpt5nano.py

What it does:

  • Sends direct API request to gpt-5-nano
  • Uses max_completion_tokens parameter
  • Prints the raw response and token usage (a minimal sketch of such a call follows)
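
A sketch of such a direct request, with an illustrative prompt and budget (the test script itself is authoritative):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-5-nano",
    messages=[{"role": "user", "content": "Name three primary colors."}],
    max_completion_tokens=500,
)
print(response.choices[0].message.content)  # raw model output
print(response.usage)  # token usage, including reasoning token details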

4. test_gpt5_reasoning.py

Purpose: Understand gpt-5-nano's reasoning token overhead
Tests: Token allocation with different reasoning effort levels

Run: python test_gpt5_reasoning.py

What it does:

  • Test 1: 200 token budget
  • Test 2: 50 token budget with reasoning_effort="low"
  • Outputs the actual reasoning tokens consumed (sketched below)
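
A sketch of that measurement, assuming the openai SDK exposes reasoning token counts via usage.completion_tokens_details (present in recent SDK versions):

from openai import OpenAI

client = OpenAI()
for budget, effort in [(200, None), (50, "low")]:
    # Pass reasoning_effort only when the test specifies it
    extra = {"reasoning_effort": effort} if effort else {}
    response = client.chat.completions.create(
        model="gpt-5-nano",
        messages=[{"role": "user", "content": "Say hello."}],
        max_completion_tokens=budget,
        **extra,
    )
    details = response.usage.completion_tokens_details
    print(f"budget={budget} effort={effort}: "
          f"reasoning_tokens={details.reasoning_tokens}, "
          f"output={response.choices[0].message.content!r}")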

Prerequisites

Environment Variables

Create a .env file in the repository root with:

# Required for all tests
OPENAI_API_KEY=sk-...

# For LLM tests
LLM_BINDING=openai
LLM_MODEL=gpt-5-nano
LLM_BINDING_API_KEY=sk-...

# For embedding tests
EMBEDDING_BINDING=openai
EMBEDDING_MODEL=text-embedding-3-small
EMBEDDING_BINDING_API_KEY=sk-...
EMBEDDING_DIM=1536

Or use existing .env configuration if already set up.

Python Dependencies

pip install openai
pip install python-dotenv
pip install lightrag  # for integration tests
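
As a quick sanity check that the configuration loads, here is a sketch using python-dotenv (the variable names come from the env.example block above):

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
for var in ("OPENAI_API_KEY", "LLM_MODEL", "EMBEDDING_MODEL", "EMBEDDING_DIM"):
    value = os.getenv(var)
    print(f"{var}: {'set' if value else 'MISSING'}")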

Running All Tests

From this directory:

# Run individual test
python test_gpt5_nano_compatibility.py

# Or run all tests
for test in test_*.py; do 
    echo "Running $test..."
    python "$test"
done

From repository root:

# Run specific test
python -m pytest tests/gpt5_nano_compatibility/test_gpt5_nano_compatibility.py -v

# Or run all tests in this directory
python -m pytest tests/gpt5_nano_compatibility/ -v

Key Findings & Implementation

Problem: Parameter Incompatibility

gpt-5-nano requires different parameter names and constraints than other OpenAI models.

Issue:

  • Other models use max_tokens
  • gpt-5-nano requires max_completion_tokens

Solution: A normalization function _normalize_openai_kwargs_for_model() in /lightrag/llm/openai.py that:

  1. Detects gpt-5 models
  2. Converts max_tokens → max_completion_tokens
  3. Applies a 2.5x token multiplier (minimum 300 tokens) to account for reasoning overhead
  4. Removes the unsupported temperature parameter (a sketch follows this list)
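
A minimal sketch of that normalization, reconstructed from the behavior described above (the authoritative implementation lives in /lightrag/llm/openai.py and may differ in detail):

def _normalize_openai_kwargs_for_model(model: str, kwargs: dict) -> dict:
    # Sketch only: non-gpt-5 models pass through unchanged.
    if not model.startswith("gpt-5"):
        return kwargs
    normalized = dict(kwargs)
    # gpt-5 models reject max_tokens: rename it and add reasoning headroom.
    if "max_tokens" in normalized:
        budget = normalized.pop("max_tokens")
        normalized["max_completion_tokens"] = max(int(budget * 2.5), 300)
    # A custom temperature is unsupported, so drop it.
    normalized.pop("temperature", None)
    return normalized

Applied to the example in the Parameter Handling section below, {"max_tokens": 500, "temperature": 0.7} becomes {"max_completion_tokens": 1250}.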

Problem: Empty Responses

gpt-5-nano was returning empty responses despite successful API calls.

Root Cause: Internal reasoning consumes tokens from the completion budget. With an insufficient token budget, all tokens were consumed by reasoning, leaving nothing for the actual output.

Solution: Empirical testing showed that:

  • 200 tokens: Often empty responses
  • 300+ tokens: Consistent full responses
  • 2.5x multiplier: Provides adequate margin for reasoning

Parameter Handling

For gpt-5-nano models:

# Before normalization:
{"max_tokens": 500, "temperature": 0.7}

# After normalization:
{"max_completion_tokens": 1250}  # 500 * 2.5, min 300

For other models:

# Unchanged
{"max_tokens": 500, "temperature": 0.7}

Test Results Summary

All tests validate:

  • Parameter normalization works correctly
  • gpt-5-nano parameter constraints are handled
  • Backward compatibility maintained (other models unaffected)
  • Configuration from .env is respected
  • OpenAI API integration functions properly
  • Embeddings generation works
  • Entity extraction works with gpt-5-nano
  • Full RAG pipeline integration successful

Troubleshooting

"OPENAI_API_KEY not set"

  • Ensure .env file exists in repository root
  • Verify OPENAI_API_KEY is set: echo $OPENAI_API_KEY

"max_tokens unsupported with this model"

  • This error means parameter normalization isn't being called
  • Check that you're using LightRAG functions (not direct OpenAI client)
  • Verify the normalization function is in /lightrag/llm/openai.py

"Empty API responses"

  • Increase the token budget (empirical testing showed 300+ tokens yields consistent responses)
  • If using custom token limits, multiply by 2.5 minimum

"temperature does not support 0.7"

  • gpt-5-nano doesn't accept a custom temperature
  • The normalization function removes it automatically
  • No action needed if using LightRAG functions

Documentation

For more details, see:

  • /docs/GPT5_NANO_COMPATIBILITY.md - User guide
  • /docs/GPT5_NANO_COMPATIBILITY_IMPLEMENTATION.md - Technical implementation details
  • /lightrag/llm/openai.py - Contains parameter normalization logic
  • /lightrag/llm/azure_openai.py - Azure OpenAI integration with same normalization
  • /.env - Configuration file (use .env.example as template)

Maintenance Notes

When updating LightRAG's OpenAI integration:

  1. Run all tests to ensure backward compatibility
  2. If adding new OpenAI models, test with gpt-5-nano constraints
  3. Update parameter normalization logic if OpenAI adds new gpt-5 variants
  4. Keep max_tokens * 2.5 strategy unless OpenAI documents different reasoning overhead

Last Updated: 2024
Status: All tests passing
Model Tested: gpt-5-nano
OpenAI SDK: Latest (with max_completion_tokens support)