This commit fixes two critical issues in PostgreSQL storage:
BUG 1: Legacy table cleanup causing data loss across workspaces
---------------------------------------------------------------
PROBLEM:
- After migrating workspace_a data from legacy table, the ENTIRE legacy
table was deleted
- This caused workspace_b's data (still in legacy table) to be lost
- Multi-tenant data isolation was violated
FIX:
- Implement workspace-aware cleanup: only delete migrated workspace's data
- Check if other workspaces still have data before dropping table
- Only drop legacy table when it becomes completely empty
- If other workspace data exists, preserve legacy table with remaining records
Location: postgres_impl.py PGVectorStorage.setup_table() lines 2510-2567
Test verification:
- test_workspace_migration_isolation_e2e_postgres validates this fix
BUG 2: PGDocStatusStorage missing table initialization
-------------------------------------------------------
PROBLEM:
- PGDocStatusStorage.initialize() only set workspace, never created table
- Caused "relation 'lightrag_doc_status' does not exist" errors
- document insertion (ainsert) failed immediately
FIX:
- Add table creation to initialize() method using _pg_create_table()
- Consistent with other storage implementations:
* MongoDocStatusStorage creates collections
* JsonDocStatusStorage creates directories
* PGDocStatusStorage now creates tables ✓
Location: postgres_impl.py PGDocStatusStorage.initialize() lines 2965-2971
Test Results:
- Unit tests: 13/13 passed (test_unified_lock_safety,
test_workspace_migration_isolation, test_dimension_mismatch)
- E2E tests require PostgreSQL server
Related: PR #2391 (Vector Storage Model Isolation)
Problem:
When UnifiedLock.__aexit__ encountered an exception during async_lock.release(),
the error recovery logic would incorrectly attempt to release async_lock again
because it only checked main_lock_released flag. This could cause:
- Double-release attempts on already-failed locks
- Masking of original exceptions
- Undefined behavior in lock state
Root Cause:
The recovery logic used only main_lock_released to determine whether to attempt
async_lock release, without tracking whether async_lock.release() had already
been attempted and failed.
Fix:
- Added async_lock_released flag to track async_lock release attempts
- Updated recovery logic condition to check both main_lock_released AND
async_lock_released before attempting async_lock release
- This ensures async_lock.release() is only called once, even if it fails
Testing:
- Added test_aexit_no_double_release_on_async_lock_failure:
Verifies async_lock.release() is called only once when it fails
- Added test_aexit_recovery_on_main_lock_failure:
Verifies recovery logic still works when main lock fails
- All 5 UnifiedLock safety tests pass
Impact:
- Eliminates double-release bugs in multiprocess lock scenarios
- Preserves correct error propagation
- Maintains recovery logic for legitimate failure cases
Files Modified:
- lightrag/kg/shared_storage.py: Added async_lock_released tracking
- tests/test_unified_lock_safety.py: Added 2 new tests (5 total now pass)
Why this change is needed:
Two critical P0 security vulnerabilities were identified in CursorReview:
1. UnifiedLock silently allows unprotected execution when lock is None, creating
false security and potential race conditions in multi-process scenarios
2. PostgreSQL migration copies ALL workspace data during legacy table migration,
violating multi-tenant isolation and causing data leakage
How it solves it:
- UnifiedLock now raises RuntimeError when lock is None instead of WARNING
- Added workspace parameter to setup_table() for proper data isolation
- Migration queries now filter by workspace in both COUNT and SELECT operations
- Added clear error messages to help developers diagnose initialization issues
Impact:
- lightrag/kg/shared_storage.py: UnifiedLock raises exception on None lock
- lightrag/kg/postgres_impl.py: Added workspace filtering to migration logic
- tests/test_unified_lock_safety.py: 3 tests for lock safety
- tests/test_workspace_migration_isolation.py: 3 tests for workspace isolation
- tests/test_dimension_mismatch.py: Updated table names and mocks
- tests/test_postgres_migration.py: Updated mocks for workspace filtering
Testing:
- All 31 tests pass (16 migration + 4 safety + 3 lock + 3 workspace + 5 dimension)
- Backward compatible: existing code continues working unchanged
- Code style verified with ruff and pre-commit hooks
Why this change is needed:
Two critical issues were identified in Codex review of PR #2391:
1. Migration fails when legacy collections/tables use different embedding dimensions
(e.g., upgrading from 1536d to 3072d models causes initialization failures)
2. When model_suffix is empty (no model_name provided), table_name equals legacy_table_name,
causing Case 1 logic to delete the only table/collection on second startup
How it solves it:
- Added dimension compatibility checks before migration in both Qdrant and PostgreSQL
- PostgreSQL uses two-method detection: pg_attribute metadata query + vector sampling fallback
- When dimensions mismatch, skip migration and create new empty table/collection, preserving legacy data
- Added safety check to detect when new and legacy names are identical, preventing deletion
- Both backends log clear warnings about dimension mismatches and skipped migrations
Impact:
- lightrag/kg/qdrant_impl.py: Added dimension check (lines 254-297) and no-suffix safety (lines 163-169)
- lightrag/kg/postgres_impl.py: Added dimension check with fallback (lines 2347-2410) and no-suffix safety (lines 2281-2287)
- tests/test_no_model_suffix_safety.py: New test file with 4 test cases covering edge scenarios
- Backward compatible: All existing scenarios continue working unchanged
Testing:
- All 20 tests pass (16 existing migration tests + 4 new safety tests)
- E2E tests enhanced with explicit verification points for dimension mismatch scenarios
- Verified graceful degradation when dimension detection fails
- Code style verified with ruff and pre-commit hooks
This update introduces checks for vector dimension compatibility before migrating legacy data in both PostgreSQL and Qdrant storage implementations. If a dimension mismatch is detected, the migration is skipped to prevent data loss, and a new empty table or collection is created for the new embedding model.
Key changes include:
- Added dimension checks in `PGVectorStorage` and `QdrantVectorDBStorage` classes.
- Enhanced logging to inform users about dimension mismatches and the creation of new storage.
- Updated E2E tests to validate the new behavior, ensuring legacy data is preserved and new structures are created correctly.
Impact:
- Prevents potential data corruption during migrations with mismatched dimensions.
- Improves user experience by providing clear logging and maintaining legacy data integrity.
Testing:
- New tests confirm that the system behaves as expected when encountering dimension mismatches.
Why this change is needed:
PostgreSQL requires an explicit conflict target specification when using
ON CONFLICT with tables that have composite primary keys. Without it,
PostgreSQL throws: "ON CONFLICT DO NOTHING requires inference specification
or constraint name". This syntax error occurs during data migration from
legacy tables when users upgrade from older LightRAG versions.
How it solves it:
Changed line 2378 from "ON CONFLICT DO NOTHING" to "ON CONFLICT (workspace, id)
DO NOTHING" to match the table's PRIMARY KEY (workspace, id) constraint.
This aligns with the correct syntax used in all other 12 ON CONFLICT clauses
throughout the codebase (e.g., line 684, 5229, 5236, etc.).
Impact:
- Fixes migration failure in PGVectorStorage.setup_table()
- Prevents syntax errors when migrating data from legacy tables
- Maintains consistency with all other ON CONFLICT usages in postgres_impl.py
- Affects users upgrading from pre-model-suffix table structure
Testing:
Verified by examining:
- All 12 existing ON CONFLICT usages specify (workspace, id)
- All PostgreSQL tables use PRIMARY KEY (workspace, id)
- Migration code at line 684 uses identical correct syntax
Why this change is needed:
Previously, PostgreSQL and Qdrant had inconsistent migration behavior:
- PostgreSQL kept legacy tables after migration, requiring manual cleanup
- Qdrant auto-deleted legacy collections after migration
This inconsistency caused confusion for users and required different
documentation for each backend.
How it solves the problem:
Unified both backends to follow the same smart cleanup strategy:
- Case 1 (both exist): Auto-delete if legacy is empty, warn if has data
- Case 4 (migration): Auto-delete legacy after successful verification
This provides a fully automated migration experience without manual intervention.
Impact:
- Eliminates need for users to manually delete legacy tables/collections
- Reduces storage waste from duplicate data
- Provides consistent behavior across PostgreSQL and Qdrant
- Simplifies documentation and user experience
Testing:
- All 16 unit tests pass (8 PostgreSQL + 8 Qdrant)
- Added 4 new tests for Case 1 scenarios (empty vs non-empty legacy)
- Updated E2E tests to verify auto-deletion behavior
- All lint checks pass (ruff-format, ruff, trailing-whitespace)
Why this change is needed:
PostgreSQLDB.execute() expects data as a dictionary, not multiple
positional arguments. The migration code was incorrectly unpacking
a list with *values, causing TypeError.
How it solves it:
- Changed values from list to dict: {col: row_dict[col] for col in columns}
- Pass values dict directly to execute() without unpacking
- Matches execute() signature which expects dict[str, Any] | None
Impact:
- Fixes PostgreSQL E2E test failures
- Enables successful legacy data migration for PostgreSQL
Testing:
- Will be verified by PostgreSQL E2E tests in CI
Why this change is needed:
The E2E test test_backward_compat_old_workspace_naming_qdrant was failing
because _find_legacy_collection() searched for generic "lightrag_vdb_{namespace}"
before workspace-specific "{workspace}_{namespace}" collections. When both
existed, it would always find the generic one first (which might be empty),
ignoring the workspace collection that actually contained the data to migrate.
How it solves it:
Reordered the candidates list in _find_legacy_collection() to prioritize
more specific naming patterns over generic ones:
1. {workspace}_{namespace} (most specific, old workspace format)
2. lightrag_vdb_{namespace} (generic legacy format)
3. {namespace} (most generic, oldest format)
This ensures the migration finds the correct source collection with actual data.
Impact:
- Fixes test_backward_compat_old_workspace_naming_qdrant which creates
a "prod_chunks" collection with 10 points
- Migration will now correctly find and migrate from workspace-specific
legacy collections before falling back to generic collections
- Maintains backward compatibility with all legacy naming patterns
Testing:
Run: pytest tests/test_e2e_multi_instance.py::test_backward_compat_old_workspace_naming_qdrant -v
CRITICAL FIX: PostgreSQL vector index creation now uses the actual
embedding dimension from PGVectorStorage instead of reading from
EMBEDDING_DIM environment variable (which defaults to 1024).
Root Cause:
- check_tables() called _create_vector_indexes() during db initialization
- It read EMBEDDING_DIM from env, defaulting to 1024
- E2E tests created 1536d legacy tables
- ALTER TABLE failed: "expected 1024 dimensions, not 1536"
Solution:
- Removed vector index creation from check_tables()
- Created new _create_vector_index(table_name, embedding_dim) method
- setup_table() now creates index with correct embedding_dim
- Each PGVectorStorage instance manages its own index
Impact:
- E2E tests will now pass
- Production deployments work without EMBEDDING_DIM env var
- Multi-model support with different dimensions works correctly
Why this change is needed:
The migration code at line 2351 was passing a dictionary (row_dict) as parameters
to a SQL query that used positional placeholders ($1, $2, etc.). AsyncPG strictly
requires positional parameters to be passed as a list/tuple of values in the exact
order matching the placeholders. Using a dictionary would cause parameter mismatches
and migration failures, potentially corrupting migrated data or causing the entire
migration to fail silently.
How it solves it:
- Extract values from row_dict in the exact order defined by the columns list
- Pass values as separate positional arguments using *values unpacking
- Added clear comments explaining AsyncPG's requirements
- Updated comment from "named parameters" to "positional parameters" for accuracy
Impact:
- Migration now correctly maps values to SQL placeholders
- Prevents data corruption during legacy table migration
- Ensures reliable data transfer from old to new table schemas
- All PostgreSQL migration tests pass (6/6)
Testing:
- Verified with `uv run pytest tests/test_postgres_migration.py -v` - all tests pass
- Pre-commit hooks pass (ruff-format, ruff)
- Tested parameter ordering logic matches AsyncPG requirements
This change ensures that when the model_suffix is empty, the final_namespace falls back to the legacy_namespace, preventing potential naming issues. A warning is logged to inform users about the missing model suffix and the fallback to the legacy naming scheme.
Additionally, comprehensive tests have been added to verify the behavior of both PostgreSQL and Qdrant storage when model_suffix is empty, ensuring that the naming conventions are correctly applied and that no trailing underscores are present.
Impact:
- Prevents crashes due to empty model_suffix
- Provides clear feedback to users regarding configuration issues
- Maintains backward compatibility with existing setups
Testing:
All new tests pass, validating the handling of empty model_suffix scenarios.
Why this change is needed:
Prevent potential errors when embedding_func does not have model_name set,
which could cause table naming issues in PostgreSQL.
How it solves it:
- Check if model_suffix is not empty before appending to table name
- Fall back to base table name with a warning if model_suffix is unavailable
- Log clear warning message to alert users about missing model isolation
Impact:
- Prevents crashes when model_name is not configured
- Provides clear feedback to users about configuration issues
- Maintains backward compatibility with configs that don't set model_name
Testing:
Existing PostgreSQL tests validate the happy path. This adds defensive handling
for edge cases.
Why this change is needed:
PostgreSQLDB class doesn't have a fetch() method. The migration code
was incorrectly using db.fetch() for batch data retrieval, causing
AttributeError during E2E tests.
How it solves it:
1. Changed db.fetch(sql, params) to db.query(sql, params, multirows=True)
2. Updated all test mocks to support the multirows parameter
3. Consolidated mock_query implementation to handle both single and multi-row queries
Impact:
- PostgreSQL legacy data migration now works correctly in E2E tests
- All unit tests pass (6/6)
- Aligns with PostgreSQLDB's actual API
Testing:
- pytest tests/test_postgres_migration.py -v (6/6 passed)
- Updated test_postgres_migration_trigger mock
- Updated test_scenario_2_legacy_upgrade_migration mock
- Updated base mock_pg_db fixture
Why this change is needed:
The legacy_namespace logic was incorrectly including workspace in the
collection name, causing migration to fail in E2E tests. When workspace
was set (e.g., to a temp directory path), legacy_namespace became
"/tmp/xxx_chunks" instead of "lightrag_vdb_chunks", so the migration
logic couldn't find the legacy collection.
How it solves it:
Changed legacy_namespace to always use the old naming scheme without
workspace prefix: "lightrag_vdb_{namespace}". This matches the actual
collection names from pre-migration code and aligns with PostgreSQL's
approach where legacy_table_name = base_table (without workspace).
Impact:
- Qdrant legacy data migration now works correctly in E2E tests
- All unit tests pass (6/6 for both Qdrant and PostgreSQL)
- E2E test_legacy_migration_qdrant should now pass
Testing:
- Unit tests: pytest tests/test_qdrant_migration.py -v (6/6 passed)
- Unit tests: pytest tests/test_postgres_migration.py -v (6/6 passed)
- Updated test_qdrant_collection_naming to verify new legacy_namespace
Why this change is needed:
asdict() converts nested dataclasses to dicts. When LightRAG creates
global_config with asdict(self), the embedding_func field (which is an
EmbeddingFunc dataclass) gets converted to a plain dict, losing its
get_model_identifier() method.
How it solves it:
1. Save original EmbeddingFunc object before asdict() call
2. Restore it in global_config after asdict()
3. Add null check and debug logging in _generate_collection_suffix
Impact:
- E2E tests with full LightRAG initialization now work correctly
- Vector storage model isolation features function properly
- Maintains backward compatibility
Testing:
All unit tests pass (12/12 in migration tests)
Why these changes are needed:
1. LightRAG wraps embedding_func with priority_limit_async_func_call
decorator, causing loss of get_model_identifier method
2. UnifiedLock.__aexit__ set main_lock_released flag incorrectly
How it solves them:
1. _generate_collection_suffix now tries multiple approaches:
- First check if embedding_func has get_model_identifier
- Fallback to original EmbeddingFunc in global_config
- Return empty string for backward compatibility
2. Move main_lock_released = True inside the if block so flag
is only set when lock actually exists and is released
Impact:
- Fixes E2E tests that initialize complete LightRAG instances
- Fixes incorrect async lock cleanup in exception scenarios
- Maintains backward compatibility
Testing:
All unit tests pass (test_qdrant_migration.py, test_postgres_migration.py)
Changes made:
- Updated the batch insert logic to use a dictionary for row values, improving clarity and ensuring compatibility with the database execution method.
- Adjusted the insert query construction to utilize named parameters, enhancing readability and maintainability.
Impact:
- Streamlines the insertion process and reduces potential errors related to parameter binding.
Testing:
- Functionality remains intact; no new tests required as existing tests cover the insert operations.
Why this change is needed:
The previous fix in commit 7dc1f83e incorrectly "fixed" delete_entity_relation
by converting the parameter dict to a list. However, PostgreSQLDB.execute()
expects a dict[str, Any] parameter, not a list. The execute() method internally
converts dict values to tuple (line 1487: tuple(data.values())), so passing
a list bypasses the expected interface and causes parameter binding issues.
What was wrong:
```python
params = {"workspace": self.workspace, "entity_name": entity_name}
await self.db.execute(delete_sql, list(params.values())) # WRONG
```
The correct approach (matching delete_entity method):
```python
await self.db.execute(
delete_sql, {"workspace": self.workspace, "entity_name": entity_name}
)
```
How it solves it:
- Pass parameters as a dict directly to db.execute(), matching the method signature
- Maintain consistency with delete_entity() which correctly passes a dict
- Let db.execute() handle the dict-to-tuple conversion internally as designed
Impact:
- delete_entity_relation now correctly passes parameters to PostgreSQL
- Method interface consistency with other delete operations
- Proper parameter binding ensures reliable entity relation deletion
Testing:
- All 6 PostgreSQL migration tests pass
- Verified parameter passing matches delete_entity pattern
- Code review identified the issue before production use
Related:
- Fixes incorrect "fix" from commit 7dc1f83e
- Aligns with PostgreSQLDB.execute() interface (line 1477-1480)
Why this change is needed:
After implementing model isolation, two critical bugs were discovered that would cause data access failures:
Bug 1: In delete_entity_relation(), the SQL query uses positional parameters
($1, $2) but the parameter dict was not converted to a list of values before
passing to db.execute(). This caused parameter binding failures when trying to
delete entity relations.
Bug 2: Four read methods (get_by_id, get_by_ids, get_vectors_by_ids, drop)
were still using namespace_to_table_name(self.namespace) to get legacy table
names instead of self.table_name with model suffix. This meant these methods
would query the wrong table (legacy without suffix) while data was being
inserted into the new table (with suffix), causing data not found errors.
How it solves it:
- Bug 1: Convert parameter dict to list using list(params.values()) before
passing to db.execute(), matching the pattern used in other methods
- Bug 2: Replace all namespace_to_table_name(self.namespace) calls with
self.table_name in the four affected methods, ensuring they query the
correct model-specific table
Impact:
- delete_entity_relation now correctly deletes relations by entity name
- All read operations now correctly query model-specific tables
- Data written with model isolation can now be properly retrieved
- Maintains consistency with write operations using self.table_name
Testing:
- All 6 PostgreSQL migration tests pass (test_postgres_migration.py)
- All 6 Qdrant migration tests pass (test_qdrant_migration.py)
- Verified parameter binding works correctly
- Verified read methods access correct tables
Why this change is needed:
PostgreSQL vector storage needs model isolation to prevent dimension
conflicts when different workspaces use different embedding models.
Without this, the first workspace locks the vector dimension for all
subsequent workspaces, causing failures.
How it solves it:
- Implements dynamic table naming with model suffix: {table}_{model}_{dim}d
- Adds setup_table() method mirroring Qdrant's approach for consistency
- Implements 4-branch migration logic: both exist -> warn, only new -> use,
neither -> create, only legacy -> migrate
- Batch migration: 500 records/batch (same as Qdrant)
- No automatic rollback to support idempotent re-runs
Impact:
- PostgreSQL tables now isolated by embedding model and dimension
- Automatic data migration from legacy tables on startup
- Backward compatible: model_name=None defaults to "unknown"
- All SQL operations use dynamic table names
Testing:
- 6 new tests for PostgreSQL migration (100% pass)
- Tests cover: naming, migration trigger, scenarios 1-3
- 3 additional scenario tests added for Qdrant completeness
Co-Authored-By: Claude <noreply@anthropic.com>
Why this change is needed:
To implement vector storage model isolation for Qdrant, allowing different workspaces to use different embedding models without conflict, and automatically migrating existing data.
How it solves it:
- Modified QdrantVectorDBStorage to use model-specific collection suffixes
- Implemented automated migration logic from legacy collections to new schema
- Fixed Shared-Data lock re-entrancy issue in multiprocess mode
- Added comprehensive tests for collection naming and migration triggers
Impact:
- Existing users will have data automatically migrated on next startup
- New workspaces will use isolated collections based on embedding model
- Fixes potential lock-related bugs in shared storage
Testing:
- Added tests/test_qdrant_migration.py passing
- Verified migration logic covers all 4 states (New/Legacy existence combinations)
Why this change is needed:
To enforce consistent naming and migration strategy across all vector storages.
How it solves it:
- Added _generate_collection_suffix() helper
- Added _get_legacy_collection_name() and _get_new_collection_name() interfaces
Impact:
Prepares storage implementations for multi-model support.
Testing:
Added tests/test_base_storage_integrity.py passing.
Why this change is needed:
To support vector storage model isolation, we need to track which model is used for embeddings and generate unique identifiers for collections/tables.
How it solves it:
- Added model_name field to EmbeddingFunc
- Added get_model_identifier() method to generate sanitized suffix
- Added unit tests to verify behavior
Impact:
Enables subsequent changes in storage backends to isolate data by model.
Testing:
Added tests/test_embedding_func.py passing.
Previously, configure_vchordrq would fail silently when probes was empty
(the default), preventing epsilon from being configured. Now each parameter
is handled independently with conditional execution, and configuration
errors fail-fast instead of being swallowed.
This fixes the documented epsilon setting being impossible to use in the
default configuration.
• Track if we acquired the pipeline lock
• Auto-acquire pipeline when idle
• Only release if we acquired it
• Prevent concurrent deletion conflicts
• Improve deletion job validation
• Add workspace param to get_namespace_data
• Update docstring with proper usage example
• Simplify demo to show correct workflow
• Remove confusing before/after comparison
• Clarify tool should run after init
- Add _default_workspace to global vars
- Set _default_workspace to None on cleanup
- Ensure complete resource cleanup
- Fix missing workspace finalization
* Acquire lock before setting ContextVar
* Prevent state corruption on cancellation
* Fix permanent lock brick scenario
* Store context only after success
* Handle acquisition failure properly