Why this change is needed:
After implementing model isolation, two critical bugs were discovered that would cause data access failures:
Bug 1: In delete_entity_relation(), the SQL query uses positional parameters
($1, $2), but the parameter dict was not converted to a list of values before
being passed to db.execute(). This caused parameter binding failures when
deleting entity relations.
Bug 2: Four read methods (get_by_id, get_by_ids, get_vectors_by_ids, drop)
were still using namespace_to_table_name(self.namespace) to derive legacy
table names instead of using self.table_name with the model suffix. These
methods therefore queried the legacy table (without suffix) while data was
being inserted into the new table (with suffix), so reads came back empty.
How it solves it:
- Bug 1: Convert the parameter dict to a list with list(params.values())
before passing it to db.execute(), matching the pattern used in other methods
- Bug 2: Replace all namespace_to_table_name(self.namespace) calls with
self.table_name in the four affected methods, ensuring they query the
correct model-specific table (both fixes are sketched below)
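A minimal sketch of both fixes, assuming an asyncpg-style driver where
db.execute() binds $1/$2 positionally; the standalone signatures and column
names here are illustrative, not the actual source:

```python
from typing import Any

async def delete_entity_relation(db: Any, table_name: str,
                                 workspace: str, entity_name: str) -> None:
    sql = (
        f"DELETE FROM {table_name} "  # model-suffixed table, see Bug 2
        "WHERE workspace = $1 AND (source_name = $2 OR target_name = $2)"
    )
    params = {"workspace": workspace, "entity_name": entity_name}
    # Bug 1 fix: $1/$2 bind by position, so flatten the parameter dict
    # into an ordered list of values before passing it to db.execute().
    await db.execute(sql, list(params.values()))

async def get_by_id(db: Any, table_name: str, workspace: str, id: str) -> Any:
    # Bug 2 fix: read from the model-suffixed table (self.table_name in
    # the real code) that writes go to, not the legacy name returned by
    # namespace_to_table_name(self.namespace).
    sql = f"SELECT * FROM {table_name} WHERE workspace = $1 AND id = $2"
    return await db.fetchrow(sql, [workspace, id])
```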
Impact:
- delete_entity_relation now correctly deletes relations by entity name
- All read operations now correctly query model-specific tables
- Data written with model isolation can now be properly retrieved
- Maintains consistency with write operations, which already use self.table_name
Testing:
- All 6 PostgreSQL migration tests pass (test_postgres_migration.py)
- All 6 Qdrant migration tests pass (test_qdrant_migration.py)
- Verified parameter binding works correctly
- Verified read methods access correct tables
Why this change is needed:
PostgreSQL vector storage needs model isolation to prevent dimension
conflicts when different workspaces use different embedding models.
Without this, the first workspace locks the vector dimension for all
subsequent workspaces, causing failures.
How it solves it:
- Implements dynamic table naming with model suffix: {table}_{model}_{dim}d
- Adds setup_table() method mirroring Qdrant's approach for consistency
- Implements 4-branch migration logic (sketched below): both exist -> warn,
only new -> use, neither -> create, only legacy -> migrate
- Batch migration: 500 records/batch (same as Qdrant)
- No automatic rollback, so the migration can be re-run idempotently
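A hedged sketch of the naming helper and the four branches, written directly
against asyncpg; helper names, the ordering column, and the warning output
are assumptions, not the real implementation:

```python
import re
import asyncpg

def model_table_name(base: str, model_name: str | None, dim: int) -> str:
    # {table}_{model}_{dim}d; model_name=None falls back to "unknown" for
    # backward compatibility, and the model name is sanitized for SQL.
    model = re.sub(r"\W+", "_", (model_name or "unknown").lower())
    return f"{base}_{model}_{dim}d"

async def _exists(conn: asyncpg.Connection, table: str) -> bool:
    return await conn.fetchval("SELECT to_regclass($1) IS NOT NULL", table)

async def setup_table(conn: asyncpg.Connection, legacy: str, new: str,
                      create_sql: str) -> None:
    legacy_exists = await _exists(conn, legacy)
    new_exists = await _exists(conn, new)
    if new_exists and legacy_exists:
        print(f"WARNING: {legacy} and {new} both exist; using {new}")  # branch 1
    elif new_exists:
        return                                  # branch 2: only new -> use as-is
    elif not legacy_exists:
        await conn.execute(create_sql)          # branch 3: neither -> create
    else:                                       # branch 4: only legacy -> migrate
        await conn.execute(create_sql)
        offset = 0
        # Copy 500 records per batch (same batch size as Qdrant); no
        # rollback is attempted on failure, so setup can simply be re-run.
        while rows := await conn.fetch(
            f"SELECT * FROM {legacy} ORDER BY id LIMIT 500 OFFSET {offset}"
        ):
            await conn.copy_records_to_table(
                new,
                records=[tuple(r.values()) for r in rows],
                columns=list(rows[0].keys()),
            )
            offset += len(rows)
```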
Impact:
- PostgreSQL tables now isolated by embedding model and dimension
- Automatic data migration from legacy tables on startup
- Backward compatible: model_name=None defaults to "unknown"
- All SQL operations use dynamic table names
Testing:
- 6 new tests for PostgreSQL migration (100% pass)
- Tests cover: naming, migration trigger, scenarios 1-3
- 3 additional scenario tests added for Qdrant completeness
Co-Authored-By: Claude <noreply@anthropic.com>
Previously, configure_vchordrq would fail silently when probes was empty
(the default), preventing epsilon from being configured. Now each parameter
is handled independently with conditional execution, and configuration
errors fail fast instead of being swallowed.
This fixes the documented epsilon setting, which was impossible to use
under the default configuration.
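A hedged sketch of the new control flow, assuming vchordrq is tuned through
the vchordrq.probes and vchordrq.epsilon session settings (per the
VectorChord docs); the real function signature may differ:

```python
async def configure_vchordrq(conn, probes: str = "",
                             epsilon: float | None = None) -> None:
    # Each parameter is applied independently, so an empty probes value
    # (the default) no longer prevents epsilon from being set.
    # Values come from trusted server config, hence the f-strings.
    if probes:
        await conn.execute(f"SET vchordrq.probes = '{probes}'")
    if epsilon is not None:
        await conn.execute(f"SET vchordrq.epsilon = {epsilon}")
    # No blanket try/except: configuration errors now propagate
    # (fail fast) instead of being silently swallowed.
```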
• Remove premature ID normalization
• Add lookup mapping for node resolution
• Filter results by requested nodes only
• Improve error logging with workspace context
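A minimal illustration of the lookup-mapping-and-filter pattern the bullets
describe (shapes are hypothetical):

```python
def resolve_nodes(requested: list[str], fetched: list[dict]) -> dict[str, dict]:
    # Build an id -> node lookup once, then return only the nodes that
    # were actually requested, preserving the caller's order.
    lookup = {node["id"]: node for node in fetched}
    return {nid: lookup[nid] for nid in requested if nid in lookup}
```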
- Batch index existence checks into single query (16+ queries -> 1 query)
- Batch timestamp column checks into single query (8 queries -> 1 query)
- Batch field length checks into single query (5 queries -> 1 query)
Performance improvement: ~70-80% faster initialization (35s -> 5-10s)
Key optimizations:
1. check_tables(): Use ANY($1) to check all indexes at once
2. _migrate_timestamp_columns(): Batch all column type checks
3. _migrate_field_lengths(): Batch all field definition checks
All changes are backward compatible with no schema or API changes.
Reduces database round-trips by batching information_schema queries.
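The ANY($1) pattern in check_tables() roughly looks like this (pg_indexes is
the standard PostgreSQL catalog view; the function shape is assumed):

```python
import asyncpg

async def missing_indexes(conn: asyncpg.Connection,
                          expected: list[str]) -> set[str]:
    # One round-trip instead of one EXISTS query per index: fetch every
    # matching index name at once with = ANY($1).
    rows = await conn.fetch(
        "SELECT indexname FROM pg_indexes WHERE indexname = ANY($1::text[])",
        expected,
    )
    return set(expected) - {r["indexname"] for r in rows}
```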
- Add entity_chunks & relation_chunks storage
- Implement KEEP/FIFO limit strategies
- Update env.example with new settings
- Add migration for chunk tracking data
- Support all KV storage backends
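A hedged sketch of how the KEEP/FIFO strategies might bound a tracked chunk
list (semantics inferred from the strategy names; the real behavior may
differ):

```python
def apply_chunk_limit(chunk_ids: list[str], limit: int,
                      strategy: str = "FIFO") -> list[str]:
    # Assumes chunk_ids is ordered oldest -> newest.
    if len(chunk_ids) <= limit:
        return chunk_ids
    if strategy == "KEEP":
        return chunk_ids[:limit]   # keep the original chunks, drop newcomers
    return chunk_ids[-limit:]      # FIFO: evict the oldest to make room
```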
Prepared statement caching is disabled by setting
`statement_cache_size=0` in the `asyncpg` connection pool parameters.
This is necessary to prevent
`asyncpg.exceptions.InvalidSQLStatementNameError` when using
transaction-level connection poolers like Supabase Supavisor or
pgbouncer, which do not support prepared statements.
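For reference, the pool setting looks like this in asyncpg:

```python
import asyncpg

async def create_pg_pool(dsn: str) -> asyncpg.Pool:
    return await asyncpg.create_pool(
        dsn,
        # Disable the prepared-statement cache; transaction-level poolers
        # (Supavisor, pgbouncer) otherwise raise
        # asyncpg.exceptions.InvalidSQLStatementNameError.
        statement_cache_size=0,
    )
```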
- Store file_path in full_docs storage
- Update the PostgreSQL implementation by mapping file_path to doc_name
- Other storage implementations automatically handle the new field
- Add get_doc_by_file_path to all storages
- Skip processed files in scan operation
- Check duplicates in upload endpoints
- Check duplicates in text insert APIs
- Return status info in duplicate responses
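A minimal sketch of the lookup and the duplicate check (table and column
names are illustrative; PostgreSQL stores file_path in doc_name as noted
above):

```python
async def get_doc_by_file_path(conn, workspace: str, file_path: str):
    # PostgreSQL maps file_path onto the existing doc_name column.
    return await conn.fetchrow(
        "SELECT * FROM full_docs WHERE workspace = $1 AND doc_name = $2",
        workspace, file_path,
    )

async def check_duplicate(conn, workspace: str, file_path: str) -> dict | None:
    # Upload and text-insert endpoints call this first; a non-None result
    # is returned to the client as status info instead of reprocessing.
    row = await get_doc_by_file_path(conn, workspace, file_path)
    return dict(row) if row else None
```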
- Add get_popular_labels() method
- Add search_labels() with fuzzy matching
- Use native SQL for better performance
- Include proper scoring and ranking
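Fuzzy matching with scoring in native SQL might look like this (illustrative
schema; assumes the pg_trgm extension for similarity() and the % operator):

```python
SEARCH_LABELS_SQL = """
SELECT label, similarity(label, $1) AS score
FROM graph_labels                -- illustrative table
WHERE label % $1                 -- pg_trgm fuzzy match
ORDER BY score DESC, label
LIMIT $2
"""

async def search_labels(conn, query: str, limit: int = 20) -> list[str]:
    rows = await conn.fetch(SEARCH_LABELS_SQL, query, limit)
    return [r["label"] for r in rows]
```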
- Add support for reading vector_index_type, hnsw_m, hnsw_ef, and ivfflat_lists from config.ini
- Maintain backward compatibility with environment variables
- Update config.ini.example with new PostgreSQL vector index options
- Follow existing configuration priority: env vars > config.ini > defaults
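The priority chain for the new options, sketched with configparser (env var
names and the config.ini section are assumptions):

```python
import configparser
import os

config = configparser.ConfigParser()
config.read("config.ini")

def setting(env_key: str, option: str, default: str) -> str:
    # Priority: environment variable > config.ini > built-in default.
    return os.environ.get(env_key) or config.get("postgres", option,
                                                 fallback=default)

vector_index_type = setting("POSTGRES_VECTOR_INDEX_TYPE", "vector_index_type", "hnsw")
hnsw_m = int(setting("POSTGRES_HNSW_M", "hnsw_m", "16"))
```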
- Add missing query parameters (top_k, enable_rerank, max_tokens, etc.) to cache key generation in kg_query, naive_query, and extract_keywords_only functions
- Add queryparam field to CacheData structure and PostgreSQL storage for debugging
- Update PostgreSQL schema with automatic migration for queryparam JSONB column
- Prevent incorrect cache hits between queries with different parameters
Fixes an issue where queries with different parameters incorrectly shared
the same cached results.
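A hedged sketch of parameter-aware cache keys (hypothetical helper; the real
change touches kg_query, naive_query, and extract_keywords_only):

```python
import hashlib
import json

def build_cache_key(query: str, top_k: int, enable_rerank: bool,
                    max_tokens: int) -> str:
    # Every parameter that can change the answer goes into the hash, so
    # queries differing only in parameters no longer share a cache entry.
    queryparam = {"top_k": top_k, "enable_rerank": enable_rerank,
                  "max_tokens": max_tokens}
    payload = json.dumps({"query": query, **queryparam}, sort_keys=True)
    return hashlib.md5(payload.encode("utf-8")).hexdigest()
```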