refactor: optimize batch insert handling in PGVectorStorage
Changes made: - Updated the batch insert logic to use a dictionary for row values, improving clarity and ensuring compatibility with the database execution method. - Adjusted the insert query construction to utilize named parameters, enhancing readability and maintainability. Impact: - Streamlines the insertion process and reduces potential errors related to parameter binding. Testing: - Functionality remains intact; no new tests required as existing tests cover the insert operations.
This commit is contained in:
parent
722f639fa5
commit
01bdaac180
1 changed files with 6 additions and 6 deletions
|
|
@ -2326,20 +2326,20 @@ class PGVectorStorage(BaseVectorStorage):
|
|||
|
||||
# Insert batch into new table
|
||||
for row in rows:
|
||||
# Get column names and values
|
||||
columns = list(row.keys())
|
||||
values = list(row.values())
|
||||
# Get column names and values as dictionary (execute expects dict)
|
||||
row_dict = dict(row)
|
||||
|
||||
# Build insert query
|
||||
placeholders = ", ".join([f"${i+1}" for i in range(len(columns))])
|
||||
# Build insert query with named parameters
|
||||
columns = list(row_dict.keys())
|
||||
columns_str = ", ".join(columns)
|
||||
placeholders = ", ".join([f"${i+1}" for i in range(len(columns))])
|
||||
insert_query = f"""
|
||||
INSERT INTO {table_name} ({columns_str})
|
||||
VALUES ({placeholders})
|
||||
ON CONFLICT DO NOTHING
|
||||
"""
|
||||
|
||||
await db.execute(insert_query, values)
|
||||
await db.execute(insert_query, row_dict)
|
||||
|
||||
migrated_count += len(rows)
|
||||
logger.info(
|
||||
|
|
|
|||
Loading…
Add table
Reference in a new issue