- Changed to_prompt_json default from ensure_ascii=True to False
- Removed ensure_ascii parameter from Graphiti.__init__ and GraphitiClients
- Removed ensure_ascii from all function signatures and context dictionaries
- Removed ensure_ascii from all test files
- All JSON serialization now preserves Unicode characters by default
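A minimal sketch of what the helper looks like after this change; the real `to_prompt_json` lives in graphiti_core's prompt helpers and may accept additional arguments:

```python
import json
from typing import Any


def to_prompt_json(data: Any, indent: int | None = None) -> str:
    """Serialize prompt context without escaping non-ASCII characters."""
    # ensure_ascii=False is now the only behavior; the old flag is gone.
    return json.dumps(data, ensure_ascii=False, indent=indent)


to_prompt_json({'name': '田中太郎'})
# '{"name": "田中太郎"}' rather than the old '{"name": "\u7530\u4e2d\u592a\u90ce"}'
```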
* fix: Improve deduplication ID validation and logging
- Add comprehensive logging to verify IDs sent to LLM (sent vs received)
- Enhance prompt with explicit ID bounds (0 through N-1)
- Add validation warnings for missing and extra IDs from LLM responses
- Improve error message clarity for invalid dedupe IDs
- Log actual IDs sent to LLM to confirm no index leakage
This helps diagnose cases where the LLM returns IDs outside the valid
range (e.g., ID 19 when only 0-18 were sent).
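The validation and logging described above can be pictured with a small sketch; the function and variable names here are illustrative, not the actual graphiti helpers:

```python
import logging

logger = logging.getLogger(__name__)


def validate_dedupe_ids(sent_count: int, returned_ids: list[int]) -> list[int]:
    """Warn about missing/extra IDs and keep only IDs in the range 0..sent_count-1."""
    valid_range = set(range(sent_count))
    received = set(returned_ids)

    if missing := valid_range - received:
        logger.warning('LLM response is missing dedupe IDs: %s', sorted(missing))
    if extra := received - valid_range:
        logger.warning(
            'LLM returned dedupe IDs outside the valid range 0-%d: %s',
            sent_count - 1,
            sorted(extra),
        )
    # Drop out-of-range IDs rather than failing the whole dedupe pass.
    return [i for i in returned_ids if i in valid_range]
```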
* fix: Remove redundant logging parameter
Address reviewer comment about redundant third parameter in debug log statement.
* fix: Address reviewer comments on list slicing and prompt clarity
- Fix list slicing bug: change <= to < to avoid a gap when there are exactly 20 elements
  (previously element 10 was skipped when showing 21 elements; see the sketch after this list)
- Consolidate redundant prompt phrasing while maintaining clarity
(reduced from 3 sentences to 2, keeping essential constraints)
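A toy illustration of why the boundary matters when previewing a long list by head and tail; this is a hypothetical reconstruction, not the actual graphiti code:

```python
def head_tail_preview(items: list) -> list:
    # With 20 or fewer elements everything is shown; from 21 onward the middle
    # is intentionally elided. Getting the comparison wrong by one either
    # truncates a list that would have fit, or hides an element unexpectedly.
    if len(items) < 21:
        return items
    return items[:10] + items[-10:]


assert head_tail_preview(list(range(20))) == list(range(20))  # exactly 20: no gap
assert 10 not in head_tail_preview(list(range(21)))           # 21 elements: index 10 elided
```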
* fix: Remove redundant prompt text to reduce token usage
Consolidate 'using these exact IDs (0 through N-1)' with the following sentence
to eliminate repetition. Changes:
- Before: 'using these exact IDs (0 through {N-1}). Do not skip IDs or use IDs outside this range'
- After: 'with IDs 0 through {N-1}. Do not skip or add IDs'
Saves ~15 tokens per deduplication call.
Add NodeSummaryFilter callback parameter to extract_attributes_from_nodes
and extract_attributes_from_node functions, allowing consumers to
selectively skip summary regeneration for specific nodes.
This enables downstream applications to implement custom logic for
throttling or filtering which nodes should have summaries regenerated,
reducing unnecessary LLM calls and token costs.
Key changes:
- Add NodeSummaryFilter type alias: Callable[[EntityNode], Awaitable[bool]]
- Update extract_attributes_from_nodes with optional should_summarize_node parameter (usage sketched below)
- Update extract_attributes_from_node with conditional summary generation logic
- Add 5 comprehensive test cases covering callback functionality
- Maintain full backwards compatibility (default None = all summaries generated)
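A usage sketch assuming the documented `Callable[[EntityNode], Awaitable[bool]]` signature; the skip rule below is illustrative, not part of the library:

```python
from graphiti_core.nodes import EntityNode


async def should_summarize_node(node: EntityNode) -> bool:
    # Hypothetical throttle: skip regeneration for nodes that already have a
    # reasonably long summary, saving an LLM call per skipped node.
    return len(node.summary or '') < 200


# Then passed through to attribute extraction, e.g.:
# await extract_attributes_from_nodes(
#     clients, nodes, episode, previous_episodes,
#     should_summarize_node=should_summarize_node,
# )
```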
* Refactor deduplication logic to enhance node resolution and track duplicate pairs (#929)
* Simplify deduplication process in bulk_utils by reusing canonical nodes.
* Update dedup_helpers to store duplicate pairs during resolution.
* Modify node_operations to append duplicate pairs when resolving nodes.
* Add tests to verify deduplication behavior and ensure correct state updates.
* revert to concurrent dedup with fanout and then reconciliation
* add performance note for deduplication loop in bulk_utils
* enhance deduplication logic in bulk_utils to handle missing canonical nodes gracefully
* Update graphiti_core/utils/bulk_utils.py
* refactor deduplication logic in bulk_utils to use directed union-find for canonical UUID resolution
* implement _build_directed_uuid_map for efficient UUID resolution in bulk_utils
* document directed union-find lookup in bulk_utils for clarity
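`_build_directed_uuid_map` is named in the commits above; its body below is a hedged sketch of a directed union-find style lookup with path compression, assuming the duplicate edges form no cycles:

```python
def _build_directed_uuid_map(pairs: list[tuple[str, str]]) -> dict[str, str]:
    """Map every UUID to its canonical UUID by collapsing duplicate chains.

    `pairs` are (duplicate_uuid, canonical_uuid) edges, so chains such as
    a -> b -> c resolve so every key points directly at its final canonical node.
    """
    parent: dict[str, str] = {}
    for source, target in pairs:
        parent.setdefault(target, target)
        parent[source] = target

    def find(uuid: str) -> str:
        root = uuid
        while parent.get(root, root) != root:
            root = parent[root]
        # Path compression: point every node on the chain straight at the root.
        while parent.get(uuid, uuid) != root:
            parent[uuid], uuid = root, parent[uuid]
        return root

    return {uuid: find(uuid) for uuid in parent}


# _build_directed_uuid_map([('a', 'b'), ('b', 'c')]) == {'a': 'c', 'b': 'c', 'c': 'c'}
```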
* add repository guidelines and project structure documentation
* update neo4j image version and modify test command to disable specific databases
* implement deduplication helpers and integrate with node operations
* refactor string formatting to use single quotes in node operations
* enhance deduplication helpers with UUID indexing and update resolution logic
* implement exact fact matching (#931)
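Exact fact matching usually comes down to a normalized string comparison before falling back to the LLM; a minimal sketch of that idea, not the actual implementation in #931:

```python
def _normalize_fact(fact: str) -> str:
    return ' '.join(fact.lower().split())


def find_exact_fact_match(new_fact: str, existing_facts: list[str]) -> str | None:
    """Return an existing fact whose normalized text matches exactly, if any."""
    index = {_normalize_fact(f): f for f in existing_facts}
    return index.get(_normalize_fact(new_fact))
```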
* Add support for non-ASCII characters in LLM prompts
- Add ensure_ascii parameter to Graphiti class (default: True)
- Create to_prompt_json helper function for consistent JSON serialization
- Update all prompt files to use new helper function
- Preserve Korean/Japanese/Chinese characters when ensure_ascii=False
- Maintain backward compatibility with existing behavior
Fixes issue where non-ASCII characters were escaped as unicode sequences
in prompts, making them unreadable in LLM logs and potentially affecting
model understanding.
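The escaping the commit describes is plain `json.dumps` behavior, shown here for context:

```python
import json

context = {'name': '김철수', 'role': '엔지니어'}

# Default serialization escapes every non-ASCII character, which is how the
# prompts looked before this change:
json.dumps(context)
# '{"name": "\uae40\ucca0\uc218", "role": "\uc5d4\uc9c0\ub2c8\uc5b4"}'

# With ensure_ascii=False the original characters survive into the prompt:
json.dumps(context, ensure_ascii=False)
# '{"name": "김철수", "role": "엔지니어"}'
```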
* Remove unused json imports after replacing with to_prompt_json helper
- Fix ruff lint errors (F401) for unused json imports
- All prompt files now use to_prompt_json helper instead of json.dumps
- Maintains clean code style and passes lint checks
* Fix ensure_ascii propagation to all LLM calls
- Add ensure_ascii parameter to maintenance operation functions that were missing it
- Update function signatures in node_operations, community_operations, temporal_operations, and edge_operations
- Ensure all llm_client.generate_response calls receive proper ensure_ascii context
- Fix hardcoded ensure_ascii: True values that prevented non-ASCII character preservation
- Maintain backward compatibility with default ensure_ascii=True
- Complete the fix for issue #804 ensuring Korean/Japanese/Chinese characters are properly handled in LLM prompts
* Bump version from 0.9.0 to 0.9.1 in pyproject.toml and update google-genai dependency to >=0.1.0
* Bump version from 0.9.1 to 0.9.2 in pyproject.toml
* Update google-genai dependency version to >=0.8.0 in pyproject.toml
* update lock file
* Update pyproject.toml to version 0.9.3, restructure dependencies, and modify author format. Remove outdated Google API key note from README.md.
* upgrade poetry and ruff
* first cut
* Update dependencies and enhance README for optional LLM providers
- Bump aiohttp version from 3.11.14 to 3.11.16
- Update yarl version from 1.18.3 to 1.19.0
- Modify pyproject.toml to include optional extras for Anthropic, Groq, and Google Gemini
- Revise README.md to reflect new optional LLM provider installation instructions and clarify API key requirements
* Remove deprecated packages from poetry.lock and update content hash
- Removed cachetools, google-auth, google-genai, pyasn1, pyasn1-modules, rsa, and websockets from the lock file.
- Added new extras for anthropic, google-genai, and groq.
- Updated content hash to reflect changes.
* Refactor import paths for GeminiClient in README and __init__.py
- Updated import statement in README.md to reflect the new module structure for GeminiClient.
- Removed GeminiClient from the __all__ list in __init__.py as it is no longer directly imported.
* Refactor import paths for GeminiEmbedder in README and __init__.py
- Updated import statement in README.md to reflect the new module structure for GeminiEmbedder.
- Removed GeminiEmbedder and GeminiEmbedderConfig from the __all__ list in __init__.py as they are no longer directly imported.
* implement structured output
* bug fixes and typing
* inject schema for non-openai clients
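"Inject schema" here most likely means appending the Pydantic response model's JSON schema to the prompt for clients without native structured output; a hedged sketch of that pattern, not necessarily graphiti's exact code:

```python
import json

from pydantic import BaseModel


def inject_schema(prompt: str, response_model: type[BaseModel]) -> str:
    """Append the expected JSON schema so the model knows the output shape."""
    schema = json.dumps(response_model.model_json_schema(), indent=2)
    return f'{prompt}\n\nRespond with a JSON object matching this schema:\n{schema}'
```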
* correct datetime format
* remove List keyword
* Refactor node_operations.py to use updated prompt_library functions
* update example