Change the `to_prompt_json()` helper to default to minified JSON (no indentation) instead of 2-space indentation. This reduces token consumption in LLM prompts while preserving all necessary information (a sketch of the helper follows the list below).
- Changed default `indent` parameter from `2` to `None` in `prompt_helpers.py`
- Updated all prompt modules to remove explicit `indent=2` arguments
- Minor code formatting fixes in LLM clients
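A minimal sketch of the helper after this change, assuming it wraps `json.dumps` (the actual implementation in `prompt_helpers.py` may differ):

```python
import json
from typing import Any


def to_prompt_json(data: Any, indent: int | None = None) -> str:
    """Serialize data for inclusion in an LLM prompt.

    indent=None (the new default) emits minified JSON to save tokens;
    callers can still pass indent=2 to restore pretty-printing.
    ensure_ascii=False keeps non-ASCII text readable (see the
    ensure_ascii commits further down this log).
    """
    return json.dumps(data, indent=indent, ensure_ascii=False)
```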
* Refactor summary prompts to use character limit and prevent meta-commentary
- Changed summary length constraint from "8 sentences" to "250 characters" for more predictable output
- Created a reusable summary_instructions snippet in snippets.py with clear BAD/GOOD examples (sketched after this list)
- Added explicit instruction to output only factual content without meta-commentary
- Applied consistent formatting across extract_nodes.py and summarize_nodes.py
- Bumped version to 0.22.0pre2
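A hypothetical rendering of that snippet under the constraints listed above; the exact wording in `snippets.py` may differ:

```python
summary_instructions = """Keep the summary under 250 characters.
Output only factual content drawn from the provided context.
Do not add meta-commentary about the task or the summary itself.

BAD:  "Here is a concise summary: Alice works at Acme."
GOOD: "Alice works at Acme."
"""
```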
* Add copyright header to snippets.py
* Enforce shorter summaries with 8-sentence limit
Replace the 250-word limit with an 8-sentence limit for node summaries to improve conciseness. Also update the summarize_context prompt's system message to better reflect its dual purpose of generating summaries and attributes.
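A hedged sketch of how the updated system message might read; the actual wording in `summarize_nodes.py` may differ:

```python
summarize_context_system_prompt = (
    'You are an AI assistant that generates concise entity summaries '
    'and extracts entity attributes from the provided context. '
    'Limit each summary to at most 8 sentences.'
)
```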
* Update graphiti_core/prompts/summarize_nodes.py
* Bump version to 0.22.0pre1
* Update graphiti_core/prompts/summarize_nodes.py
* Remove ensure_ascii configuration parameter
- Changed to_prompt_json default from ensure_ascii=True to False
- Removed ensure_ascii parameter from Graphiti.__init__ and GraphitiClients
- Removed ensure_ascii from all function signatures and context dictionaries
- Removed ensure_ascii from all test files
- All JSON serialization now preserves Unicode characters by default (see the example below)
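The behavioral difference, shown with plain `json.dumps`:

```python
import json

record = {'name': '김철수', 'city': '서울'}

# Old default: non-ASCII characters escaped as \uXXXX sequences.
print(json.dumps(record, ensure_ascii=True))   # {"name": "\uae40\ucca0\uc218", ...}

# New default: Unicode preserved as-is.
print(json.dumps(record, ensure_ascii=False))  # {"name": "김철수", "city": "서울"}
```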
* Format code
* Add support for non-ASCII characters in LLM prompts
- Add ensure_ascii parameter to Graphiti class (default: True)
- Create to_prompt_json helper function for consistent JSON serialization
- Update all prompt files to use new helper function
- Preserve Korean/Japanese/Chinese characters when ensure_ascii=False
- Maintain backward compatibility with existing behavior
Fixes an issue where non-ASCII characters were escaped as Unicode sequences
in prompts, making them unreadable in LLM logs and potentially affecting
model understanding.
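Opt-in usage as described in this commit (the parameter was later removed in favor of always preserving Unicode); the connection arguments here are illustrative:

```python
from graphiti_core import Graphiti

# ensure_ascii=True keeps the old escaping behavior (backward compatible);
# ensure_ascii=False preserves Korean/Japanese/Chinese characters in prompts.
graphiti = Graphiti('bolt://localhost:7687', 'neo4j', 'password', ensure_ascii=False)
```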
* Remove unused json imports after replacing with to_prompt_json helper
- Fix ruff lint errors (F401) for unused json imports
- All prompt files now use the to_prompt_json helper instead of json.dumps (illustrated below)
- Maintain a clean code style and pass lint checks
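The shape of that cleanup; the helper's import path is assumed:

```python
# Before: replacing json.dumps(...) left `import json` unused, which ruff
# flags as F401.
# import json
# context_str = json.dumps(context, indent=2)

# After: the shared helper handles serialization.
from graphiti_core.prompts.prompt_helpers import to_prompt_json

context = {'entities': []}
context_str = to_prompt_json(context)
```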
* Fix ensure_ascii propagation to all LLM calls
- Add ensure_ascii parameter to maintenance operation functions that were missing it
- Update function signatures in node_operations, community_operations, temporal_operations, and edge_operations
- Ensure all llm_client.generate_response calls receive the proper ensure_ascii context (sketched below)
- Fix hardcoded ensure_ascii: True values that prevented non-ASCII character preservation
- Maintain backward compatibility with default ensure_ascii=True
- Complete the fix for issue #804, ensuring Korean/Japanese/Chinese characters are properly handled in LLM prompts
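An illustrative sketch of the propagation pattern; the function and context fields are assumptions based on the modules named above, not the actual graphiti_core signatures:

```python
from graphiti_core.prompts import prompt_library  # assumed import path


async def summarize_node(llm_client, node, ensure_ascii: bool = True):
    # ensure_ascii flows from the caller into the prompt context instead of
    # being hardcoded to True, so non-ASCII text survives serialization.
    context = {
        'node_name': node.name,
        'node_summary': node.summary,
        'ensure_ascii': ensure_ascii,
    }
    messages = prompt_library.summarize_nodes.summarize_context(context)
    return await llm_client.generate_response(messages)
```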
* Implement structured output
* Bug fixes and typing
* Inject schema for non-OpenAI clients
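A hedged sketch of what schema injection might look like for clients without native structured-output support (an assumption, not the actual implementation):

```python
import json

from pydantic import BaseModel


def inject_schema(prompt: str, response_model: type[BaseModel]) -> str:
    # Append the expected JSON schema so non-OpenAI models know the
    # response shape even without a structured-output API.
    schema = json.dumps(response_model.model_json_schema())
    return f'{prompt}\n\nRespond with a JSON object matching this schema:\n{schema}'
```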
* Correct datetime format
* Remove the `List` keyword
* Refactor node_operations.py to use updated prompt_library functions
* Update example