* graphiti-graph-name
* fix-lint
* fix-unittest
* clone-update
* groupid-none
* groupid-def-fulltext
* lint
* Remove redundant function definition for fulltext_query in search_utils.py
* Refactor get_default_group_id function and remove redundant code in falkordb_driver and search_utils. Added import statement in driver.py.
* Refactor test cases in test_falkordb_driver.py for improved readability by consolidating multi-line assertions into single lines. No functional changes made.
* Refactor fulltext_query function in search_utils.py to use double quotes for group_id in the filter list, enhancing consistency in query syntax.
* Remove duplicate assignment of fuzzy_query in episode_fulltext_search function in search_utils.py to eliminate redundancy.
* Remove duplicate assignment of fuzzy_query in community_fulltext_search function in search_utils.py to streamline code.
---------
Co-authored-by: Gal Shubeli <galshubeli93@gmail.com>
* feat: enhance GeminiClient with max tokens management
- Introduced a mapping for maximum output tokens for various Gemini models.
- Added methods to resolve max tokens based on precedence rules, allowing for more flexible token management.
- Updated tests to verify max tokens behavior, ensuring explicit parameters take precedence and fallback mechanisms work correctly.
This change improves the handling of token limits for different models, enhancing the client’s configurability and usability.
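A minimal sketch of that precedence idea (names and limits here are illustrative, not the actual GeminiClient attributes):

# Illustrative sketch only: hypothetical model-to-limit mapping and resolution order.
GEMINI_MODEL_MAX_TOKENS = {
    'gemini-2.5-flash': 65_536,
    'gemini-2.0-flash': 8_192,
}
DEFAULT_MAX_TOKENS = 8_192  # hypothetical fallback

def resolve_max_tokens(explicit: int | None, config_max: int | None, model: str) -> int:
    """Resolve max output tokens: explicit argument > config value > model limit > default."""
    if explicit is not None:
        return explicit
    if config_max is not None:
        return config_max
    return GEMINI_MODEL_MAX_TOKENS.get(model, DEFAULT_MAX_TOKENS)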
* refactor: streamline max tokens retrieval in GeminiClient
- Removed the fallback to DEFAULT_MAX_TOKENS in favor of directly using model-specific maximum tokens.
- Simplified the logic for determining max tokens, enhancing code clarity and maintainability.
This change improves the efficiency of token management within the GeminiClient.
* feat(gemini): embedding batch size & lite default
The new `gemini-embedding-001` model only allows one embedding input per batch
(instance), but has other impressive statistics:
https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api
The `DEFAULT_SMALL_MODEL` must not have the 'models/' prefix.
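A rough sketch of what a per-model batch limit can look like on the embedder side (names are assumptions, not the real embedder API):

# Hypothetical sketch: chunk inputs so a model that accepts only one input per
# request (e.g. gemini-embedding-001) shares the same code path as larger batches.
MODEL_BATCH_LIMITS = {'gemini-embedding-001': 1}
DEFAULT_BATCH_LIMIT = 100  # hypothetical default

def batched(texts: list[str], model: str) -> list[list[str]]:
    limit = MODEL_BATCH_LIMITS.get(model, DEFAULT_BATCH_LIMIT)
    return [texts[i : i + limit] for i in range(0, len(texts), limit)]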
* Refactor: Improve Gemini Client Error Handling and Reliability
This commit introduces several improvements to the Gemini client to enhance its robustness and reliability.
- Implemented more specific error handling for various Gemini API responses, including rate limits and safety blocks.
- Added a JSON salvaging mechanism to gracefully handle incomplete or malformed JSON responses from the API.
- Introduced detailed logging for failed LLM generations to simplify debugging and troubleshooting.
- Refined the Gemini embedder to better handle empty or invalid embedding responses.
- Updated and corrected tests to align with the improved error handling and reliability features.
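One way the salvage step can be sketched (an assumption about the approach, not the actual implementation):

import json
import re

def salvage_json(raw: str) -> dict | None:
    """Best-effort recovery of a JSON object from a truncated or wrapped response."""
    # Strip common markdown fencing around the payload.
    cleaned = re.sub(r'^```(?:json)?\s*|\s*```$', '', raw.strip())
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        pass
    # Fall back to the outermost braces in case the model added prose around the JSON.
    start, end = cleaned.find('{'), cleaned.rfind('}')
    if start != -1 and end > start:
        try:
            return json.loads(cleaned[start : end + 1])
        except json.JSONDecodeError:
            return None
    return None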
* fix: cleanup in _log_failed_generation()
* fix: cleanup in _log_failed_generation()
* Fix ruff B904 error in gemini_client.py
* fix(gemini): correct retry logic and enhance error logging
Updated the retry mechanism in the GeminiClient to ensure it retries the maximum number of times specified. Improved error logging to provide clearer insights when all retries are exhausted, including detailed information about the last error encountered.
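The intended shape of the retry loop, sketched with hypothetical names (MAX_RETRIES and the broad except are assumptions; the real client narrows the error types):

import logging

logger = logging.getLogger(__name__)
MAX_RETRIES = 4  # hypothetical constant

async def generate_with_retries(call, prompt):
    last_error: Exception | None = None
    for attempt in range(1, MAX_RETRIES + 1):  # retry the full MAX_RETRIES times
        try:
            return await call(prompt)
        except Exception as e:
            last_error = e
            logger.warning('Gemini generation failed (attempt %d/%d): %s', attempt, MAX_RETRIES, e)
    logger.error('All %d retries exhausted; last error: %s', MAX_RETRIES, last_error)
    raise last_error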
* fix(gemini): enhance error handling for safety blocks and update tests
Refined error handling in the GeminiClient to improve detection of safety block conditions. Updated test cases to reflect changes in exception messages and ensure proper retry logic is enforced. Enhanced mock responses in tests to better simulate real-world scenarios, including handling of invalid JSON responses.
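A hedged sketch of what the safety-block check can look like against a google-genai response object (attribute names are assumptions based on the SDK, checked defensively):

def is_safety_blocked(response) -> bool:
    """Heuristic check for a response blocked by safety filters (assumed response shape)."""
    feedback = getattr(response, 'prompt_feedback', None)
    if feedback is not None and getattr(feedback, 'block_reason', None):
        return True
    for candidate in getattr(response, 'candidates', None) or []:
        if 'SAFETY' in str(getattr(candidate, 'finish_reason', '')):
            return True
    return False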
* revert default gemini to text-embedding-001
---------
Co-authored-by: Daniel Chalef <131175+danielchalef@users.noreply.github.com>
* docs: add comprehensive database configuration instructions to README
Add detailed instructions for custom database configuration using graph drivers:
- Neo4j with custom database name
- FalkorDB with custom database name
- Best practices for using graph drivers
- Environment variable configuration examples
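Roughly the kind of usage those instructions describe, sketched here (constructor arguments are assumptions; the README has the authoritative version):

from graphiti_core import Graphiti
from graphiti_core.driver.neo4j_driver import Neo4jDriver

# Assumed shape: the driver carries its own database name instead of a global
# DEFAULT_DATABASE environment variable.
driver = Neo4jDriver(
    uri='bolt://localhost:7687',
    user='neo4j',
    password='password',
    database='my_custom_db',  # hypothetical custom database name
)
graphiti = Graphiti(graph_driver=driver)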
Resolves #702
Co-authored-by: Daniel Chalef <danielchalef@users.noreply.github.com>
* Update README.md
---------
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Daniel Chalef <danielchalef@users.noreply.github.com>
* fix: remove global DEFAULT_DATABASE usage in favor of driver-specific
config
Fixes bugs introduced in PR #607. This removes reliance on the global
DEFAULT_DATABASE environment variable and specifies the database within
each driver instead. PR #607 introduced a Neo4j compatibility issue, as
the database names differ when attempting to support FalkorDB.
This refactor improves compatibility across database types and ensures
future reliability by isolating the configuration to the driver level.
* fix: make falkordb support optional
This ensures that the optional dependency and its import comply with the graphiti-core project dependencies.
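The usual pattern for keeping such an import optional, as a sketch (the actual guard and error message in falkordb_driver.py may differ):

try:
    from falkordb import FalkorDB  # optional dependency
except ImportError:
    raise ImportError(
        'falkordb is required for the FalkorDB driver. '
        'Install it with: pip install graphiti-core[falkordb]'
    ) from None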
* chore: fmt code
* chore: undo changes to uv.lock
* fix: undo potentially breaking changes to drive interface
* fix: ensure a default database of "None" is provided - falling back to internal default
* chore: ensure default value exists for session and delete_all_indexes
* chore: fix typos and grammar
* chore: update package versions and dependencies in uv.lock and bulk_utils.py
* docs: update database configuration instructions for Neo4j and FalkorDB
Clarified default database names and how to override them in driver constructors. Updated testing requirements to include specific commands for running integration and unit tests.
* fix: ensure params defaults to an empty dictionary in Neo4jDriver
Updated the execute_query method to initialize params as an empty dictionary if not provided, ensuring compatibility with the database configuration.
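The fix amounts to the usual default-argument guard, roughly (signature details are assumptions from the commit message):

async def execute_query(cypher_query: str, params: dict | None = None) -> dict:
    if params is None:
        params = {}  # guarantee a dict so query parameters can always be unpacked
    return params  # the real method forwards **params to the Neo4j session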
---------
Co-authored-by: Urmzd <urmzd@dal.ca>
* feat: add template compliance check for issues and pull requests
* drop review action
* fix: update linting and type checking references in CLAUDE.md
* The cross_encoder for Gemini already supported passing in a custom client.
I replicated the same input pattern for the embedder and llm_client.
The value is that you can support custom API endpoints and other options, like below:
cross_encoder=GeminiRerankerClient(
    client=genai.Client(
        api_key=os.environ.get('GOOGLE_GENAI_API_KEY'),
        http_options=types.HttpOptions(api_version='v1alpha')),
    config=LLMConfig(
        model="gemini-2.5-flash-lite-preview-06-17"
    )
)
* Updated the error and success responses in various functions to use structured response classes (ErrorResponse, SuccessResponse, FactSearchResponse, EpisodeSearchResponse, StatusResponse) for improved consistency and clarity. Incremented the version in pyproject.toml to 0.2.1.
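The shape of those response classes, sketched as typed dictionaries (an assumption about the approach; the real definitions may differ in fields or use Pydantic models):

from typing import Any, TypedDict

class ErrorResponse(TypedDict):
    error: str

class SuccessResponse(TypedDict):
    message: str

class FactSearchResponse(TypedDict):
    message: str
    facts: list[dict[str, Any]]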