fix: replace deprecated gemini-2.5-flash-lite-preview-06-17 with gemini-2.5-flash-lite
Updated all references to the deprecated Gemini model in:
- graphiti_core/llm_client/gemini_client.py
- graphiti_core/cross_encoder/gemini_reranker_client.py
- tests/llm_client/test_gemini_client.py
- README.md
This resolves 404 errors when using Gemini clients.
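For context, a minimal usage sketch with the replacement model name; the import paths and config fields below are assumptions based on the modules listed above rather than verified repo code:

```python
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.llm_client.gemini_client import GeminiClient

# gemini-2.5-flash-lite replaces the retired preview alias that caused the 404s
llm_client = GeminiClient(config=LLMConfig(api_key='...', model='gemini-2.5-flash-lite'))
```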
Fixes #1075
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Daniel Chalef <danielchalef@users.noreply.github.com>
* feat: enhance GeminiClient with max tokens management
- Introduced a mapping for maximum output tokens for various Gemini models.
- Added methods to resolve max tokens based on precedence rules, allowing for more flexible token management (sketched below).
- Updated tests to verify max tokens behavior, ensuring explicit parameters take precedence and fallback mechanisms work correctly.
This change improves the handling of token limits for different models, enhancing the client’s configurability and usability.
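A minimal sketch of that precedence logic, assuming a module-level mapping and a private resolver; the names `GEMINI_MODEL_MAX_TOKENS` and `_resolve_max_tokens`, and the limits shown, are illustrative rather than the exact values in graphiti_core:

```python
# Illustrative mapping of model name -> maximum output tokens (values are examples)
GEMINI_MODEL_MAX_TOKENS = {
    'gemini-2.5-flash': 65_536,
    'gemini-2.5-flash-lite': 65_536,
    'gemini-2.0-flash': 8_192,
}
DEFAULT_MAX_TOKENS = 8_192


def _resolve_max_tokens(explicit_max_tokens: int | None, model: str) -> int:
    """Resolve max output tokens: explicit argument > model-specific limit > default."""
    if explicit_max_tokens is not None:
        return explicit_max_tokens
    return GEMINI_MODEL_MAX_TOKENS.get(model, DEFAULT_MAX_TOKENS)
```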
* refactor: streamline max tokens retrieval in GeminiClient
- Removed the fallback to DEFAULT_MAX_TOKENS in favor of directly using model-specific maximum tokens.
- Simplified the logic for determining max tokens, enhancing code clarity and maintainability.
This change improves the efficiency of token management within the GeminiClient.
* feat(gemini): embedding batch size & lite default
The new `gemini-embedding-001` model only allows one embedding input (instance) per
batch, but its other specifications are impressive:
https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api
The DEFAULT_SMALL_MODEL value must not include the 'models/' prefix.
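A sketch of how an embedder might respect that one-input-per-batch limit using the `google-genai` SDK; the helper name `embed_batch` and the surrounding structure are hypothetical, not the actual graphiti embedder code:

```python
from google import genai

client = genai.Client()  # reads the API key from the environment


def embed_batch(texts: list[str], model: str = 'gemini-embedding-001') -> list[list[float]]:
    vectors: list[list[float]] = []
    for text in texts:  # one input per request, per the model's batch-size limit
        response = client.models.embed_content(model=model, contents=text)
        vectors.append(list(response.embeddings[0].values))
    return vectors
```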
* Refactor: Improve Gemini Client Error Handling and Reliability
This commit introduces several improvements to the Gemini client to enhance its robustness and reliability.
- Implemented more specific error handling for various Gemini API responses, including rate limits and safety blocks.
- Added a JSON salvaging mechanism to gracefully handle incomplete or malformed JSON responses from the API (sketched after this list).
- Introduced detailed logging for failed LLM generations to simplify debugging and troubleshooting.
- Refined the Gemini embedder to better handle empty or invalid embedding responses.
- Updated and corrected tests to align with the improved error handling and reliability features.
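The JSON salvaging step could look roughly like the sketch below: try a straight parse, then trim the tail and close unbalanced brackets until some prefix parses. The helper name and exact strategy are assumptions; the real client may salvage differently:

```python
import json


def _salvage_json(raw: str) -> dict | list | None:
    """Best-effort recovery of JSON from a truncated or malformed LLM response (hypothetical helper)."""
    raw = raw.strip().removeprefix('```json').removesuffix('```').strip()
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Walk back from the end; for each prefix, close whatever brackets are still open.
    for end in range(len(raw), 0, -1):
        candidate = raw[:end]
        closers, in_string, escaped = [], False, False
        for ch in candidate:
            if escaped:
                escaped = False
            elif ch == '\\':
                escaped = True
            elif ch == '"':
                in_string = not in_string
            elif not in_string:
                if ch in '{[':
                    closers.append('}' if ch == '{' else ']')
                elif ch in '}]' and closers:
                    closers.pop()
        if in_string:
            continue  # prefix ends mid-string; trim further
        try:
            return json.loads(candidate + ''.join(reversed(closers)))
        except json.JSONDecodeError:
            continue
    return None
```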
* fix: cleanup in _log_failed_generation()
* fix: cleanup in _log_failed_generation()
* Fix ruff B904 error in gemini_client.py
* fix(gemini): correct retry logic and enhance error logging
Updated the retry mechanism in the GeminiClient to ensure it retries the maximum number of times specified. Improved error logging to provide clearer insights when all retries are exhausted, including detailed information about the last error encountered.
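A minimal sketch of the corrected retry loop, under the assumption that the client attempts the call `1 + MAX_RETRIES` times and logs the final failure; the constant value and function names are illustrative:

```python
import logging

logger = logging.getLogger(__name__)

MAX_RETRIES = 2  # illustrative value


async def _generate_with_retries(generate, prompt):
    last_error: Exception | None = None
    for attempt in range(MAX_RETRIES + 1):
        try:
            return await generate(prompt)
        except Exception as e:  # the real client narrows this to retryable errors
            last_error = e
            logger.warning(
                'Gemini generation failed (attempt %d/%d): %s', attempt + 1, MAX_RETRIES + 1, e
            )
    logger.error('Max retries (%d) exceeded; last error: %s', MAX_RETRIES, last_error)
    raise last_error
```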
* fix(gemini): enhance error handling for safety blocks and update tests
Refined error handling in the GeminiClient to improve detection of safety block conditions. Updated test cases to reflect changes in exception messages and ensure proper retry logic is enforced. Enhanced mock responses in tests to better simulate real-world scenarios, including handling of invalid JSON responses.
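A hedged sketch of what safety-block detection could check on the `google-genai` response types; whether the real client inspects exactly these fields is an assumption:

```python
from google.genai import types


def _is_safety_blocked(response: types.GenerateContentResponse) -> bool:
    # Blocked at the prompt level, or the first candidate stopped for SAFETY.
    if response.prompt_feedback and response.prompt_feedback.block_reason:
        return True
    return bool(
        response.candidates
        and response.candidates[0].finish_reason == types.FinishReason.SAFETY
    )
```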
* revert default gemini to text-embedding-001
---------
Co-authored-by: Daniel Chalef <131175+danielchalef@users.noreply.github.com>
* add support for Gemini 2.5 model thinking budget
* allow adding thinking config to support current and future gemini models
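As an illustration of the thinking-budget plumbing, this is how a budget is passed through the `google-genai` SDK; how graphiti exposes it on `GeminiClient` (e.g. a constructor argument) is an assumption:

```python
from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents='Summarize the new episode...',
    config=types.GenerateContentConfig(
        # budget of reasoning tokens the model may spend before answering
        thinking_config=types.ThinkingConfig(thinking_budget=1024),
    ),
)
print(response.text)
```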
* merge
* improve client; add reranker
* refactor: change type hint for gemini_messages to Any for flexibility
* refactor: update GeminiRerankerClient to use direct relevance scoring and improve ranking logic. Add tests
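A rough sketch of direct relevance scoring: each passage is scored against the query independently and the results are sorted by score. The prompt wording, model choice, and async structure here are assumptions, not the actual GeminiRerankerClient implementation:

```python
import asyncio

from google import genai
from google.genai import types

client = genai.Client()


async def rerank(query: str, passages: list[str]) -> list[tuple[str, float]]:
    async def score(passage: str) -> float:
        response = await client.aio.models.generate_content(
            model='gemini-2.5-flash-lite',
            contents=(
                'Rate the relevance of the passage to the query from 0 to 100. '
                f'Respond with only the number.\n\nQuery: {query}\n\nPassage: {passage}'
            ),
            config=types.GenerateContentConfig(temperature=0.0),
        )
        try:
            return float((response.text or '').strip()) / 100.0
        except ValueError:
            return 0.0

    scores = await asyncio.gather(*(score(p) for p in passages))
    return sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
```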
* fix fixtures
---------
Co-authored-by: realugbun <github.disorder751@passmail.net>