graphiti/graphiti_core/llm_client
Daniel Chalef 9e78890f2e
Gemini support (#324)
* first cut

* Update dependencies and enhance README for optional LLM providers

- Bump aiohttp version from 3.11.14 to 3.11.16
- Update yarl version from 1.18.3 to 1.19.0
- Modify pyproject.toml to include optional extras for Anthropic, Groq, and Google Gemini
- Revise README.md to reflect new optional LLM provider installation instructions and clarify API key requirements
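With providers split into extras, installation might look like the following (extra names `anthropic`, `groq`, and `google-genai` are taken from this commit; the `graphiti-core` package name is assumed from the project):

```shell
# Base install (no optional LLM providers)
pip install graphiti-core

# With the Google Gemini extra added by this commit
pip install "graphiti-core[google-genai]"

# Several provider extras at once
pip install "graphiti-core[anthropic,groq,google-genai]"
```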

* Remove newly optional packages from poetry.lock and update content hash

- Removed cachetools, google-auth, google-genai, pyasn1, pyasn1-modules, rsa, and websockets from the lock file.
- Added new extras for anthropic, google-genai, and groq.
- Updated content hash to reflect changes.

* Refactor import paths for GeminiClient in README and __init__.py

- Updated import statement in README.md to reflect the new module structure for GeminiClient.
- Removed GeminiClient from the __all__ list in __init__.py as it is no longer directly imported.

* Refactor import paths for GeminiEmbedder in README and __init__.py

- Updated import statement in README.md to reflect the new module structure for GeminiEmbedder.
- Removed GeminiEmbedder and GeminiEmbedderConfig from the __all__ list in __init__.py as they are no longer directly imported.
2025-04-06 09:27:04 -07:00
__init__.py Fix llm client retry (#102) 2024-09-10 08:15:27 -07:00
anthropic_client.py Add MCP Server (#301) 2025-03-24 17:08:19 -07:00
client.py Add MCP Server (#301) 2025-03-24 17:08:19 -07:00
config.py update rate limits (#316) 2025-04-02 11:43:34 -04:00
errors.py Implement OpenAI Structured Output (#225) 2024-12-05 07:03:18 -08:00
gemini_client.py Gemini support (#324) 2025-04-06 09:27:04 -07:00
groq_client.py Set max tokens by prompt (#255) 2025-01-24 10:14:49 -05:00
openai_client.py Set max tokens by prompt (#255) 2025-01-24 10:14:49 -05:00
openai_generic_client.py Set max tokens by prompt (#255) 2025-01-24 10:14:49 -05:00
utils.py update new names with input_data (#204) 2024-10-29 11:03:31 -04:00