feat: MCP Server v1.0.0rc0 - Complete refactoring with modular architecture

This is a major refactoring of the MCP Server to support multiple providers through a YAML-based configuration system with a factory-pattern implementation.

## Key Changes

### Architecture Improvements
- Modular configuration system with YAML-based settings
- Factory pattern for LLM, Embedder, and Database providers
- Support for multiple database backends (Neo4j, FalkorDB, KuzuDB)
- Clean separation of concerns with dedicated service modules

### Provider Support
- LLM: OpenAI, Anthropic, Gemini, Groq
- Embedders: OpenAI, Voyage, Gemini, Anthropic, Sentence Transformers
- Databases: Neo4j, FalkorDB, KuzuDB (new default)
- Azure OpenAI support with AD authentication

### Configuration
- YAML configuration with environment variable expansion
- CLI argument overrides for runtime configuration
- Multiple pre-configured Docker Compose setups
- Proper boolean handling in environment variables

### Testing & CI
- Comprehensive test suite with unit and integration tests
- GitHub Actions workflows for linting and testing
- Multi-database testing support

### Docker Support
- Updated Docker images with multi-stage builds
- Database-specific docker-compose configurations
- Persistent volume support for all databases

### Bug Fixes
- Fixed KuzuDB connectivity checks
- Corrected Docker command paths
- Improved error handling and logging
- Fixed boolean environment variable expansion

## Follow-up commits

* fix: Improve MCP server configuration and initialization — remove hardcoded OpenAI API key checks and let factories handle provider-specific validation; load .env from the mcp_server directory first; change the default transport from stdio to SSE for broader compatibility; warn instead of failing when client initialization fails; model already defaults to gpt-4o as requested.
* chore: Update default transport from SSE to HTTP — SSE is deprecated, so configuration files, Docker Compose commands, and comments now use the recommended HTTP transport.
* chore: Update the default OpenAI model to gpt-4o-mini across configuration files for better cost efficiency.
* fix: Correct the default OpenAI model to gpt-4.1, the latest GPT-4-series model.
* fix: Update the hardcoded default model in schema.py from gpt-4o to gpt-4.1 and resolve the default config path to config/config.yaml relative to the mcp_server directory.
* feat: Add detailed server URL logging — show the exact URLs to connect to (localhost instead of 0.0.0.0), the MCP endpoint, transport type, and status information, with visual separators in the logs.
* fix: Correct the MCP HTTP endpoint path from / to /mcp/ and remove the incorrect /status endpoint reference, aligning with FastMCP documentation.
* fix: Configure a consistent logging format between uvicorn and the MCP server, remove timestamps from the custom logger, and suppress verbose MCP and uvicorn access logs.
* fix: Improve the test runner to load API keys from .env, fix a duplicate os import, and add helpful prerequisite checks and error messages.
* fix: Fix all linting errors in the test suite (bare except clauses, unused imports and variables, modern type-hint syntax, ruff formatting).
* fix: Use contextlib.suppress instead of try-except-pass in test_async_operations.py and test_fixtures.py (ruff SIM105).
* fix: Move the README back to the mcp_server root folder for better discoverability.
* docs: Update the README with a comprehensive feature list, document Kuzu as the default database, add per-database run instructions and Docker Compose examples, switch transport references from SSE to HTTP, use the /mcp/ endpoint in client configurations, and clarify that external databases are optional.
* docs: Address README review comments — shorten the Kuzu description, update the Ollama model example to gpt-oss:120b, restore the Azure OpenAI environment variable documentation, remove Docker implementation details irrelevant to container users, and clarify that mcp-remote supports both HTTP and SSE transports.
* docs: Remove the SSE transport reference from the Claude Desktop section; the server now only uses HTTP transport.
* docs: Remove telemetry from the features list; it is a data-collection notice, not a feature.
* feat: Update the default embedding model from text-embedding-ada-002 to text-embedding-3-small in config/config.yaml and the README.
* fix: Resolve database connection and episode processing errors — special-case the KuzuDB connectivity check (session.run() returns None, breaking async iteration), rename the episode_type parameter to source to match the Graphiti API, and add the required reference_time parameter.
* fix: Use timezone.utc instead of UTC for Python 3.10 compatibility (the UTC constant was added in Python 3.11).
* fix: Convert entity_types from a list to a dict mapping entity names to Pydantic model classes, as expected by Graphiti's add_episode(); fixes "'list' object has no attribute 'items'".
* fix: Remove the protected 'name' attribute from dynamically created entity type models; the type name is already the class name and dict key.
* Remove the User and Assistant exception from Preference prioritization.
* Add a combined FalkorDB + MCP server Docker image — Dockerfile.falkordb-combined extends the official FalkorDB image, a startup script runs both services, docker-compose-falkordb-combined.yml and README-falkordb-combined.md are added, and the main README documents it as a single-container option for development and single-node deployments.
* Fix the Dockerfile syntax version (set to 1) and use Python 3.11 from Debian Bookworm, which meets the project's >=3.10 requirement.
* Fix the combined FalkorDB image to run both services — override the FalkorDB ENTRYPOINT, use the correct module path /var/lib/falkordb/bin/falkordb.so, add config-docker-falkordb-combined.yaml with a localhost URI, and create /var/lib/falkordb/data for persistence.
* Fix the container health check to only verify FalkorDB (redis-cli ping), removing the non-existent /health endpoint check that caused 404 errors.
* Replace Kuzu with FalkorDB as the default database (BREAKING CHANGE: Kuzu is no longer supported) — the combined FalkorDB image and compose file become the defaults, config.yaml points at localhost:6379, Kuzu is removed from pyproject.toml and the README, and running from the command line now requires an external FalkorDB instance.
* Complete the Kuzu removal across test fixtures, the test runner, integration tests, and documentation; FalkorDB (via the combined container) is now the default backend.
* Fix an Anthropic client temperature type error — build message-creation parameters as a dictionary and omit temperature when it is None instead of passing None.
* Fix critical PR #1024 review issues — convert boolean-like environment strings (true/false/1/0/yes/no/on/off) to real booleans in _expand_env_vars, move pytest to dev dependencies and azure-identity to an optional [azure] group, make the azure-identity import conditional with a helpful ImportError, and fix the JSON escaping in the add_memory docstring example; the Docker build context issue was verified as already fixed.
* Add comprehensive SEMAPHORE_LIMIT documentation to graphiti_mcp_server.py, README.md, config.yaml, and .env.example, with provider-specific tuning guidance, symptoms of misconfiguration, and monitoring recommendations for avoiding 429 rate limit errors.
* docs: Add a current LLM model reference to CLAUDE.md documenting valid OpenAI (gpt-5 and gpt-4.1 families, legacy gpt-4o), Anthropic (Claude 3.7/3.5/3), and Google Gemini (2.5/2.0/1.5) model names as of January 2025.
* refactor: Remove a duplicate is_reasoning_model calculation in factories.py.
* fix: Change the default transport to http and mark SSE as deprecated.
* fix: Handle the default config path (config/config.yaml) and return None instead of False for empty environment variables, preventing validation errors on unset optional fields.
* fix: Allow None for episode_id_prefix and convert it to an empty string in model_post_init for backward compatibility.
* feat: Add clear, provider-specific error messages for database connection failures (FalkorDB and Neo4j), including startup instructions.
* fix: Remove the obsolete KuzuDB check from the status endpoint and report the configured database provider directly.
* fix: Use the service config instead of the global config in the status endpoint so it reports the actual running database.
* feat: Add Dockerfile.standalone for external database deployments and update the Neo4j and FalkorDB compose files to use it, fixing the Neo4j compose starting an embedded FalkorDB.
* refactor: Unify the standalone image to ship both Neo4j and FalkorDB drivers, selected by the config file at runtime, with a build-standalone.sh script and versioned image tags.
* fix: Correct config file paths in the compose files to /app/mcp/config/config.yaml to match WORKDIR /app/mcp in Dockerfile.standalone.
* feat: Add a /health endpoint via @mcp.custom_route for Docker healthchecks and point the Dockerfile.standalone healthcheck at it, eliminating 404s in the logs.
* feat: Log the custom entity types loaded from config during GraphitiService initialization to help debug ontology issues.
* fix: Correct the copy-pasted log message for default entity types and add the missing else-clause logging for the embedder client.
* fix: Return a Starlette JSONResponse from the health check endpoint instead of a plain dict, fixing "TypeError: 'dict' object is not callable".
* feat: Return complete node properties (attributes, labels, group_id) from node search via a new format_node_result() helper while stripping all embedding vectors (name_embedding, fact_embedding, and any attribute keys containing "embedding").

Co-authored-by: Claude <noreply@anthropic.com>
Graphiti
Build Real-Time Knowledge Graphs for AI Agents
⭐ Help us reach more developers and grow the Graphiti community. Star this repo!
Tip
Check out the new MCP server for Graphiti! Give Claude, Cursor, and other MCP clients powerful Knowledge Graph-based memory.
Graphiti is a framework for building and querying temporally-aware knowledge graphs, specifically tailored for AI agents operating in dynamic environments. Unlike traditional retrieval-augmented generation (RAG) methods, Graphiti continuously integrates user interactions, structured and unstructured enterprise data, and external information into a coherent, queryable graph. The framework supports incremental data updates, efficient retrieval, and precise historical queries without requiring complete graph recomputation, making it suitable for developing interactive, context-aware AI applications.
Use Graphiti to:
- Integrate and maintain dynamic user interactions and business data.
- Facilitate state-based reasoning and task automation for agents.
- Query complex, evolving data with semantic, keyword, and graph-based search methods.
A knowledge graph is a network of interconnected facts, such as "Kendra loves Adidas shoes." Each fact is a "triplet" represented by two entities, or nodes ("Kendra", "Adidas shoes"), and their relationship, or edge ("loves"). Knowledge Graphs have been explored extensively for information retrieval. What makes Graphiti unique is its ability to autonomously build a knowledge graph while handling changing relationships and maintaining historical context.
Graphiti and Zep's Context Engineering Platform.
Graphiti powers the core of Zep, a turn-key context engineering platform for AI Agents. Zep offers agent memory, Graph RAG for dynamic data, and context retrieval and assembly.
Using Graphiti, we've demonstrated Zep is the State of the Art in Agent Memory.
Read our paper: Zep: A Temporal Knowledge Graph Architecture for Agent Memory.
We're excited to open-source Graphiti, believing its potential reaches far beyond AI memory applications.
Zep vs Graphiti
| Aspect | Zep | Graphiti |
|---|---|---|
| What they are | Complete managed platform for AI memory | Open-source graph framework |
| User & conversation management | Built-in users, threads, and message storage | Build your own |
| Retrieval & performance | Pre-configured, production-ready retrieval with sub-200ms performance at scale | Custom implementation required; performance depends on your setup |
| Developer tools | Dashboard with graph visualization, debug logs, API logs; SDKs for Python, TypeScript, and Go | Build your own tools |
| Enterprise features | SLAs, support, security guarantees | Self-managed |
| Deployment | Fully managed or in your cloud | Self-hosted only |
When to choose which
Choose Zep if you want a turnkey, enterprise-grade platform with security, performance, and support baked in.
Choose Graphiti if you want a flexible OSS core and you're comfortable building/operating the surrounding system.
Why Graphiti?
Traditional RAG approaches often rely on batch processing and static data summarization, making them inefficient for frequently changing data. Graphiti addresses these challenges by providing:
- Real-Time Incremental Updates: Immediate integration of new data episodes without batch recomputation.
- Bi-Temporal Data Model: Explicit tracking of event occurrence and ingestion times, allowing accurate point-in-time queries.
- Efficient Hybrid Retrieval: Combines semantic embeddings, keyword (BM25), and graph traversal to achieve low-latency queries without reliance on LLM summarization.
- Custom Entity Definitions: Flexible ontology creation and support for developer-defined entities through straightforward Pydantic models (see the sketch after this list).
- Scalability: Efficiently manages large datasets with parallel processing, suitable for enterprise environments.
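To illustrate, a custom entity type is just a Pydantic model, and entity types are passed to add_episode as a dict mapping type names to model classes. The class and field names below are hypothetical examples, not part of the library:
from pydantic import BaseModel, Field

class Preference(BaseModel):
    """A user preference for a product, brand, or style."""
    category: str | None = Field(None, description="What the preference applies to")
    sentiment: str | None = Field(None, description="Whether the preference is positive or negative")

# Entity types are supplied as a name -> model-class mapping
entity_types = {"Preference": Preference}

# await graphiti.add_episode(..., entity_types=entity_types)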
Graphiti vs. GraphRAG
| Aspect | GraphRAG | Graphiti |
|---|---|---|
| Primary Use | Static document summarization | Dynamic data management |
| Data Handling | Batch-oriented processing | Continuous, incremental updates |
| Knowledge Structure | Entity clusters & community summaries | Episodic data, semantic entities, communities |
| Retrieval Method | Sequential LLM summarization | Hybrid semantic, keyword, and graph-based search |
| Adaptability | Low | High |
| Temporal Handling | Basic timestamp tracking | Explicit bi-temporal tracking |
| Contradiction Handling | LLM-driven summarization judgments | Temporal edge invalidation |
| Query Latency | Seconds to tens of seconds | Typically sub-second latency |
| Custom Entity Types | No | Yes, customizable |
| Scalability | Moderate | High, optimized for large datasets |
Graphiti is specifically designed to address the challenges of dynamic and frequently updated datasets, making it particularly suitable for applications requiring real-time interaction and precise historical queries.
Installation
Requirements:
- Python 3.10 or higher
- Neo4j 5.26 / FalkorDB 1.1.2 / Kuzu 0.11.2 / Amazon Neptune Database Cluster or Neptune Analytics Graph + Amazon OpenSearch Serverless collection (serves as the full text search backend)
- OpenAI API key (Graphiti defaults to OpenAI for LLM inference and embedding)
Important
Graphiti works best with LLM services that support Structured Output (such as OpenAI and Gemini). Using other services may result in incorrect output schemas and ingestion failures. This is particularly problematic when using smaller models.
Optional:
- Google Gemini, Anthropic, or Groq API key (for alternative LLM providers)
Tip
The simplest way to install Neo4j is via Neo4j Desktop. It provides a user-friendly interface to manage Neo4j instances and databases. Alternatively, you can use FalkorDB on-premises via Docker and instantly start with the quickstart example:
docker run -p 6379:6379 -p 3000:3000 -it --rm falkordb/falkordb:latest
pip install graphiti-core
or
uv add graphiti-core
Installing with FalkorDB Support
If you plan to use FalkorDB as your graph database backend, install with the FalkorDB extra:
pip install graphiti-core[falkordb]
# or with uv
uv add graphiti-core[falkordb]
Installing with Kuzu Support
If you plan to use Kuzu as your graph database backend, install with the Kuzu extra:
pip install graphiti-core[kuzu]
# or with uv
uv add graphiti-core[kuzu]
Installing with Amazon Neptune Support
If you plan to use Amazon Neptune as your graph database backend, install with the Amazon Neptune extra:
pip install graphiti-core[neptune]
# or with uv
uv add graphiti-core[neptune]
You can also install optional LLM providers as extras:
# Install with Anthropic support
pip install graphiti-core[anthropic]
# Install with Groq support
pip install graphiti-core[groq]
# Install with Google Gemini support
pip install graphiti-core[google-genai]
# Install with multiple providers
pip install graphiti-core[anthropic,groq,google-genai]
# Install with FalkorDB and LLM providers
pip install graphiti-core[falkordb,anthropic,google-genai]
# Install with Amazon Neptune
pip install graphiti-core[neptune]
Default to Low Concurrency; LLM Provider 429 Rate Limit Errors
Graphiti's ingestion pipelines are designed for high concurrency. By default, concurrency is set low to avoid LLM Provider 429 Rate Limit Errors. If you find Graphiti slow, please increase concurrency as described below.
Concurrency is controlled by the SEMAPHORE_LIMIT environment variable. By default, SEMAPHORE_LIMIT is set to 10 concurrent operations to help prevent 429 rate limit errors from your LLM provider. If you encounter such errors, try lowering this value.
If your LLM provider allows higher throughput, you can increase SEMAPHORE_LIMIT to boost episode ingestion
performance.
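For example, if your provider's rate limits allow more concurrent LLM calls, you can raise the limit before Graphiti is imported. This is a minimal sketch; since SEMAPHORE_LIMIT is an ordinary environment variable, exporting it in your shell or a .env file works just as well:
import os

# Assumes your LLM provider can absorb 20 concurrent requests without 429 errors.
# Set before importing graphiti_core, since the limit is read when the module loads.
os.environ["SEMAPHORE_LIMIT"] = "20"

from graphiti_core import Graphiti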
Quick Start
Important
Graphiti defaults to using OpenAI for LLM inference and embedding. Ensure that an OPENAI_API_KEY is set in your environment. Support for Anthropic and Groq LLM inference is available, too. Other LLM providers may be supported via OpenAI-compatible APIs.
For a complete working example, see the Quickstart Example in the examples directory. The quickstart demonstrates:
- Connecting to a Neo4j, Amazon Neptune, FalkorDB, or Kuzu database
- Initializing Graphiti indices and constraints
- Adding episodes to the graph (both text and structured JSON)
- Searching for relationships (edges) using hybrid search
- Reranking search results using graph distance
- Searching for nodes using predefined search recipes
The example is fully documented with clear explanations of each functionality and includes a comprehensive README with setup instructions and next steps.
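If you just want to see the overall shape of the API before running the full example, here is a minimal sketch of the pattern the quickstart follows. It assumes a local Neo4j instance and an OPENAI_API_KEY in your environment; exact parameter names can vary slightly between graphiti-core versions.
import asyncio
from datetime import datetime, timezone

from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType

async def main():
    graphiti = Graphiti("bolt://localhost:7687", "neo4j", "password")
    try:
        # One-time setup of indices and constraints
        await graphiti.build_indices_and_constraints()

        # Ingest a text episode into the graph
        await graphiti.add_episode(
            name="shoe-preferences",
            episode_body="Kendra loves Adidas shoes.",
            source=EpisodeType.text,
            source_description="user conversation",
            reference_time=datetime.now(timezone.utc),
        )

        # Hybrid (semantic + BM25 + graph) search over facts (edges)
        results = await graphiti.search("What shoes does Kendra like?")
        for edge in results:
            print(edge.fact)
    finally:
        await graphiti.close()

asyncio.run(main())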
Running with Docker Compose
You can use Docker Compose to quickly start the required services:
- Neo4j Docker:
docker compose up
This will start the Neo4j Docker service and related components.
- FalkorDB Docker:
docker compose --profile falkordb up
This will start the FalkorDB Docker service and related components.
MCP Server
The mcp_server directory contains a Model Context Protocol (MCP) server implementation for Graphiti. This server
allows AI assistants to interact with Graphiti's knowledge graph capabilities through the MCP protocol.
Key features of the MCP server include:
- Episode management (add, retrieve, delete)
- Entity management and relationship handling
- Semantic and hybrid search capabilities
- Group management for organizing related data
- Graph maintenance operations
The MCP server can be deployed using Docker with Neo4j, making it easy to integrate Graphiti into your AI assistant workflows.
For detailed setup instructions and usage examples, see the MCP server README.
REST Service
The server directory contains an API service for interacting with the Graphiti API. It is built using FastAPI.
Please see the server README for more information.
Optional Environment Variables
In addition to the Neo4j and OpenAI-compatible credentials, Graphiti also has a few optional environment variables. If you are using one of our supported models, such as Anthropic or Voyage models, the necessary environment variables must be set.
Database Configuration
Database names are configured directly in the driver constructors:
- Neo4j: Database name defaults to neo4j (hardcoded in Neo4jDriver)
- FalkorDB: Database name defaults to default_db (hardcoded in FalkorDriver)
As of v0.17.0, if you need to customize your database configuration, you can instantiate a database driver and pass it
to the Graphiti constructor using the graph_driver parameter.
Neo4j with Custom Database Name
from graphiti_core import Graphiti
from graphiti_core.driver.neo4j_driver import Neo4jDriver
# Create a Neo4j driver with custom database name
driver = Neo4jDriver(
uri="bolt://localhost:7687",
user="neo4j",
password="password",
database="my_custom_database" # Custom database name
)
# Pass the driver to Graphiti
graphiti = Graphiti(graph_driver=driver)
FalkorDB with Custom Database Name
from graphiti_core import Graphiti
from graphiti_core.driver.falkordb_driver import FalkorDriver
# Create a FalkorDB driver with custom database name
driver = FalkorDriver(
host="localhost",
port=6379,
username="falkor_user", # Optional
password="falkor_password", # Optional
database="my_custom_graph" # Custom database name
)
# Pass the driver to Graphiti
graphiti = Graphiti(graph_driver=driver)
Kuzu
from graphiti_core import Graphiti
from graphiti_core.driver.kuzu_driver import KuzuDriver
# Create a Kuzu driver
driver = KuzuDriver(db="/tmp/graphiti.kuzu")
# Pass the driver to Graphiti
graphiti = Graphiti(graph_driver=driver)
Amazon Neptune
from graphiti_core import Graphiti
from graphiti_core.driver.neptune_driver import NeptuneDriver
# Create an Amazon Neptune driver (aoss_host is your Amazon OpenSearch Serverless endpoint)
driver = NeptuneDriver(
host="<your-neptune-endpoint>",
aoss_host="<your-opensearch-serverless-host>",
port=8182, # Optional, defaults to 8182
aoss_port=443, # Optional, defaults to 443
)
# Pass the driver to Graphiti
graphiti = Graphiti(graph_driver=driver)
Using Graphiti with Azure OpenAI
Graphiti supports Azure OpenAI for both LLM inference and embeddings. Azure deployments often require different endpoints for LLM and embedding services, and separate deployments for default and small models.
Important
Azure OpenAI v1 API Opt-in Required for Structured Outputs
Graphiti uses structured outputs via the client.beta.chat.completions.parse() method, which requires Azure OpenAI deployments to opt into the v1 API. Without this opt-in, you'll encounter 404 Resource not found errors during episode ingestion. To enable v1 API support in your Azure OpenAI deployment, follow Microsoft's guide: Azure OpenAI API version lifecycle.
from openai import AsyncAzureOpenAI
from graphiti_core import Graphiti
from graphiti_core.llm_client import LLMConfig, OpenAIClient
from graphiti_core.embedder.openai import OpenAIEmbedder, OpenAIEmbedderConfig
from graphiti_core.cross_encoder.openai_reranker_client import OpenAIRerankerClient
# Azure OpenAI configuration - use separate endpoints for different services
api_key = "<your-api-key>"
api_version = "<your-api-version>"
llm_endpoint = "<your-llm-endpoint>" # e.g., "https://your-llm-resource.openai.azure.com/"
embedding_endpoint = "<your-embedding-endpoint>" # e.g., "https://your-embedding-resource.openai.azure.com/"
# Create separate Azure OpenAI clients for different services
llm_client_azure = AsyncAzureOpenAI(
api_key=api_key,
api_version=api_version,
azure_endpoint=llm_endpoint
)
embedding_client_azure = AsyncAzureOpenAI(
api_key=api_key,
api_version=api_version,
azure_endpoint=embedding_endpoint
)
# Create LLM Config with your Azure deployment names
azure_llm_config = LLMConfig(
small_model="gpt-4.1-nano",
model="gpt-4.1-mini",
)
# Initialize Graphiti with Azure OpenAI clients
graphiti = Graphiti(
"bolt://localhost:7687",
"neo4j",
"password",
llm_client=OpenAIClient(
config=azure_llm_config,
client=llm_client_azure
),
embedder=OpenAIEmbedder(
config=OpenAIEmbedderConfig(
embedding_model="text-embedding-3-small-deployment" # Your Azure embedding deployment name
),
client=embedding_client_azure
),
cross_encoder=OpenAIRerankerClient(
config=LLMConfig(
model=azure_llm_config.small_model # Use small model for reranking
),
client=llm_client_azure
)
)
# Now you can use Graphiti with Azure OpenAI
Make sure to replace the placeholder values with your actual Azure OpenAI credentials and deployment names that match your Azure OpenAI service configuration.
Using Graphiti with Google Gemini
Graphiti supports Google's Gemini models for LLM inference, embeddings, and cross-encoding/reranking. To use Gemini, you'll need to configure the LLM client, embedder, and the cross-encoder with your Google API key.
Install Graphiti:
uv add "graphiti-core[google-genai]"
# or
pip install "graphiti-core[google-genai]"
from graphiti_core import Graphiti
from graphiti_core.llm_client.gemini_client import GeminiClient, LLMConfig
from graphiti_core.embedder.gemini import GeminiEmbedder, GeminiEmbedderConfig
from graphiti_core.cross_encoder.gemini_reranker_client import GeminiRerankerClient
# Google API key configuration
api_key = "<your-google-api-key>"
# Initialize Graphiti with Gemini clients
graphiti = Graphiti(
"bolt://localhost:7687",
"neo4j",
"password",
llm_client=GeminiClient(
config=LLMConfig(
api_key=api_key,
model="gemini-2.0-flash"
)
),
embedder=GeminiEmbedder(
config=GeminiEmbedderConfig(
api_key=api_key,
embedding_model="embedding-001"
)
),
cross_encoder=GeminiRerankerClient(
config=LLMConfig(
api_key=api_key,
model="gemini-2.5-flash-lite-preview-06-17"
)
)
)
# Now you can use Graphiti with Google Gemini for all components
The Gemini reranker uses the gemini-2.5-flash-lite-preview-06-17 model by default, which is optimized for
cost-effective and low-latency classification tasks. It uses the same boolean classification approach as the OpenAI
reranker, leveraging Gemini's log probabilities feature to rank passage relevance.
Using Graphiti with Ollama (Local LLM)
Graphiti supports Ollama for running local LLMs and embedding models via Ollama's OpenAI-compatible API. This is ideal for privacy-focused applications or when you want to avoid API costs.
Install the models:
ollama pull deepseek-r1:7b # LLM
ollama pull nomic-embed-text # embeddings
from graphiti_core import Graphiti
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.llm_client.openai_generic_client import OpenAIGenericClient
from graphiti_core.embedder.openai import OpenAIEmbedder, OpenAIEmbedderConfig
from graphiti_core.cross_encoder.openai_reranker_client import OpenAIRerankerClient
# Configure Ollama LLM client
llm_config = LLMConfig(
api_key="ollama", # Ollama doesn't require a real API key, but some placeholder is needed
model="deepseek-r1:7b",
small_model="deepseek-r1:7b",
base_url="http://localhost:11434/v1", # Ollama's OpenAI-compatible endpoint
)
llm_client = OpenAIGenericClient(config=llm_config)
# Initialize Graphiti with Ollama clients
graphiti = Graphiti(
"bolt://localhost:7687",
"neo4j",
"password",
llm_client=llm_client,
embedder=OpenAIEmbedder(
config=OpenAIEmbedderConfig(
api_key="ollama", # Placeholder API key
embedding_model="nomic-embed-text",
embedding_dim=768,
base_url="http://localhost:11434/v1",
)
),
cross_encoder=OpenAIRerankerClient(client=llm_client, config=llm_config),
)
# Now you can use Graphiti with local Ollama models
Ensure Ollama is running (ollama serve) and that you have pulled the models you want to use.
Documentation
Telemetry
Graphiti collects anonymous usage statistics to help us understand how the framework is being used and improve it for everyone. We believe transparency is important, so here's exactly what we collect and why.
What We Collect
When you initialize a Graphiti instance, we collect:
- Anonymous identifier: A randomly generated UUID stored locally in ~/.cache/graphiti/telemetry_anon_id
- System information: Operating system, Python version, and system architecture
- Graphiti version: The version you're using
- Configuration choices:
  - LLM provider type (OpenAI, Azure, Anthropic, etc.)
  - Database backend (Neo4j, FalkorDB, Kuzu, Amazon Neptune Database or Neptune Analytics)
  - Embedder provider (OpenAI, Azure, Voyage, etc.)
What We Don't Collect
We are committed to protecting your privacy. We never collect:
- Personal information or identifiers
- API keys or credentials
- Your actual data, queries, or graph content
- IP addresses or hostnames
- File paths or system-specific information
- Any content from your episodes, nodes, or edges
Why We Collect This Data
This information helps us:
- Understand which configurations are most popular to prioritize support and testing
- Identify which LLM and database providers to focus development efforts on
- Track adoption patterns to guide our roadmap
- Ensure compatibility across different Python versions and operating systems
By sharing this anonymous information, you help us make Graphiti better for everyone in the community.
View the Telemetry Code
The Telemetry code may be found here.
How to Disable Telemetry
Telemetry is opt-out and can be disabled at any time. To disable telemetry collection:
Option 1: Environment Variable
export GRAPHITI_TELEMETRY_ENABLED=false
Option 2: Set in your shell profile
# For bash users (~/.bashrc or ~/.bash_profile)
echo 'export GRAPHITI_TELEMETRY_ENABLED=false' >> ~/.bashrc
# For zsh users (~/.zshrc)
echo 'export GRAPHITI_TELEMETRY_ENABLED=false' >> ~/.zshrc
Option 3: Set for a specific Python session
import os
os.environ['GRAPHITI_TELEMETRY_ENABLED'] = 'false'
# Then initialize Graphiti as usual
from graphiti_core import Graphiti
graphiti = Graphiti(...)
Telemetry is automatically disabled during test runs (when pytest is detected).
Technical Details
- Telemetry uses PostHog for anonymous analytics collection
- All telemetry operations are designed to fail silently - they will never interrupt your application or affect Graphiti functionality
- The anonymous ID is stored locally and is not tied to any personal information
Status and Roadmap
Graphiti is under active development. We aim to maintain API stability while working on:
- Supporting custom graph schemas:
- Allow developers to provide their own defined node and edge classes when ingesting episodes
- Enable more flexible knowledge representation tailored to specific use cases
- Enhancing retrieval capabilities with more robust and configurable options
- Graphiti MCP Server
- Expanding test coverage to ensure reliability and catch edge cases
Contributing
We encourage and appreciate all forms of contributions, whether it's code, documentation, addressing GitHub Issues, or answering questions in the Graphiti Discord channel. For detailed guidelines on code contributions, please refer to CONTRIBUTING.
Support
Join the Zep Discord server and make your way to the #Graphiti channel!
