graphiti/CLAUDE.md
Daniel Chalef aa6e38856a
[REFACTOR][FIX] Move away from DEFAULT_DATABASE environment variable in favour of driver-config support (dc) (#699)
* fix: remove global DEFAULT_DATABASE usage in favor of driver-specific
config

Fixes bugs introduced in PR #607. This removes reliance on the global
DEFAULT_DATABASE environment variable and specifies the database within
each driver instead. PR #607 introduced a Neo4j compatibility issue, as
the database names differ when attempting to support FalkorDB.

This refactor improves compatibility across database types and ensures
future reliability by isolating the configuration to the driver level.

* fix: make falkordb support optional

This ensures that the optional dependency and its subsequent import comply with the graphiti-core project dependencies.

* chore: fmt code

* chore: undo changes to uv.lock

* fix: undo potentially breaking changes to driver interface

* fix: ensure a default database of "None" is provided - falling back to internal default

* chore: ensure default value exists for session and delete_all_indexes

* chore: fix typos and grammar

* chore: update package versions and dependencies in uv.lock and bulk_utils.py

* docs: update database configuration instructions for Neo4j and FalkorDB

Clarified default database names and how to override them in driver constructors. Updated testing requirements to include specific commands for running integration and unit tests.

* fix: ensure params defaults to an empty dictionary in Neo4jDriver

Updated the execute_query method to initialize params as an empty dictionary if not provided, ensuring compatibility with the database configuration.

---------

Co-authored-by: Urmzd <urmzd@dal.ca>
2025-07-10 17:25:39 -04:00


CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

Project Overview

Graphiti is a Python framework for building temporally-aware knowledge graphs designed for AI agents. It enables real-time incremental updates to knowledge graphs without batch recomputation, making it suitable for dynamic environments.

Key features:

  • Bi-temporal data model with explicit tracking of event occurrence times
  • Hybrid retrieval combining semantic embeddings, keyword search (BM25), and graph traversal
  • Support for custom entity definitions via Pydantic models (see the sketch after this list)
  • Integration with Neo4j and FalkorDB as graph storage backends
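
For example, a custom entity type is just a Pydantic model whose fields describe the attributes to extract; the class and field names below are illustrative, not part of the library:

from pydantic import BaseModel, Field

class Preference(BaseModel):
    """A user preference surfaced during a conversation."""

    category: str = Field(..., description='Area the preference applies to, e.g. food or travel')
    description: str = Field(..., description='Short description of the preference')

Models like this are handed to Graphiti at ingestion time so entity extraction is constrained to the declared fields; check graphiti_core for the exact parameter name before relying on it.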

Development Commands

Main Development Commands (run from project root)

# Install dependencies
uv sync --extra dev

# Format code (ruff import sorting + formatting)
make format

# Lint code (ruff + pyright type checking)
make lint

# Run tests
make test

# Run all checks (format, lint, test)
make check

Server Development (run from server/ directory)

cd server/
# Install server dependencies
uv sync --extra dev

# Run server in development mode
uvicorn graph_service.main:app --reload

# Format, lint, test server code
make format
make lint
make test

MCP Server Development (run from mcp_server/ directory)

cd mcp_server/
# Install MCP server dependencies
uv sync

# Run with Docker Compose
docker-compose up

Code Architecture

Core Library (graphiti_core/)

  • Main Entry Point: graphiti.py - Contains the main Graphiti class that orchestrates all functionality (see the usage sketch after this list)
  • Graph Storage: driver/ - Database drivers for Neo4j and FalkorDB
  • LLM Integration: llm_client/ - Clients for OpenAI, Anthropic, Gemini, Groq
  • Embeddings: embedder/ - Embedding clients for various providers
  • Graph Elements: nodes.py, edges.py - Core graph data structures
  • Search: search/ - Hybrid search implementation with configurable strategies
  • Prompts: prompts/ - LLM prompts for entity extraction, deduplication, summarization
  • Utilities: utils/ - Maintenance operations, bulk processing, datetime handling
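
As a rough sketch of how these pieces fit together end to end (connection details and episode text are placeholders; check graphiti_core/graphiti.py for the current signatures):

import asyncio
from datetime import datetime, timezone

from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType

async def main():
    # Connects to Neo4j by default; drivers, LLM clients, and embedders can be injected
    graphiti = Graphiti('bolt://localhost:7687', 'neo4j', 'password')
    try:
        await graphiti.build_indices_and_constraints()
        await graphiti.add_episode(
            name='example-episode',
            episode_body='Kamala Harris was the Attorney General of California.',
            source=EpisodeType.text,
            source_description='sample text',
            reference_time=datetime.now(timezone.utc),
        )
        # Hybrid retrieval: semantic embeddings + BM25 + graph traversal
        results = await graphiti.search('Who was the Attorney General of California?')
        print(results)
    finally:
        await graphiti.close()

asyncio.run(main())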

Server (server/)

  • FastAPI Service: graph_service/main.py - REST API server
  • Routers: routers/ - API endpoints for ingestion and retrieval
  • DTOs: dto/ - Data transfer objects for API contracts

MCP Server (mcp_server/)

  • MCP Implementation: graphiti_mcp_server.py - Model Context Protocol server for AI assistants
  • Docker Support: Containerized deployment with Neo4j

Testing

  • Unit Tests: tests/ - Comprehensive test suite using pytest
  • Integration Tests: Tests marked with _int suffix require database connections
  • Evaluation: tests/evals/ - End-to-end evaluation scripts

Configuration

Environment Variables

  • OPENAI_API_KEY - Required for LLM inference and embeddings
  • USE_PARALLEL_RUNTIME - Optional boolean for Neo4j parallel runtime (enterprise only); interpreted from a string, as in the sketch after this list
  • Provider-specific keys: ANTHROPIC_API_KEY, GOOGLE_API_KEY, GROQ_API_KEY, VOYAGE_API_KEY
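
Since environment variables are strings, the optional boolean above is typically interpreted along these lines (a sketch, not the library's exact parsing logic):

import os

# Treat common truthy spellings as enabling the Neo4j parallel runtime
USE_PARALLEL_RUNTIME = os.environ.get('USE_PARALLEL_RUNTIME', 'false').lower() in ('true', '1', 'yes')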

Database Setup

  • Neo4j: Version 5.26+ required, available via Neo4j Desktop
    • Database name defaults to neo4j (hardcoded in Neo4jDriver)
    • Override by passing the database parameter to the driver constructor
  • FalkorDB: Version 1.1.2+ as alternative backend
    • Database name defaults to default_db (hardcoded in FalkorDriver)
    • Override by passing the database parameter to the driver constructor (see the sketch after this list)
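
For instance, pointing the Neo4j driver at a non-default database might look like the sketch below (the import path, constructor arguments, and graph_driver keyword are assumptions to verify against graphiti_core/driver/):

from graphiti_core import Graphiti
from graphiti_core.driver.neo4j_driver import Neo4jDriver

# Use a named database instead of the hardcoded default 'neo4j'
driver = Neo4jDriver(
    uri='bolt://localhost:7687',
    user='neo4j',
    password='password',
    database='my_graph_db',
)
graphiti = Graphiti(graph_driver=driver)

FalkorDriver accepts a database argument in the same way, replacing its default_db default.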

Development Guidelines

Code Style

  • Use Ruff for formatting and linting (configured in pyproject.toml)
  • Line length: 100 characters
  • Quote style: single quotes
  • Type checking with Pyright is enforced
  • Main project uses typeCheckingMode = "basic", server uses typeCheckingMode = "standard"

Testing Requirements

  • Run tests with make test or pytest
  • Integration tests require database connections and are marked with the _int suffix (see the example after this list)
  • Use pytest-xdist for parallel test execution
  • Run specific test files: pytest tests/test_specific_file.py
  • Run specific test methods: pytest tests/test_file.py::test_method_name
  • Run only integration tests: pytest tests/ -k "_int"
  • Run only unit tests: pytest tests/ -k "not _int"
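
As an illustration of the naming convention (the file name, environment variable, and assertion are made up), an integration test only needs the _int suffix for the -k filters above to select or skip it:

# tests/test_example_int.py  (hypothetical file)
import os

import pytest

@pytest.mark.skipif('NEO4J_URI' not in os.environ, reason='requires a database connection')
def test_connection_settings_roundtrip_int():
    # The trailing _int marks this as an integration test, so
    # pytest tests/ -k "_int" runs it and -k "not _int" skips it
    assert os.environ['NEO4J_URI'].startswith('bolt://')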

LLM Provider Support

The codebase supports multiple LLM providers but works best with services supporting structured output (OpenAI, Gemini). Other providers may cause schema validation issues, especially with smaller models.

MCP Server Usage Guidelines

When working with the MCP server, follow the patterns established in mcp_server/cursor_rules.md:

  • Always search for existing knowledge before adding new information
  • Use specific entity type filters (Preference, Procedure, Requirement)
  • Store new information immediately using add_memory
  • Follow discovered procedures and respect established preferences