Use official Dockerfile.standalone for custom MCP server build

parent 66df6ce7df
commit aab233496c

16 changed files with 2574 additions and 85 deletions
**.github/workflows/build-custom-mcp.yml** (vendored, 4 changes)

```diff
@@ -92,8 +92,8 @@ jobs:
       - name: Build and push Docker image
         uses: docker/build-push-action@v5
         with:
-          context: .
-          file: ./mcp_server/docker/Dockerfile.custom
+          context: ./mcp_server
+          file: ./mcp_server/docker/Dockerfile.standalone
           platforms: linux/amd64,linux/arm64
           push: true
           tags: ${{ steps.docker_tags.outputs.tags }}
```
**.serena/.gitignore** (vendored, new file, 1 addition)

```diff
@@ -0,0 +1 @@
+/cache
```
**.serena/memories/code_structure.md** (new file, 76 additions)

# Graphiti Codebase Structure

## Root Directory Layout

```
graphiti/
├── graphiti_core/       # Core library (main Python package)
├── server/              # FastAPI REST API service
├── mcp_server/          # Model Context Protocol server for AI assistants
├── tests/               # Test suite (unit and integration tests)
├── examples/            # Example implementations and use cases
├── images/              # Documentation images and assets
├── signatures/          # CLA signatures
├── .github/             # GitHub Actions workflows
├── pyproject.toml       # Project configuration and dependencies
├── Makefile             # Development commands
├── README.md            # Main documentation
├── CLAUDE.md            # Claude Code assistant instructions
├── CONTRIBUTING.md      # Contribution guidelines
└── docker-compose.yml   # Docker configuration
```

## Core Library (`graphiti_core/`)

### Main Components
- **`graphiti.py`**: Main entry point containing the `Graphiti` class that orchestrates all functionality
- **`nodes.py`**: Core node/entity data structures
- **`edges.py`**: Core edge/relationship data structures
- **`graphiti_types.py`**: Type definitions
- **`errors.py`**: Custom exception classes
- **`helpers.py`**: Utility helper functions
- **`graph_queries.py`**: Graph query definitions
- **`decorators.py`**: Function decorators
- **`tracer.py`**: OpenTelemetry tracing support

### Subdirectories
- **`driver/`**: Database drivers for Neo4j, FalkorDB, Kuzu, Neptune
- **`llm_client/`**: LLM clients for OpenAI, Anthropic, Gemini, Groq
- **`embedder/`**: Embedding clients for various providers (OpenAI, Voyage, local models)
- **`cross_encoder/`**: Cross-encoder models for reranking
- **`search/`**: Hybrid search implementation with configurable strategies
- **`prompts/`**: LLM prompts for entity extraction, deduplication, summarization
- **`utils/`**: Maintenance operations, bulk processing, datetime handling
- **`models/`**: Pydantic models for data structures
- **`migrations/`**: Database migration scripts
- **`telemetry/`**: Analytics and telemetry code

## Server (`server/`)
- **`graph_service/main.py`**: FastAPI application entry point
- **`routers/`**: API endpoint definitions (ingestion, retrieval)
- **`dto/`**: Data Transfer Objects for API contracts
- Has its own `Makefile` for server-specific commands

## MCP Server (`mcp_server/`)
- **`graphiti_mcp_server.py`**: Model Context Protocol server implementation
- **`docker-compose.yml`**: Containerized deployment with Neo4j
- Has its own `pyproject.toml` and dependencies

## Tests (`tests/`)
- **Unit tests**: Standard pytest tests
- **Integration tests**: Files with `_int` suffix (require database connections)
- **`evals/`**: End-to-end evaluation scripts
- **`conftest.py`**: Pytest configuration and fixtures (at root level)

## Key Classes
From `graphiti_core/graphiti.py`:
- `Graphiti`: Main orchestrator class
- `AddEpisodeResults`: Results from adding episodes
- `AddBulkEpisodeResults`: Results from bulk episode operations
- `AddTripletResults`: Results from adding triplets

## Configuration Files
- **`pyproject.toml`**: Main project configuration (dependencies, build system, tool configs)
- **`pytest.ini`**: Pytest configuration
- **`.env.example`**: Example environment variables
- **`docker-compose.yml`**: Docker setup for development
- **`docker-compose.test.yml`**: Docker setup for testing
**.serena/memories/code_style_and_conventions.md** (new file, 85 additions)

# Code Style and Conventions

## Formatting and Linting

### Ruff Configuration
- **Tool**: Ruff (handles both linting and formatting)
- **Line length**: 100 characters
- **Quote style**: Single quotes (`'`)
- **Indentation**: Spaces (not tabs)
- **Docstring code format**: Enabled (formats code in docstrings)

### Linting Rules (Ruff)
Enabled rule sets:
- `E` - pycodestyle errors
- `F` - Pyflakes
- `UP` - pyupgrade (Python version upgrades)
- `B` - flake8-bugbear (common bugs)
- `SIM` - flake8-simplify (simplification suggestions)
- `I` - isort (import sorting)

Ignored rules:
- `E501` - Line too long (handled by the line-length setting)

### Type Checking
- **Tool**: Pyright
- **Python version**: 3.10+
- **Type checking mode**: `basic` for the main project, `standard` for the server
- **Scope**: Main type checking focuses on the `graphiti_core/` directory
- **Type hints**: Required and enforced

## Code Conventions

### General Guidelines
- Use type hints for all function parameters and return values
- Follow the PEP 8 style guide (enforced by Ruff)
- Use Pydantic models for data validation and structure
- Prefer async/await for I/O operations
- Use descriptive variable and function names

### Python Version
- Minimum supported: Python 3.10
- Maximum supported: Python 3.x (< 4.0)

### Import Organization
Imports are automatically organized by Ruff using isort rules:
1. Standard library imports
2. Third-party imports
3. Local application imports

### Documentation
- Use docstrings for classes and public methods
- Keep README.md and CLAUDE.md up to date
- Add examples to the `examples/` folder for new features
- Document breaking changes and migrations

### Testing Conventions
- Use pytest for all tests
- Async tests use `pytest-asyncio`
- Integration tests must have the `_int` suffix in the filename or test name
- Unit tests should not require external services
- Use fixtures from `conftest.py`
- Parallel execution supported via `pytest-xdist`
### Naming Conventions
- Classes: PascalCase (e.g., `Graphiti`, `AddEpisodeResults`)
- Functions/methods: snake_case (e.g., `add_episode`, `build_indices`)
- Constants: UPPER_SNAKE_CASE
- Private methods/attributes: prefix with underscore (e.g., `_internal_method`)
### Error Handling
- Use custom exceptions from `graphiti_core/errors.py`
- Provide meaningful error messages
- Use `tenacity` for retry logic on external service calls
### LLM Provider Support
- The codebase supports multiple LLM providers
- Best compatibility with services supporting structured output (OpenAI, Gemini)
- Smaller models may cause schema validation issues
- Always validate LLM outputs against expected schemas

## Configuration and Dependencies
- Use `pyproject.toml` for all project configuration
- Pin minimum versions in dependencies
- Optional features go in `[project.optional-dependencies]`
- Development dependencies go in the `dev` extra
**.serena/memories/development_commands.md** (new file, 169 additions)

# Development Commands

## Package Manager
This project uses **uv** (https://docs.astral.sh/uv/) instead of pip or poetry.

## Main Project Commands (from project root)

### Installation
```bash
# Install all dependencies including dev tools
make install
# OR
uv sync --extra dev
```

### Code Formatting
```bash
# Format code (runs ruff import sorting + code formatting)
make format
# Equivalent to:
# uv run ruff check --select I --fix
# uv run ruff format
```

### Linting
```bash
# Lint code (runs ruff checks + pyright type checking)
make lint
# Equivalent to:
# uv run ruff check
# uv run pyright ./graphiti_core
```

### Testing
```bash
# Run unit tests only (excludes integration tests)
make test
# Equivalent to:
# DISABLE_FALKORDB=1 DISABLE_KUZU=1 DISABLE_NEPTUNE=1 uv run pytest -m "not integration"

# Run all tests including integration tests
uv run pytest

# Run only integration tests
uv run pytest -k "_int"

# Run a specific test file
uv run pytest tests/test_specific_file.py

# Run a specific test method
uv run pytest tests/test_file.py::test_method_name

# Run tests in parallel (faster)
uv run pytest -n auto
```

### Combined Checks
```bash
# Run format, lint, and test in sequence
make check
# OR
make all
```

## Server Commands (from server/ directory)
```bash
cd server/

# Install server dependencies
uv sync --extra dev

# Run server in development mode with auto-reload
uvicorn graph_service.main:app --reload

# Format server code
make format

# Lint server code
make lint

# Test server code
make test
```

## MCP Server Commands (from mcp_server/ directory)
```bash
cd mcp_server/

# Install MCP server dependencies
uv sync

# Run with Docker Compose
docker-compose up

# Stop Docker Compose
docker-compose down
```

## Environment Variables for Testing

### Required for Integration Tests
```bash
export TEST_OPENAI_API_KEY=...
export TEST_OPENAI_MODEL=...
export TEST_ANTHROPIC_API_KEY=...

# For Neo4j
export TEST_URI=neo4j://...
export TEST_USER=...
export TEST_PASSWORD=...
```

### Optional Runtime Variables
```bash
export OPENAI_API_KEY=...          # For LLM inference
export USE_PARALLEL_RUNTIME=true   # Neo4j parallel runtime (enterprise only)
export ANTHROPIC_API_KEY=...       # For Claude models
export GOOGLE_API_KEY=...          # For Gemini models
export GROQ_API_KEY=...            # For Groq models
export VOYAGE_API_KEY=...          # For VoyageAI embeddings
```

## Git Workflow
```bash
# Create a new branch
git checkout -b feature/your-feature-name

# After making changes, run checks
make check

# Commit changes (ensure all checks pass first)
git add .
git commit -m "Your commit message"

# Push to your fork
git push origin feature/your-feature-name
```

## Common Development Tasks

### Before Submitting a PR
1. `make check` - ensures code is formatted, linted, and tested
2. Verify all tests pass, including integration tests if applicable
3. Update documentation if needed

### Adding New Dependencies
Edit `pyproject.toml`:
- Core dependencies → `[project.dependencies]`
- Optional features → `[project.optional-dependencies]`
- Dev dependencies → `[project.optional-dependencies.dev]`

Then run:
```bash
uv sync --extra dev
```

### Database Setup
- **Neo4j**: Version 5.26+ required; use Neo4j Desktop
- **FalkorDB**: Version 1.1.2+ as an alternative backend

## Tool Versions
- Python: 3.10+
- uv: latest stable
- Pytest: 8.3.3+
- Ruff: 0.7.1+
- Pyright: 1.1.404+
**.serena/memories/git_workflow.md** (new file, 104 additions)

# Git Workflow for Graphiti Fork

## Repository Setup
This repository is a fork of the official Graphiti project with custom MCP server enhancements.

### Remote Configuration
```bash
origin    https://github.com/Varming73/graphiti.git (your fork)
upstream  https://github.com/getzep/graphiti.git (official Graphiti)
```

**Best-practice convention:**
- `origin` = your fork (where you push your changes)
- `upstream` = the official project (where you pull updates from)

## Common Workflows

### Push Your Changes
```bash
git add <files>
git commit -m "Your message"
git push origin main
```

### Pull Upstream Updates
```bash
# Fetch latest from upstream
git fetch upstream

# Merge upstream changes into your main branch
git merge upstream/main

# Or rebase if you prefer
git rebase upstream/main

# Push to your fork
git push origin main
```

### Check for Upstream Updates
```bash
git fetch upstream
git log HEAD..upstream/main --oneline   # See what's new
```

### Sync with Upstream (Full Update)
```bash
# Fetch upstream
git fetch upstream

# Switch to main
git checkout main

# Merge or rebase
git merge upstream/main   # or: git rebase upstream/main

# Push to your fork
git push origin main
```

## Current Status

### Last Repository Replacement
- **Date**: 2025-11-08
- **Action**: Force-pushed clean code to replace the broken project state
- **Commit**: Added get_entities_by_type and compare_facts_over_time MCP tools
- **Result**: Successfully replaced the entire fork history with a clean implementation

### Upstream Tracking
- Upstream connection verified and working
- Updates can be pulled freely from the official Graphiti project
- Your customizations remain in your fork

## MCP Server Customizations
Your fork contains these custom MCP tools (not in upstream):
1. `get_entities_by_type` - Retrieve entities by type classification
2. `compare_facts_over_time` - Compare facts between time periods
3. Enhanced `add_memory` UUID documentation

**Important**: When pulling upstream updates, these customizations live ONLY in `mcp_server/src/graphiti_mcp_server.py`. You may need to merge manually if upstream changes that file.

## Safety Notes
- **Never push to upstream** - you don't have permission and shouldn't try
- **Always test locally** before pushing to origin
- **Pull upstream regularly** to stay current with bug fixes and features
- **Document custom changes** in commit messages for future reference

## If You Need to Reset to Upstream
```bash
# Back up your current work first!
git checkout -b backup-branch

# Reset main to match upstream exactly
git checkout main
git reset --hard upstream/main
git push origin main --force

# Then cherry-pick your custom commits from backup-branch
```
**.serena/memories/mcp_server_tools.md** (new file, 193 additions)

# MCP Server Tools Documentation

## Overview
The Graphiti MCP Server exposes Graphiti functionality through the Model Context Protocol (MCP) for AI assistants (like those in LibreChat). Each tool is decorated with `@mcp.tool()` and provides a specific capability.

## Tool Naming Convention
All tools follow MCP best practices:
- **snake_case naming**: All lowercase with underscores
- **Action-oriented**: Start with verbs (add, search, get, compare, delete)
- **Concise descriptions**: First line describes the core action
- **Clear parameters**: Descriptions specify format and provide examples

Reference: https://modelcontextprotocol.io/specification/2025-06-18/server/tools

## Recent Changes

### 2025-11-08 - UUID Parameter Documentation Enhanced
**Problem**: LLMs were attempting to generate and provide UUIDs when adding NEW memories, which should never happen - UUIDs must be auto-generated for new episodes.

**Solution**: Enhanced the `uuid` parameter documentation in `add_memory` to be very explicit: "NEVER provide a UUID for new episodes - UUIDs are auto-generated. This parameter can ONLY be used for updating an existing episode by providing its existing UUID."

**Impact**: Clear guidance that keeps LLMs from generating UUIDs for new memories while preserving the ability to update existing episodes.

## Tool List

### Core Memory Management
1. **add_memory** - Add episodes to the knowledge graph ✨ IMPROVED DOCS
2. **clear_graph** - Clear all data for specified group IDs
3. **get_status** - Get server and database connection status

### Search and Retrieval Tools
4. **search_nodes** - Search for nodes/entities using semantic search
5. **search_memory_facts** - Search for facts/relationships using semantic search
6. **get_entities_by_type** ⭐ NEW - Retrieve entities by their type classification
7. **compare_facts_over_time** ⭐ NEW - Compare facts between two time periods

### Entity and Episode Management
8. **get_entity_edge** - Retrieve a specific entity edge by UUID
9. **delete_entity_edge** - Delete an entity edge from the graph
10. **get_episodes** - Retrieve episodes from the graph
11. **delete_episode** - Delete an episode from the graph

## Tool Details

### add_memory (Updated Documentation)
**Purpose**: Add episodes to the knowledge graph

**MCP-Compliant Description**: "Add an episode to memory. This is the primary way to add information to the graph."

**Parameters**:
- `name`: str - Name of the episode
- `episode_body`: str - Content to persist (JSON string for source='json')
- `group_id`: Optional[str] - Group ID for this graph (uses default if not provided)
- `source`: str = 'text' - Source type ('text', 'json', or 'message')
- `source_description`: str = '' - Optional description of the source
- `uuid`: Optional[str] = None - **NEVER provide for NEW episodes**. Can ONLY be used to update an existing episode by providing its UUID.

**UUID Parameter Behavior**:
- **For NEW episodes**: Do NOT provide - auto-generated
- **For UPDATING episodes**: Provide the existing episode's UUID to replace/update it
- **Other uses**: Idempotent operations or external system integration (advanced)
**Implementation Notes**:
- Returns immediately, processes in the background
- Episodes for the same group_id are processed sequentially
- Providing a UUID updates the episode with that UUID if it exists

### get_entities_by_type
**Added**: 2025-11-08
**Purpose**: Essential for PKM (Personal Knowledge Management) - enables browsing entities by their type classification

**MCP-Compliant Description**: "Retrieve entities by their type classification."

**Parameters**:
- `entity_types`: List[str] - Entity types to retrieve (e.g., ["Pattern", "Insight", "Preference"])
- `group_ids`: Optional[List[str]] - Filter by group IDs
- `max_entities`: int = 20 - Maximum entities to return
- `query`: Optional[str] - Optional search query to filter entities

**Implementation Notes**:
- Uses `SearchFilters(node_labels=entity_types)` from graphiti_core
- Uses the `NODE_HYBRID_SEARCH_RRF` search config
- When a query is provided: semantic search with a type filter
- When the query is empty: uses a space (' ') as a generic query to retrieve everything of that type
- Returns `NodeSearchResponse` (same format as search_nodes)

**Use Cases**:
- "Show me all my Preferences"
- "List Patterns I've identified"
- "Get Insights about productivity"
- "Find all documented Procedures"

**Example**:
```python
# Get all preferences
get_entities_by_type(entity_types=["Preference"])

# Get patterns and insights about productivity
get_entities_by_type(
    entity_types=["Pattern", "Insight"],
    query="productivity"
)
```

### compare_facts_over_time
**Added**: 2025-11-08
**Purpose**: Track how knowledge/understanding evolved over time - critical for seeing how Patterns, Insights, and understanding changed

**MCP-Compliant Description**: "Compare facts between two time periods."

**Parameters**:
- `query`: str - Search query for facts to compare
- `start_time`: str - ISO 8601 timestamp (e.g., "2024-01-01" or "2024-01-01T10:30:00Z")
- `end_time`: str - ISO 8601 timestamp
- `group_ids`: Optional[List[str]] - Filter by group IDs
- `max_facts_per_period`: int = 10 - Max facts per time category

**Returns**: Dictionary with:
- `facts_from_start`: Facts valid at start_time
- `facts_at_end`: Facts valid at end_time
- `facts_invalidated`: Facts that were invalidated between start and end
- `facts_added`: Facts that became valid between start and end
- `summary`: Count statistics

**Implementation Notes**:
- Uses `DateFilter` and `ComparisonOperator` from graphiti_core.search.search_filters
- Uses the `EDGE_HYBRID_SEARCH_RRF` search config
- Makes 4 separate searches with temporal filters:
  1. Facts valid at start (valid_at <= start AND (invalid_at > start OR invalid_at IS NULL))
  2. Facts valid at end (valid_at <= end AND (invalid_at > end OR invalid_at IS NULL))
  3. Facts invalidated (invalid_at > start AND invalid_at <= end)
  4. Facts added (created_at > start AND created_at <= end)
- Uses the `format_fact_result()` helper for consistent formatting
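The four temporal conditions can be sketched in plain Python (a simplified model with naive datetimes; the real tool expresses the same conditions as search-filter objects, and `classify_fact` is invented for illustration):

```python
from datetime import datetime

def classify_fact(valid_at, invalid_at, created_at, start, end):
    """Return which comparison buckets a fact falls into.

    Mirrors the four conditions above; invalid_at=None models a fact
    that is still valid (the IS NULL case).
    """
    still_valid_after = lambda t: invalid_at is None or invalid_at > t
    return {
        "valid_at_start": valid_at <= start and still_valid_after(start),
        "valid_at_end": valid_at <= end and still_valid_after(end),
        "invalidated": invalid_at is not None and start < invalid_at <= end,
        "added": start < created_at <= end,
    }

start, end = datetime(2024, 1, 1), datetime(2024, 3, 1)

# A fact both created and invalidated inside the comparison window
buckets = classify_fact(
    valid_at=datetime(2024, 1, 10),
    invalid_at=datetime(2024, 2, 15),
    created_at=datetime(2024, 1, 10),
    start=start, end=end,
)
assert buckets == {
    "valid_at_start": False,
    "valid_at_end": False,
    "invalidated": True,
    "added": True,
}
```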
**Use Cases**:
- "How did my understanding of sleep patterns change this month?"
- "What productivity insights were replaced?"
- "Show me how my procedures evolved"
- "Track changes in my preferences over time"

**Example**:
```python
compare_facts_over_time(
    query="productivity patterns",
    start_time="2024-01-01",
    end_time="2024-03-01"
)
```

## Implementation Constraints

### Safe Design Principles
All tools follow strict constraints to maintain upstream compatibility:
1. **Only use public Graphiti APIs** - No custom Cypher queries, no internal methods
2. **MCP server only changes** - No modifications to graphiti_core/
3. **Existing patterns** - Follow the same structure as existing tools
4. **Standard imports** - Only use imports already in the file or from stable public APIs
5. **MCP compliance** - Follow the MCP specification for tool naming and descriptions
6. **LLM-friendly documentation** - Clear guidance to prevent LLM confusion (e.g., UUID usage)

### Dependencies
All required imports are either:
- Already present in the file (SearchFilters, format_fact_result)
- From stable public APIs (DateFilter, ComparisonOperator, search configs)

No new dependencies were added to pyproject.toml.

## Testing Notes

### Validation Tests Passed
- ✅ Python syntax check (py_compile)
- ✅ Ruff formatting (auto-formatted)
- ✅ Ruff linting (all checks passed)
- ✅ No custom Cypher or internal APIs used
- ✅ Follows project code style conventions
- ✅ MCP specification compliance verified
- ✅ UUID documentation enhanced to prevent LLM misuse

### Manual Testing Required
Before production use, test:
1. add_memory without the LLM trying to provide UUIDs for NEW episodes
2. add_memory with a UUID for UPDATING existing episodes
3. get_entities_by_type with various entity type combinations
4. get_entities_by_type with and without the query parameter
5. compare_facts_over_time with various date ranges
6. Error handling for invalid inputs (empty types, bad dates, etc.)

## File Location
`mcp_server/src/graphiti_mcp_server.py`

- `add_memory`: Updated documentation (lines 320-403)
- `get_entities_by_type`: Inserted after the `search_nodes` function (lines 486-583)
- `compare_facts_over_time`: Inserted after the `search_memory_facts` function (lines 585-766)
**.serena/memories/project_overview.md** (new file, 52 additions)

# Graphiti Project Overview

## Purpose
Graphiti is a Python framework for building and querying temporally-aware knowledge graphs, specifically designed for AI agents operating in dynamic environments. It continuously integrates user interactions, structured/unstructured data, and external information into a coherent, queryable graph with incremental updates and efficient retrieval.

## Key Features
- **Bi-temporal data model**: Explicit tracking of event occurrence times
- **Hybrid retrieval**: Combining semantic embeddings, keyword search (BM25), and graph traversal
- **Custom entity definitions**: Support via Pydantic models
- **Real-time incremental updates**: No batch recomputation required
- **Multiple graph backends**: Neo4j and FalkorDB support
- **Optional OpenTelemetry tracing**: For distributed systems

## Use Cases
- Integrate and maintain dynamic user interactions and business data
- Facilitate state-based reasoning and task automation for agents
- Query complex, evolving data with semantic, keyword, and graph-based search methods

## Relationship to Zep
Graphiti powers the core of Zep, a turn-key context engineering platform for AI agents. This is the open-source version that provides flexibility for custom implementations.

## Tech Stack
- **Language**: Python 3.10+
- **Package Manager**: uv (modern, fast Python package installer)
- **Core Dependencies**:
  - Pydantic 2.11.5+ (data validation and models)
  - Neo4j 5.26.0+ (primary graph database)
  - OpenAI 1.91.0+ (LLM inference and embeddings)
  - Tenacity 9.0.0+ (retry logic)
  - DiskCache 5.6.3+ (caching)
- **Optional Integrations**:
  - Anthropic (Claude models)
  - Google Gemini
  - Groq
  - FalkorDB (alternative graph database)
  - Kuzu (graph database)
  - Neptune (AWS graph database)
  - VoyageAI (embeddings)
  - Sentence Transformers (local embeddings)
  - OpenTelemetry (tracing)
- **Development Tools**:
  - Ruff (linting and formatting)
  - Pyright (type checking)
  - Pytest (testing framework, with pytest-asyncio and pytest-xdist)

## Project Version
Current version: 0.22.1pre2 (pre-release)

## Repository
https://github.com/getzep/graphiti
183
.serena/memories/system_commands.md
Normal file
183
.serena/memories/system_commands.md
Normal file
|
|
@ -0,0 +1,183 @@
|
||||||
|
# System Commands (Darwin/macOS)

This project is being developed on **Darwin** (macOS). Here are the relevant system commands:

## File System Navigation

### Basic Commands
```bash
ls                   # List directory contents
ls -la               # List all files including hidden, with details
cd <dir>             # Change directory
pwd                  # Print working directory
mkdir <dir>          # Create directory
rm <file>            # Remove file
rm -rf <dir>         # Remove directory recursively
```

### macOS-Specific Notes
- Case-insensitive filesystem by default (though case-preserving)
- Hidden files start with `.` (like `.env`, `.gitignore`)
- Use `open .` to open the current directory in Finder
- Use `open <file>` to open a file with its default application

## File Operations

### Reading Files
```bash
cat <file>           # Display entire file
head -n 20 <file>    # First 20 lines
tail -n 20 <file>    # Last 20 lines
less <file>          # Page through file
```

### Searching Files
```bash
find . -name "*.py"                   # Find Python files
find . -type f -name "test_*.py"      # Find test files
grep -r "pattern" .                   # Search for pattern recursively
grep -r "pattern" --include="*.py" .  # Search only in Python files
```

## Git Commands

### Basic Git Operations
```bash
git status                 # Check status
git branch                 # List branches
git checkout -b <branch>   # Create and switch to new branch
git add <file>             # Stage file
git add .                  # Stage all changes
git commit -m "message"    # Commit changes
git push origin <branch>   # Push to remote
git pull                   # Pull latest changes
git diff                   # Show unstaged changes
git diff --staged          # Show staged changes
git log                    # View commit history
git log --oneline          # Compact commit history
```

### Current Repository Info
- Current branch: `main`
- Main branch for PRs: `main`

## Process Management

```bash
ps aux                # List all running processes
ps aux | grep python  # Find Python processes
kill <PID>            # Terminate process
kill -9 <PID>         # Force kill process
```

## Environment Variables

### View Environment
```bash
env                    # List all environment variables
echo $PATH             # Show PATH variable
echo $OPENAI_API_KEY   # Show specific variable
```

### Set Environment Variables
```bash
export VAR_NAME=value            # Set for current session
export OPENAI_API_KEY="sk-..."   # Example
```

### Permanent Environment Variables
For permanent variables, add to `~/.zshrc` or `~/.bash_profile`:
```bash
echo 'export VAR_NAME=value' >> ~/.zshrc
source ~/.zshrc
```

## Docker Commands (if applicable)

```bash
docker ps                # List running containers
docker ps -a             # List all containers
docker-compose up        # Start services
docker-compose up -d     # Start in background
docker-compose down      # Stop services
docker-compose logs -f   # Follow logs
docker-compose ps        # List compose services
```

## Network Commands

```bash
curl <url>                  # Make HTTP request
curl -I <url>               # Get headers only
ping <host>                 # Check connectivity
netstat -an | grep LISTEN   # Show listening ports
lsof -i :<port>             # See what's using a port
```

## Permissions

```bash
chmod +x <file>       # Make file executable
chmod 644 <file>      # Set file permissions (rw-r--r--)
chmod 755 <file>      # Set file permissions (rwxr-xr-x)
chown <user> <file>   # Change file owner
```

## Useful Utilities

### Text Processing
```bash
wc -l <file>               # Count lines
wc -w <file>               # Count words
sort <file>                # Sort lines
uniq <file>                # Remove duplicates
awk '{print $1}' <file>    # Print first column
sed 's/old/new/g' <file>   # Replace text
```

### Archives
```bash
tar -czf archive.tar.gz <dir>   # Create tar.gz archive
tar -xzf archive.tar.gz         # Extract tar.gz archive
zip -r archive.zip <dir>        # Create zip archive
unzip archive.zip               # Extract zip archive
```

## macOS-Specific Commands

```bash
pbcopy < <file>    # Copy file contents to clipboard
pbpaste > <file>   # Paste clipboard to file
caffeinate         # Prevent system sleep
say "text"         # Text-to-speech
```

## Python/UV Specific

```bash
which python3       # Find Python executable location
python3 --version   # Check Python version
uv --version        # Check UV version
uv run python       # Run Python with UV
uv pip list         # List installed packages
```

## Development Workflow Integration

For this project, you'll commonly use:
```bash
# Navigate to project
cd /Users/lvarming/it-setup/projects/graphiti

# Check git status
git status

# Run development checks
make check

# Search for code patterns
grep -r "def add_episode" graphiti_core/

# Find specific files
find . -name "graphiti.py"
```
128
.serena/memories/task_completion_checklist.md
Normal file

@@ -0,0 +1,128 @@
# Task Completion Checklist

When you complete a coding task, follow these steps to ensure quality:

## 1. Format Code
```bash
make format
```
This will:
- Sort imports using ruff (isort rules)
- Format code to a 100-character line length
- Apply single-quote style
- Format code in docstrings

## 2. Lint Code
```bash
make lint
```
This will:
- Run ruff checks for code quality issues
- Run pyright type checking on `graphiti_core/`
- Identify any style violations or type errors

Fix any issues reported by the linter.

## 3. Run Tests
```bash
# Run unit tests (default)
make test

# OR run all tests including integration tests
uv run pytest
```

Ensure all tests pass. If you:
- Modified existing functionality: verify related tests still pass
- Added new functionality: consider adding new tests
- Fixed a bug: consider adding a regression test

## 4. Integration Testing (if applicable)
If your changes affect:
- Database interactions
- LLM integrations
- External service calls

Run integration tests:
```bash
# Ensure environment variables are set
export TEST_OPENAI_API_KEY=...
export TEST_URI=neo4j://...
export TEST_USER=...
export TEST_PASSWORD=...

# Run integration tests
uv run pytest -k "_int"
```

## 5. Type Checking
Pyright should have passed during `make lint`, but if you added new code, verify:
- All function parameters have type hints
- Return types are specified
- No `Any` types unless necessary
- Pydantic models are properly defined

## 6. Run Complete Check
Run the comprehensive check command:
```bash
make check
```
This runs format, lint, and test in sequence. All should pass.

## 7. Documentation Updates (if needed)
Consider whether your changes require:
- README.md updates (for user-facing features)
- CLAUDE.md updates (for development patterns)
- Docstring additions/updates
- Example code in the `examples/` folder
- Comments for complex logic

## 8. Git Commit
Only commit if all checks pass:
```bash
git add <files>
git commit -m "Descriptive commit message"
```

## 9. PR Preparation (if submitting changes)
Before creating a PR:
- Ensure `make check` passes completely
- Review your changes for any debug code or comments
- Check for any TODO items you added
- Verify no sensitive data (API keys, passwords) in code
- Consider whether changes need an issue/RFC (>500 LOC changes require discussion)
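
One way to sanity-check for leaked keys is a quick grep before committing. The following is only a heuristic sketch — the key pattern and the demo directory are illustrative assumptions, not part of the project tooling:

```shell
# Heuristic pre-commit scan for OpenAI-style keys.
# Pattern and demo directory are illustrative assumptions only.
mkdir -p /tmp/secret_scan_demo
printf 'print("hello")\n' > /tmp/secret_scan_demo/app.py
grep -rn --include='*.py' -E 'sk-[A-Za-z0-9]{20,}' /tmp/secret_scan_demo \
  || echo "no obvious keys found"
```

On a clean tree the grep matches nothing, so the fallback message is printed; run it against your real source directory instead of the demo path.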

## Quick Reference
Most common workflow:
```bash
# After making changes
make check

# If all passes, commit
git add .
git commit -m "Your message"
```

## Special Cases

### Server Changes
If you modified `server/` code:
```bash
cd server/
make format
make lint
make test
```

### MCP Server Changes
If you modified `mcp_server/` code:
```bash
cd mcp_server/
# Test with Docker
docker-compose up
```

### Large Architectural Changes
- Create a GitHub issue (RFC) first
- Discuss technical design and justification
- Get feedback before implementing >500 LOC changes
84
.serena/project.yml
Normal file

@@ -0,0 +1,84 @@
# list of languages for which language servers are started; choose from:
#  al bash clojure cpp csharp csharp_omnisharp
#  dart elixir elm erlang fortran go
#  haskell java julia kotlin lua markdown
#  nix perl php python python_jedi r
#  rego ruby ruby_solargraph rust scala swift
#  terraform typescript typescript_vts zig
# Note:
#  - For C, use cpp
#  - For JavaScript, use typescript
# Special requirements:
#  - csharp: Requires the presence of a .sln file in the project folder.
# When using multiple languages, the first language server that supports a given file will be used for that file.
# The first language is the default language and the respective language server will be used as a fallback.
# Note that when using the JetBrains backend, language servers are not used and this list is correspondingly ignored.
languages:
  - python

# the encoding used by text files in the project
# For a list of possible encodings, see https://docs.python.org/3.11/library/codecs.html#standard-encodings
encoding: "utf-8"

# whether to use the project's gitignore file to ignore files
# Added on 2025-04-07
ignore_all_files_in_gitignore: true

# list of additional paths to ignore
# same syntax as gitignore, so you can use * and **
# Was previously called `ignored_dirs`, please update your config if you are using that.
# Added (renamed) on 2025-04-07
ignored_paths: []

# whether the project is in read-only mode
# If set to true, all editing tools will be disabled and attempts to use them will result in an error
# Added on 2025-04-18
read_only: false

# list of tool names to exclude. We recommend not excluding any tools, see the readme for more details.
# Below is the complete list of tools for convenience.
# To make sure you have the latest list of tools, and to view their descriptions,
# execute `uv run scripts/print_tool_overview.py`.
#
#  * `activate_project`: Activates a project by name.
#  * `check_onboarding_performed`: Checks whether project onboarding was already performed.
#  * `create_text_file`: Creates/overwrites a file in the project directory.
#  * `delete_lines`: Deletes a range of lines within a file.
#  * `delete_memory`: Deletes a memory from Serena's project-specific memory store.
#  * `execute_shell_command`: Executes a shell command.
#  * `find_referencing_code_snippets`: Finds code snippets in which the symbol at the given location is referenced.
#  * `find_referencing_symbols`: Finds symbols that reference the symbol at the given location (optionally filtered by type).
#  * `find_symbol`: Performs a global (or local) search for symbols with/containing a given name/substring (optionally filtered by type).
#  * `get_current_config`: Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
#  * `get_symbols_overview`: Gets an overview of the top-level symbols defined in a given file.
#  * `initial_instructions`: Gets the initial instructions for the current project.
#     Should only be used in settings where the system prompt cannot be set,
#     e.g. in clients you have no control over, like Claude Desktop.
#  * `insert_after_symbol`: Inserts content after the end of the definition of a given symbol.
#  * `insert_at_line`: Inserts content at a given line in a file.
#  * `insert_before_symbol`: Inserts content before the beginning of the definition of a given symbol.
#  * `list_dir`: Lists files and directories in the given directory (optionally with recursion).
#  * `list_memories`: Lists memories in Serena's project-specific memory store.
#  * `onboarding`: Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
#  * `prepare_for_new_conversation`: Provides instructions for preparing for a new conversation (in order to continue with the necessary context).
#  * `read_file`: Reads a file within the project directory.
#  * `read_memory`: Reads the memory with the given name from Serena's project-specific memory store.
#  * `remove_project`: Removes a project from the Serena configuration.
#  * `replace_lines`: Replaces a range of lines within a file with new content.
#  * `replace_symbol_body`: Replaces the full definition of a symbol.
#  * `restart_language_server`: Restarts the language server, may be necessary when edits not through Serena happen.
#  * `search_for_pattern`: Performs a search for a pattern in the project.
#  * `summarize_changes`: Provides instructions for summarizing the changes made to the codebase.
#  * `switch_modes`: Activates modes by providing a list of their names.
#  * `think_about_collected_information`: Thinking tool for pondering the completeness of collected information.
#  * `think_about_task_adherence`: Thinking tool for determining whether the agent is still on track with the current task.
#  * `think_about_whether_you_are_done`: Thinking tool for determining whether the task is truly completed.
#  * `write_memory`: Writes a named memory (for future reference) to Serena's project-specific memory store.
excluded_tools: []

# initial prompt for the project. It will always be given to the LLM upon activating the project
# (contrary to the memories, which are loaded on demand).
initial_prompt: ""

project_name: "graphiti"
included_optional_tools: []
379
DOCS/GitHub-DockerHub-Setup.md
Normal file

@@ -0,0 +1,379 @@
# GitHub Actions → Docker Hub Automated Build Setup

This guide explains how to automatically build your custom Graphiti MCP Docker image with your local changes and push it to Docker Hub using GitHub Actions.

## Why This Approach?

✅ **Automatic builds** - Every push to main triggers a new build
✅ **Reproducible** - Anyone can see exactly what was built
✅ **Multi-platform** - Builds for both AMD64 and ARM64
✅ **No local building** - GitHub does all the work
✅ **Version tracking** - Tied to git commits
✅ **Clean workflow** - Professional CI/CD pipeline

## Prerequisites

1. **GitHub account** with a fork of the graphiti repository
2. **Docker Hub account** (username: `lvarming`)
3. **Docker Hub Access Token** (for GitHub Actions to push images)

---

## Step 1: Create Docker Hub Access Token

1. Go to [Docker Hub](https://hub.docker.com/)
2. Click your username → **Account Settings**
3. Click **Security** → **New Access Token**
4. Give it a description: "GitHub Actions - Graphiti MCP"
5. **Copy the token** (you'll only see it once!)

---

## Step 2: Add Token to GitHub Repository Secrets

1. Go to your forked repository on GitHub
2. Click **Settings** → **Secrets and variables** → **Actions**
3. Click **New repository secret**
4. Name: `DOCKERHUB_TOKEN`
5. Value: Paste the access token from Step 1
6. Click **Add secret**

---

## Step 3: Verify Workflow Files

The repository already includes the necessary workflow file at:
```
.github/workflows/build-custom-mcp.yml
```

And the Dockerfile at:
```
mcp_server/docker/Dockerfile.standalone
```

These files are configured to:
- Build using YOUR local `graphiti-core` changes (not PyPI)
- Push to `lvarming/graphiti-mcp` on Docker Hub
- Tag with version numbers and `latest`

---
## Step 4: Trigger a Build

### Option A: Automatic Build (On Push)

The workflow automatically triggers when you:
- Push to the `main` branch
- Modify files in `graphiti_core/` or `mcp_server/`

Simply commit and push your changes:
```bash
git add .
git commit -m "Update graphiti-core with custom changes"
git push origin main
```
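
The trigger section of such a workflow typically looks roughly like the following sketch — the exact branch and path filters in `build-custom-mcp.yml` may differ:

```yaml
on:
  push:
    branches:
      - main
    paths:
      - 'graphiti_core/**'
      - 'mcp_server/**'
  workflow_dispatch:  # enables the manual "Run workflow" button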

### Option B: Manual Build

1. Go to your repository on GitHub
2. Click the **Actions** tab
3. Select the **Build Custom MCP Server** workflow
4. Click the **Run workflow** dropdown
5. (Optional) Specify a custom tag, or leave as `latest`
6. Click **Run workflow**

---

## Step 5: Monitor the Build

1. Click on the running workflow to see progress
2. The build takes about 5-10 minutes
3. You'll see:
   - Version extraction
   - Docker image build (for AMD64 and ARM64)
   - Push to Docker Hub
   - Build summary with tags
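
The version-extraction step can be imagined as something like the following hypothetical sketch; the actual workflow may parse `pyproject.toml` differently (the file path and parsing approach here are assumptions):

```shell
# Demo of extracting a version field from a pyproject.toml-style file.
printf 'name = "graphiti-core"\nversion = "0.23.0"\n' > /tmp/pyproject_demo.toml
CORE_VERSION=$(grep -m1 '^version' /tmp/pyproject_demo.toml | sed 's/.*"\(.*\)".*/\1/')
echo "core version: $CORE_VERSION"  # prints: core version: 0.23.0
```

The extracted value is what ends up in image tags such as `mcp-1.0.0-core-0.23.0`.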

---

## Step 6: Verify Image on Docker Hub

1. Go to [Docker Hub](https://hub.docker.com/)
2. Navigate to your repository: `lvarming/graphiti-mcp`
3. Check the **Tags** tab
4. You should see tags like:
   - `latest`
   - `mcp-1.0.0`
   - `mcp-1.0.0-core-0.23.0`
   - `sha-abc1234`

---

## Step 7: Use Your Custom Image

### In Unraid

Update your Docker container to use:
```
Repository: lvarming/graphiti-mcp:latest
```

### In Docker Compose

```yaml
services:
  graphiti-mcp:
    image: lvarming/graphiti-mcp:latest
    container_name: graphiti-mcp
    restart: unless-stopped
    # ... rest of your config
```

### Pull Manually

```bash
docker pull lvarming/graphiti-mcp:latest
```

---

## Understanding the Build Process

### What Gets Built

The Dockerfile (`Dockerfile.standalone`) does the following:

1. **Copies the project sources** - Both `graphiti_core/` and `mcp_server/`
2. **Builds graphiti-core from local source** - Not from PyPI
3. **Installs the MCP server** - Using the local graphiti-core
4. **Creates a multi-platform image** - AMD64 and ARM64

### Version Tagging

Each build creates multiple tags:

| Tag | Description | Example |
|-----|-------------|---------|
| `latest` | Always points to the most recent build | `lvarming/graphiti-mcp:latest` |
| `mcp-X.Y.Z` | MCP server version | `lvarming/graphiti-mcp:mcp-1.0.0` |
| `mcp-X.Y.Z-core-A.B.C` | Full version info | `lvarming/graphiti-mcp:mcp-1.0.0-core-0.23.0` |
| `sha-xxxxxxx` | Git commit SHA | `lvarming/graphiti-mcp:sha-abc1234` |

### Build Arguments

The workflow passes these build arguments:

```dockerfile
GRAPHITI_CORE_VERSION=0.23.0     # From pyproject.toml
MCP_SERVER_VERSION=1.0.0         # From mcp_server/pyproject.toml
BUILD_DATE=2025-11-08T12:00:00Z  # UTC timestamp
VCS_REF=abc1234                  # Git commit hash
```
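
Inside the Dockerfile, such arguments are usually consumed via `ARG` and `LABEL` directives along these lines — a sketch only; the real `Dockerfile.standalone` may use them differently:

```dockerfile
ARG GRAPHITI_CORE_VERSION
ARG MCP_SERVER_VERSION
ARG BUILD_DATE
ARG VCS_REF

# OCI image metadata labels derived from the build arguments
LABEL org.opencontainers.image.created=$BUILD_DATE \
      org.opencontainers.image.revision=$VCS_REF \
      org.opencontainers.image.version=$MCP_SERVER_VERSION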

---

## Workflow Customization

### Change Docker Hub Username

If you want to use a different Docker Hub account, edit `.github/workflows/build-custom-mcp.yml`:

```yaml
env:
  DOCKERHUB_USERNAME: your-username  # Change this
  IMAGE_NAME: graphiti-mcp
```

### Change Trigger Conditions

To build only on tags instead of on every push:

```yaml
on:
  push:
    tags:
      - 'v*.*.*'
```

### Add Slack/Discord Notifications

Add a notification step at the end of the workflow:

```yaml
- name: Notify on Success
  uses: slackapi/slack-github-action@v1
  with:
    webhook-url: ${{ secrets.SLACK_WEBHOOK }}
    payload: |
      {
        "text": "✅ New Graphiti MCP image built: lvarming/graphiti-mcp:latest"
      }
```

---

## Troubleshooting

### Build Fails - "Error: buildx failed"

**Cause**: Docker Buildx issue
**Solution**: Re-run the workflow (often a transient issue)

### Build Fails - "unauthorized: incorrect username or password"

**Cause**: Invalid Docker Hub credentials
**Solution**:
1. Verify the `DOCKERHUB_TOKEN` secret is correct
2. Regenerate the access token on Docker Hub
3. Update the secret in GitHub

### Build Fails - "No space left on device"

**Cause**: GitHub runner out of disk space
**Solution**: Add a cleanup step before the build:

```yaml
- name: Free up disk space
  run: |
    docker system prune -af
    df -h
```

### Image Not Found on Docker Hub

**Cause**: Image is private
**Solution**:
1. Go to Docker Hub → lvarming/graphiti-mcp
2. Click **Settings**
3. Make the repository **Public**

### Workflow Doesn't Trigger

**Cause**: Branch protection or incorrect path filters
**Solution**:
1. Check that you're pushing to the `main` branch
2. Verify changes are in `graphiti_core/` or `mcp_server/`
3. Manually trigger from the Actions tab

---

## Advanced: Multi-Repository Setup

If you want separate images for development and production:

### Development Image

Create `.github/workflows/build-dev-mcp.yml`:

```yaml
name: Build Dev MCP Server

on:
  push:
    branches:
      - dev
      - feature/*

env:
  DOCKERHUB_USERNAME: lvarming
  IMAGE_NAME: graphiti-mcp-dev  # Different image name
```

### Production Image

Keep the main workflow for production builds on the `main` branch.

---

## Comparing with Official Builds

| Feature | Official (zepai) | Your Custom Build |
|---------|-----------------|-------------------|
| Source | PyPI graphiti-core | Local graphiti-core |
| Trigger | Manual tags only | Auto on push + manual |
| Docker Hub | zepai/knowledge-graph-mcp | lvarming/graphiti-mcp |
| Build Platform | Depot (paid) | GitHub Actions (free) |
| Customization | Limited | Full control |

---

## Best Practices

### 1. **Pin Versions for Production**

Instead of `latest`, use specific versions:
```yaml
image: lvarming/graphiti-mcp:mcp-1.0.0-core-0.23.0
```

### 2. **Test Before Deploying**

Add a test step in the workflow:
```yaml
- name: Test image
  run: |
    docker run --rm lvarming/graphiti-mcp:latest --version
```

### 3. **Keep Workflows Updated**

GitHub Actions updates frequently. Use Dependabot:

Create `.github/dependabot.yml`:
```yaml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```

### 4. **Monitor Build Times**

If builds are slow, enable caching:
```yaml
cache-from: type=gha
cache-to: type=gha,mode=max
```
(Already enabled in the workflow!)

### 5. **Security Scanning**

Add the Trivy security scanner:
```yaml
- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: lvarming/graphiti-mcp:latest
    format: 'table'
    exit-code: '1'
    severity: 'CRITICAL,HIGH'
```

---

## Next Steps

1. ✅ Set up Docker Hub access token
2. ✅ Add secret to GitHub repository
3. ✅ Push changes to trigger first build
4. ✅ Verify image appears on Docker Hub
5. ✅ Update your Unraid/LibreChat config to use the new image
6. 📝 Document any custom changes in DOCS/

---

## Questions?

- **GitHub Actions Issues**: Check the Actions tab for detailed logs
- **Docker Hub Issues**: Verify your account and access token
- **Build Failures**: Review the workflow logs for specific errors

## Related Documentation

- [LibreChat Setup Guide](./Librechat.setup.md)
- [OpenAI Compatible Endpoints](./OpenAI-Compatible-Endpoints.md)
- [GitHub Actions Documentation](https://docs.github.com/en/actions)
- [Docker Hub Documentation](https://docs.docker.com/docker-hub/)
371
DOCS/Librechat.setup.md
Normal file

@@ -0,0 +1,371 @@
# Complete Setup Guide: Graphiti MCP + LibreChat + Neo4j on Unraid

## Prerequisites

- LibreChat running in Docker on Unraid
- Neo4j Docker container running on Unraid
- OpenAI API key (or other LLM provider)
- Access to your Unraid Docker network

---

## Step 1: Prepare Graphiti MCP Configuration

### 1.1 Create a directory on Unraid for Graphiti MCP

```bash
mkdir -p /mnt/user/appdata/graphiti-mcp/config
```

### 1.2 Create the .env file

Create `/mnt/user/appdata/graphiti-mcp/.env` with your settings:

```bash
# Neo4j Connection - IMPORTANT: Use your existing Neo4j container details
# If your Neo4j container is named "neo4j", use: bolt://neo4j:7687
# Replace with your actual container name if different
NEO4J_URI=bolt://YOUR_NEO4J_CONTAINER_NAME:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=YOUR_NEO4J_PASSWORD
NEO4J_DATABASE=neo4j

# OpenAI Configuration (Required)
OPENAI_API_KEY=sk-your-openai-api-key-here

# LLM Model (default: gpt-5-mini)
MODEL_NAME=gpt-5-mini

# Concurrency Control - adjust based on your OpenAI tier
# Tier 1 (free): 1-2, Tier 2: 5-8, Tier 3: 10-15
SEMAPHORE_LIMIT=10

# Group ID for namespacing (optional)
GRAPHITI_GROUP_ID=main

# Disable telemetry (optional)
GRAPHITI_TELEMETRY_ENABLED=false
```

### 1.3 Create the config file

Create `/mnt/user/appdata/graphiti-mcp/config/config.yaml`:

```yaml
server:
  transport: "http"
  host: "0.0.0.0"
  port: 8000

llm:
  provider: "openai"
  model: "gpt-5-mini"
  max_tokens: 4096
  providers:
    openai:
      api_key: ${OPENAI_API_KEY}
      api_url: ${OPENAI_API_URL:https://api.openai.com/v1}

embedder:
  provider: "openai"
  model: "text-embedding-3-small"
  dimensions: 1536
  providers:
    openai:
      api_key: ${OPENAI_API_KEY}

database:
  provider: "neo4j"
  providers:
    neo4j:
      uri: ${NEO4J_URI}
      username: ${NEO4J_USER}
      password: ${NEO4J_PASSWORD}
      database: ${NEO4J_DATABASE:neo4j}
      use_parallel_runtime: false

graphiti:
  group_id: ${GRAPHITI_GROUP_ID:main}
  user_id: ${USER_ID:mcp_user}
  entity_types:
    - name: "Preference"
      description: "User preferences, choices, opinions, or selections"
    - name: "Requirement"
      description: "Specific needs, features, or functionality that must be fulfilled"
    - name: "Procedure"
      description: "Standard operating procedures and sequential instructions"
    - name: "Location"
      description: "Physical or virtual places where activities occur"
    - name: "Event"
      description: "Time-bound activities, occurrences, or experiences"
    - name: "Organization"
      description: "Companies, institutions, groups, or formal entities"
    - name: "Document"
      description: "Information content in various forms"
    - name: "Topic"
      description: "Subject of conversation, interest, or knowledge domain"
    - name: "Object"
      description: "Physical items, tools, devices, or possessions"
```
|
||||||
|
|
||||||
|
---

Step 2: Deploy Graphiti MCP on Unraid

You have two options for deploying on Unraid:

Option A: Using Unraid Docker Template (Recommended)

1. Go to the Docker tab in Unraid
2. Click Add Container
3. Fill in the following settings:

Basic Settings:
- Name: graphiti-mcp
- Repository: lvarming/graphiti-mcp:latest  # Custom build with your changes
- Network Type: bridge (or custom: br0 if you have a custom network)

Port Mappings:
- Container Port: 8000 → Host Port: 8000

Path Mappings:
- Config Path:
  - Container Path: /app/mcp/config/config.yaml
  - Host Path: /mnt/user/appdata/graphiti-mcp/config/config.yaml
  - Access Mode: Read Only

Environment Variables:
Add each variable from your .env file:
NEO4J_URI=bolt://YOUR_NEO4J_CONTAINER_NAME:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=your_password
NEO4J_DATABASE=neo4j
OPENAI_API_KEY=sk-your-key-here
GRAPHITI_GROUP_ID=main
SEMAPHORE_LIMIT=10
CONFIG_PATH=/app/mcp/config/config.yaml
PATH=/root/.local/bin:${PATH}

Extra Parameters:
--env-file=/mnt/user/appdata/graphiti-mcp/.env

Network: Ensure this container is on the same Docker network as your Neo4j and LibreChat containers.
Option B: Using Docker Compose

Create /mnt/user/appdata/graphiti-mcp/docker-compose.yml:

version: '3.8'

services:
  graphiti-mcp:
    image: lvarming/graphiti-mcp:latest  # Custom build with your changes
    container_name: graphiti-mcp
    restart: unless-stopped
    env_file:
      - .env
    environment:
      - NEO4J_URI=${NEO4J_URI}
      - NEO4J_USER=${NEO4J_USER}
      - NEO4J_PASSWORD=${NEO4J_PASSWORD}
      - NEO4J_DATABASE=${NEO4J_DATABASE:-neo4j}
      - GRAPHITI_GROUP_ID=${GRAPHITI_GROUP_ID:-main}
      - SEMAPHORE_LIMIT=${SEMAPHORE_LIMIT:-10}
      - CONFIG_PATH=/app/mcp/config/config.yaml
      - PATH=/root/.local/bin:${PATH}
    volumes:
      - ./config/config.yaml:/app/mcp/config/config.yaml:ro
    ports:
      - "8000:8000"
    networks:
      - unraid_network  # Replace with your network name

networks:
  unraid_network:
    external: true  # Use existing Unraid network

Then run:

cd /mnt/user/appdata/graphiti-mcp
docker-compose up -d
---

Step 3: Configure Docker Networking

Find Your Neo4j Container Name

docker ps | grep neo4j

The container name will be something like neo4j or neo4j-community. Use this exact name in your NEO4J_URI.

Ensure Same Network

All three containers (Neo4j, Graphiti MCP, LibreChat) should be on the same Docker network.

Check which network your Neo4j container is on:

docker inspect YOUR_NEO4J_CONTAINER_NAME | grep NetworkMode

Connect Graphiti MCP to the same network:

docker network connect NETWORK_NAME graphiti-mcp
---

Step 4: Configure LibreChat

4.1 Add Graphiti MCP to LibreChat's librechat.yaml

Edit your LibreChat configuration file (usually /mnt/user/appdata/librechat/librechat.yaml):

# ... existing LibreChat config ...

# Add MCP server configuration
mcpServers:
  graphiti-memory:
    url: "http://graphiti-mcp:8000/mcp/"
    # For multi-user support with user-specific graphs
    server_instructions: |
      You have access to a knowledge graph memory system through Graphiti.

      IMPORTANT USAGE GUIDELINES:
      1. Always search existing knowledge before adding new information
      2. Use entity type filters: Preference, Procedure, Requirement
      3. Store new information immediately using add_memory
      4. Follow discovered procedures and respect preferences

      Available tools:
      - add_episode: Store new conversations/information
      - search_nodes: Find entities and summaries
      - search_facts: Find relationships between entities
      - get_episodes: Retrieve recent conversations

    # Optional: Hide from chat menu (agent-only access)
    # chatMenu: false

    # Optional: User-specific group IDs for isolation
    # This requires configuring Graphiti to accept a dynamic group_id
    # user_headers:
    #   X-User-ID: "{{LIBRECHAT_USER_ID}}"
    #   X-User-Email: "{{LIBRECHAT_USER_EMAIL}}"
4.2 For Production: Use Streamable HTTP

Per the LibreChat docs, multi-user deployments should use the streamable HTTP transport, which is already the default for Graphiti MCP.

4.3 Restart LibreChat

docker restart YOUR_LIBRECHAT_CONTAINER_NAME
---

Step 5: Test the Setup

5.1 Verify Graphiti MCP is Running

curl http://YOUR_UNRAID_IP:8000/health

You should see a health status response.
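If you script your deployment, the same check can be done with a small poller, assuming the /health endpoint returns HTTP 200 once the server is ready (the URL below is a placeholder for your host):

```python
import time
import urllib.error
import urllib.request

def wait_for_health(url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll the health endpoint until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; retry after a short pause
        time.sleep(interval)
    return False
```

Call it as, e.g., `wait_for_health("http://YOUR_UNRAID_IP:8000/health")` after starting the container, so follow-up steps only run against a live server.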
5.2 Test Neo4j Connection

Check the Graphiti MCP logs:

docker logs graphiti-mcp

Look for successful Neo4j connection messages.

5.3 Test in LibreChat

1. Open LibreChat in your browser
2. Start a new chat
3. In an agent configuration, you should see graphiti-memory available
4. Try asking the agent to remember something:
   Please remember that I prefer dark mode for all interfaces
5. Then later ask:
   What do you know about my preferences?
---

Step 6: Advanced Configuration (Optional)

Per-User Graph Isolation

To give each LibreChat user their own knowledge graph, you need to:

1. Modify Graphiti MCP to accept a dynamic group_id from headers
2. Update the LibreChat config to send user info:

mcpServers:
  graphiti-memory:
    url: "http://graphiti-mcp:8000/mcp/"
    user_headers:
      X-User-ID: "{{LIBRECHAT_USER_ID}}"
      X-User-Email: "{{LIBRECHAT_USER_EMAIL}}"

This requires a custom modification of Graphiti MCP to read the X-User-ID header and use it as the group_id.
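One hypothetical shape for that modification is a small mapping from the incoming headers to a per-user namespace, falling back to the configured default group. All names here are illustrative; stock Graphiti MCP does not ship this behavior:

```python
def resolve_group_id(headers: dict[str, str], default_group_id: str = "main") -> str:
    """Map a LibreChat user to a per-user graph namespace.

    Header lookup is case-insensitive, matching HTTP header semantics.
    """
    normalized = {k.lower(): v for k, v in headers.items()}
    user_id = normalized.get("x-user-id", "").strip()
    # Per-user isolation: each user ID becomes its own group namespace;
    # requests without the header share the default group.
    return f"user_{user_id}" if user_id else default_group_id

print(resolve_group_id({"X-User-ID": "abc123"}))  # user_abc123
print(resolve_group_id({}))                       # main
```

The server-side hook would call something like this per request and pass the result as the group_id on every Graphiti operation.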
Using Different LLM Providers

If you want to use Claude or another provider instead of OpenAI, update config.yaml:

llm:
  provider: "anthropic"
  model: "claude-sonnet-4-5-latest"

  providers:
    anthropic:
      api_key: ${ANTHROPIC_API_KEY}

And add ANTHROPIC_API_KEY to your .env file.
---

Troubleshooting

Graphiti MCP Can't Connect to Neo4j

- Issue: Connection refused or timeout
- Solution:
  - Verify the Neo4j container name: docker ps | grep neo4j
  - Use the exact container name in NEO4J_URI: bolt://container_name:7687
  - Ensure both containers are on the same Docker network
  - Check that Neo4j is listening on port 7687: docker exec NEO4J_CONTAINER netstat -tlnp | grep 7687

LibreChat Can't See Graphiti Tools

- Issue: Tools not appearing in the agent builder
- Solution:
  - Check that Graphiti MCP is running: curl http://localhost:8000/health
  - Verify the librechat.yaml syntax is correct
  - Restart LibreChat: docker restart librechat
  - Check the LibreChat logs: docker logs librechat

Rate Limit Errors

- Issue: 429 errors from OpenAI
- Solution: Lower SEMAPHORE_LIMIT in .env (try 5 or lower)

Memory/Performance Issues

- Issue: Slow responses or high memory usage
- Solution:
  - Adjust Neo4j memory limits in your Neo4j container settings
  - Reduce SEMAPHORE_LIMIT to lower concurrent processing

---
Quick Start Summary

Building Your Custom Docker Image:

This setup uses a custom Docker image built from YOUR fork with YOUR changes.
The image is automatically built by GitHub Actions and pushed to Docker Hub.

Setup Steps:
1. Fork the graphiti repository to your GitHub account
2. Add Docker Hub credentials to your repository secrets:
   - Go to Settings → Secrets and variables → Actions
   - Add the secret DOCKERHUB_TOKEN (your Docker Hub access token)
3. Push changes to trigger an automatic build, or trigger one manually from the Actions tab
4. The image will be available at: lvarming/graphiti-mcp:latest

Key Points:

1. Docker Image: Use lvarming/graphiti-mcp:latest (your custom build)
2. Port: Expose 8000 for HTTP transport
3. Neo4j Connection: Use bolt://YOUR_NEO4J_CONTAINER_NAME:7687 (container name, not localhost)
4. Network: All three containers (Neo4j, Graphiti MCP, LibreChat) must be on the same Docker network
5. LibreChat Config: Add to librechat.yaml under mcpServers with URL: http://graphiti-mcp:8000/mcp/
6. Required Env Vars: OPENAI_API_KEY, NEO4J_URI, NEO4J_USER, NEO4J_PASSWORD

This setup gives LibreChat powerful knowledge graph memory capabilities, allowing it to remember user preferences, procedures, and facts across conversations.

Let me know if you need help with any specific step or run into issues during setup.

DOCS/OpenAI-Compatible-Endpoints.md (new file, 569 lines)
# OpenAI-Compatible Custom Endpoint Support in Graphiti

## Overview

This document analyzes how Graphiti handles OpenAI-compatible custom endpoints (like OpenRouter, NagaAI, Together.ai, etc.) and provides recommendations for improving support.

## Current Architecture

Graphiti has **three main OpenAI-compatible client implementations**:

### 1. OpenAIClient (Default)

**File**: `graphiti_core/llm_client/openai_client.py`

- Extends `BaseOpenAIClient`
- Uses the **new OpenAI Responses API** (`/v1/responses` endpoint)
- Uses `client.responses.parse()` for structured outputs (OpenAI SDK v1.91+)
- This is the **default client** exported in the public API

```python
response = await self.client.responses.parse(
    model=model,
    input=messages,
    temperature=temperature,
    max_output_tokens=max_tokens,
    text_format=response_model,
    reasoning={'effort': reasoning},
    text={'verbosity': verbosity},
)
```
### 2. OpenAIGenericClient (Legacy)

**File**: `graphiti_core/llm_client/openai_generic_client.py`

- Uses the **standard Chat Completions API** (`/v1/chat/completions`)
- Uses `client.chat.completions.create()`
- **Only supports unstructured JSON responses** (not Pydantic schemas)
- Currently **not exported** in `__init__.py` (hidden from the public API)

```python
response = await self.client.chat.completions.create(
    model=model,
    messages=messages,
    temperature=temperature,
    max_tokens=max_tokens,
    response_format={'type': 'json_object'},
)
```
### 3. AzureOpenAILLMClient

**File**: `graphiti_core/llm_client/azure_openai_client.py`

- Azure-specific implementation
- Also uses `responses.parse()` like `OpenAIClient`
- Handles Azure-specific authentication and endpoints
## The Root Problem

### Issue Description

When users configure Graphiti with custom OpenAI-compatible endpoints, they encounter errors because:

1. **`OpenAIClient` uses the new `/v1/responses` endpoint** via `client.responses.parse()`
   - This is a **new OpenAI API** (introduced in OpenAI SDK v1.91.0) for structured outputs
   - This endpoint is **proprietary to OpenAI** and **not part of the standard OpenAI-compatible API specification**

2. **Most OpenAI-compatible services** (OpenRouter, NagaAI, Ollama, Together.ai, etc.) **only implement** the standard `/v1/chat/completions` endpoint
   - They do **NOT** implement `/v1/responses`

3. When you configure a `base_url` pointing to these services, Graphiti tries to call:
   ```
   https://your-custom-endpoint.com/v1/responses
   ```
   instead of the expected:
   ```
   https://your-custom-endpoint.com/v1/chat/completions
   ```
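The failure is purely a matter of which path gets appended to `base_url`. A stdlib illustration of that path composition (this is not the SDK's actual request code, just the URL arithmetic):

```python
from urllib.parse import urljoin

# With a custom base_url, only the endpoint path differs between the two APIs.
base_url = "https://openrouter.ai/api/v1/"

# Path OpenAIClient requests -> typically 404 on OpenAI-compatible providers.
print(urljoin(base_url, "responses"))         # https://openrouter.ai/api/v1/responses

# Path these providers actually serve.
print(urljoin(base_url, "chat/completions"))  # https://openrouter.ai/api/v1/chat/completions
```

Same credentials, same base URL; the provider simply has no route for the first path.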
### Example Error Scenario

```python
from graphiti_core import Graphiti
from graphiti_core.llm_client import OpenAIClient, LLMConfig

config = LLMConfig(
    api_key="sk-or-v1-...",
    model="meta-llama/llama-3-8b-instruct",
    base_url="https://openrouter.ai/api/v1"
)

llm_client = OpenAIClient(config=config)
graphiti = Graphiti(uri, user, password, llm_client=llm_client)

# This will fail because OpenRouter doesn't have a /v1/responses endpoint
# Error: 404 Not Found - https://openrouter.ai/api/v1/responses
```
## Current Workaround (Documented)

The README documents using `OpenAIGenericClient` with Ollama:

```python
from graphiti_core.llm_client.openai_generic_client import OpenAIGenericClient
from graphiti_core.llm_client.config import LLMConfig

llm_config = LLMConfig(
    api_key="ollama",
    model="deepseek-r1:7b",
    base_url="http://localhost:11434/v1"
)

llm_client = OpenAIGenericClient(config=llm_config)
```

### Limitations of Current Workaround

- `OpenAIGenericClient` **doesn't support structured outputs with Pydantic models**
- It only returns raw JSON and manually validates schemas
- It's not the recommended/default client
- It's **not exported** in the public API (`graphiti_core.llm_client`)
- Users must know to import from the internal module path
## Recommended Solutions

### Priority 1: Quick Wins (High Priority)

#### 1.1 Export `OpenAIGenericClient` in the Public API

**File**: `graphiti_core/llm_client/__init__.py`

**Current**:
```python
from .client import LLMClient
from .config import LLMConfig
from .errors import RateLimitError
from .openai_client import OpenAIClient

__all__ = ['LLMClient', 'OpenAIClient', 'LLMConfig', 'RateLimitError']
```

**Proposed**:
```python
from .client import LLMClient
from .config import LLMConfig
from .errors import RateLimitError
from .openai_client import OpenAIClient
from .openai_generic_client import OpenAIGenericClient

__all__ = ['LLMClient', 'OpenAIClient', 'OpenAIGenericClient', 'LLMConfig', 'RateLimitError']
```
#### 1.2 Add Clear Documentation

**File**: `README.md`

Add a dedicated section:

````markdown
### Using OpenAI-Compatible Endpoints (OpenRouter, NagaAI, Together.ai, etc.)

Most OpenAI-compatible services only support the standard Chat Completions API,
not OpenAI's newer Responses API. Use `OpenAIGenericClient` for these services:

**OpenRouter Example**:
```python
from graphiti_core import Graphiti
from graphiti_core.llm_client import OpenAIGenericClient, LLMConfig

config = LLMConfig(
    api_key="sk-or-v1-...",
    model="meta-llama/llama-3-8b-instruct",
    base_url="https://openrouter.ai/api/v1"
)

llm_client = OpenAIGenericClient(config=config)
graphiti = Graphiti(uri, user, password, llm_client=llm_client)
```

**Together.ai Example**:
```python
config = LLMConfig(
    api_key="your-together-api-key",
    model="meta-llama/Llama-3-70b-chat-hf",
    base_url="https://api.together.xyz/v1"
)
llm_client = OpenAIGenericClient(config=config)
```

**Note**: `OpenAIGenericClient` has limited structured output support compared to
the default `OpenAIClient`. It uses JSON mode instead of Pydantic schema validation.
````
#### 1.3 Add Better Error Messages

**File**: `graphiti_core/llm_client/openai_client.py`

Add error handling that detects the issue:

```python
async def _create_structured_completion(self, ...):
    try:
        response = await self.client.responses.parse(...)
        return response
    except openai.NotFoundError as e:
        if self.config.base_url and "api.openai.com" not in self.config.base_url:
            raise Exception(
                f"The OpenAI Responses API (/v1/responses) is not available at {self.config.base_url}. "
                f"Most OpenAI-compatible services only support /v1/chat/completions. "
                f"Please use OpenAIGenericClient instead of OpenAIClient for custom endpoints. "
                f"See: https://help.getzep.com/graphiti/guides/custom-endpoints"
            ) from e
        raise
```
### Priority 2: Better UX (Medium Priority)

#### 2.1 Add Auto-Detection Logic

**File**: `graphiti_core/llm_client/config.py`

```python
class LLMConfig:
    def __init__(
        self,
        api_key: str | None = None,
        model: str | None = None,
        base_url: str | None = None,
        temperature: float = DEFAULT_TEMPERATURE,
        max_tokens: int = DEFAULT_MAX_TOKENS,
        small_model: str | None = None,
        use_responses_api: bool | None = None,  # NEW: Auto-detect if None
    ):
        self.base_url = base_url
        self.api_key = api_key
        self.model = model
        self.small_model = small_model
        self.temperature = temperature
        self.max_tokens = max_tokens

        # Auto-detect API style based on base_url
        if use_responses_api is None:
            self.use_responses_api = self._should_use_responses_api()
        else:
            self.use_responses_api = use_responses_api

    def _should_use_responses_api(self) -> bool:
        """Determine if we should use the Responses API based on base_url."""
        if self.base_url is None:
            return True  # Default OpenAI

        # Known services that support the Responses API
        supported_services = ["api.openai.com", "azure.com"]
        return any(service in self.base_url for service in supported_services)
```
#### 2.2 Create a Unified Smart Client

**Option A**: Modify `OpenAIClient` to fall back

```python
class OpenAIClient(BaseOpenAIClient):
    def __init__(self, config: LLMConfig | None = None, ...):
        if config is None:
            config = LLMConfig()
        super().__init__(config, ...)

        self.use_responses_api = config.use_responses_api
        self.client = AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)

    async def _create_structured_completion(self, ...):
        if self.use_responses_api:
            # Use responses.parse() for native OpenAI
            return await self.client.responses.parse(...)
        else:
            # Fall back to chat.completions with a JSON schema for compatibility
            return await self.client.chat.completions.create(
                model=model,
                messages=messages,
                temperature=temperature,
                max_tokens=max_tokens,
                response_format={
                    "type": "json_schema",
                    "json_schema": {
                        "name": response_model.__name__,
                        "schema": response_model.model_json_schema(),
                        "strict": False
                    }
                }
            )
```
**Option B**: Create a Factory Function

```python
# graphiti_core/llm_client/__init__.py

def create_openai_client(
    config: LLMConfig | None = None,
    cache: bool = False,
    **kwargs
) -> LLMClient:
    """
    Factory to create the appropriate OpenAI-compatible client.

    Automatically selects between OpenAIClient (for native OpenAI)
    and OpenAIGenericClient (for OpenAI-compatible services).

    Args:
        config: LLM configuration including base_url
        cache: Whether to enable caching
        **kwargs: Additional arguments passed to the client

    Returns:
        LLMClient: Either OpenAIClient or OpenAIGenericClient

    Example:
        >>> # Automatically uses OpenAIGenericClient for OpenRouter
        >>> config = LLMConfig(
        ...     api_key="sk-or-v1-...",
        ...     model="meta-llama/llama-3-8b-instruct",
        ...     base_url="https://openrouter.ai/api/v1"
        ... )
        >>> client = create_openai_client(config)
    """
    if config is None:
        config = LLMConfig()

    # Auto-detect based on base_url
    if config.base_url is None or "api.openai.com" in config.base_url:
        return OpenAIClient(config, cache, **kwargs)
    else:
        return OpenAIGenericClient(config, cache, **kwargs)
```
#### 2.3 Enhance `OpenAIGenericClient` with Better Structured Output Support

**File**: `graphiti_core/llm_client/openai_generic_client.py`

```python
async def _generate_response(
    self,
    messages: list[Message],
    response_model: type[BaseModel] | None = None,
    max_tokens: int = DEFAULT_MAX_TOKENS,
    model_size: ModelSize = ModelSize.medium,
) -> dict[str, typing.Any]:
    openai_messages: list[ChatCompletionMessageParam] = []
    for m in messages:
        m.content = self._clean_input(m.content)
        if m.role == 'user':
            openai_messages.append({'role': 'user', 'content': m.content})
        elif m.role == 'system':
            openai_messages.append({'role': 'system', 'content': m.content})

    try:
        # Try to use the json_schema format (supported by more providers)
        if response_model:
            response = await self.client.chat.completions.create(
                model=self.model or DEFAULT_MODEL,
                messages=openai_messages,
                temperature=self.temperature,
                max_tokens=max_tokens or self.max_tokens,
                response_format={
                    "type": "json_schema",
                    "json_schema": {
                        "name": response_model.__name__,
                        "schema": response_model.model_json_schema(),
                        "strict": False  # Most providers don't support strict mode
                    }
                }
            )
        else:
            response = await self.client.chat.completions.create(
                model=self.model or DEFAULT_MODEL,
                messages=openai_messages,
                temperature=self.temperature,
                max_tokens=max_tokens or self.max_tokens,
                response_format={'type': 'json_object'},
            )

        result = response.choices[0].message.content or '{}'
        return json.loads(result)
    except Exception as e:
        logger.error(f'Error in generating LLM response: {e}')
        raise
```
### Priority 3: Nice to Have (Low Priority)

#### 3.1 Provider-Specific Clients

Create convenience clients for popular providers:

```python
# graphiti_core/llm_client/openrouter_client.py
class OpenRouterClient(OpenAIGenericClient):
    """Pre-configured client for OpenRouter.

    Example:
        >>> client = OpenRouterClient(
        ...     api_key="sk-or-v1-...",
        ...     model="meta-llama/llama-3-8b-instruct"
        ... )
    """
    def __init__(
        self,
        api_key: str,
        model: str,
        temperature: float = DEFAULT_TEMPERATURE,
        max_tokens: int = DEFAULT_MAX_TOKENS,
        **kwargs
    ):
        config = LLMConfig(
            api_key=api_key,
            model=model,
            base_url="https://openrouter.ai/api/v1",
            temperature=temperature,
            max_tokens=max_tokens
        )
        super().__init__(config=config, **kwargs)
```

```python
# graphiti_core/llm_client/together_client.py
class TogetherClient(OpenAIGenericClient):
    """Pre-configured client for Together.ai.

    Example:
        >>> client = TogetherClient(
        ...     api_key="your-together-key",
        ...     model="meta-llama/Llama-3-70b-chat-hf"
        ... )
    """
    def __init__(
        self,
        api_key: str,
        model: str,
        temperature: float = DEFAULT_TEMPERATURE,
        max_tokens: int = DEFAULT_MAX_TOKENS,
        **kwargs
    ):
        config = LLMConfig(
            api_key=api_key,
            model=model,
            base_url="https://api.together.xyz/v1",
            temperature=temperature,
            max_tokens=max_tokens
        )
        super().__init__(config=config, **kwargs)
```
#### 3.2 Provider Compatibility Matrix

Add to the documentation:

| Provider | Standard Client | Generic Client | Structured Outputs | Notes |
|----------|----------------|----------------|-------------------|-------|
| OpenAI | ✅ `OpenAIClient` | ✅ | ✅ Full (Responses API) | Recommended: use `OpenAIClient` |
| Azure OpenAI | ✅ `AzureOpenAILLMClient` | ✅ | ✅ Full (Responses API) | Requires API version 2024-08-01-preview+ |
| OpenRouter | ❌ | ✅ `OpenAIGenericClient` | ⚠️ Limited (JSON Schema) | Use `OpenAIGenericClient` |
| Together.ai | ❌ | ✅ `OpenAIGenericClient` | ⚠️ Limited (JSON Schema) | Use `OpenAIGenericClient` |
| Ollama | ❌ | ✅ `OpenAIGenericClient` | ⚠️ Limited (JSON mode) | Local deployment |
| Groq | ❌ | ✅ `OpenAIGenericClient` | ⚠️ Limited (JSON Schema) | Very fast inference |
| Perplexity | ❌ | ✅ `OpenAIGenericClient` | ⚠️ Limited (JSON mode) | Primarily for search |
## Testing Recommendations
|
||||||
|
|
||||||
|
### Unit Tests
|
||||||
|
|
||||||
|
1. **Endpoint detection logic**
|
||||||
|
```python
|
||||||
|
def test_should_use_responses_api():
|
||||||
|
# OpenAI native should use Responses API
|
||||||
|
config = LLMConfig(base_url="https://api.openai.com/v1")
|
||||||
|
assert config.use_responses_api is True
|
||||||
|
|
||||||
|
# Custom endpoints should not
|
||||||
|
config = LLMConfig(base_url="https://openrouter.ai/api/v1")
|
||||||
|
assert config.use_responses_api is False
|
||||||
|
```
|
||||||
|
|
||||||
|
2. **Client selection**
|
||||||
|
```python
|
||||||
|
def test_create_openai_client_auto_selection():
|
||||||
|
# Should return OpenAIClient for OpenAI
|
||||||
|
config = LLMConfig(api_key="test")
|
||||||
|
client = create_openai_client(config)
|
||||||
|
assert isinstance(client, OpenAIClient)
|
||||||
|
|
||||||
|
# Should return OpenAIGenericClient for others
|
||||||
|
config = LLMConfig(api_key="test", base_url="https://openrouter.ai/api/v1")
|
||||||
|
client = create_openai_client(config)
|
||||||
|
assert isinstance(client, OpenAIGenericClient)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Integration Tests

1. **Mock server tests** with canned responses for both endpoints
2. **Real provider tests** (optional, may require API keys):
   - OpenRouter
   - Together.ai
   - Ollama (local)

### Manual Testing Checklist

- [ ] OpenRouter with Llama models
- [ ] Together.ai with various models
- [ ] Ollama with local models
- [ ] Groq with fast models
- [ ] Verify error messages are helpful
- [ ] Test both structured and unstructured outputs
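A mock server test could fake a provider that, like most OpenAI-compatible services, serves only `/v1/chat/completions`. Everything below is a self-contained sketch; the fake transport and helper are hypothetical, not part of the codebase:

```python
class FakeProvider:
    """Mimics an OpenAI-compatible provider without a /v1/responses route."""

    def post(self, path: str, payload: dict) -> dict:
        if path == "/v1/chat/completions":
            return {"choices": [{"message": {"content": "ok"}}]}
        return {"error": {"code": 404, "message": f"no such endpoint: {path}"}}


def call_llm(provider: FakeProvider, use_responses_api: bool) -> dict:
    """Route to whichever endpoint the client would select."""
    path = "/v1/responses" if use_responses_api else "/v1/chat/completions"
    return provider.post(path, {"model": "test", "messages": []})


# Chat Completions works; the Responses API 404s, as it would on OpenRouter
assert "choices" in call_llm(FakeProvider(), use_responses_api=False)
assert call_llm(FakeProvider(), use_responses_api=True)["error"]["code"] == 404
```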
## Summary of Issues

| Issue | Current State | Impact | Priority |
|-------|---------------|--------|----------|
| `/v1/responses` endpoint usage | Used by default `OpenAIClient` | **BREAKS** all non-OpenAI providers | High |
| `OpenAIGenericClient` not exported | Hidden from public API | Users can't easily use it | High |
| Poor error messages | Generic 404 errors | Confusing for users | High |
| No auto-detection | Must manually choose client | Poor DX | Medium |
| Limited docs | Only an Ollama example | Users don't know how to configure other providers | High |
| No structured output in Generic client | Only supports loose JSON | Reduced type safety for custom endpoints | Medium |
| No provider-specific helpers | Generic configuration only | More setup required | Low |
## Implementation Roadmap

### Phase 1: Quick Fixes (1-2 days)

1. Export `OpenAIGenericClient` in the public API
2. Add a documentation section for custom endpoints
3. Improve error messages in `OpenAIClient`
4. Add examples for OpenRouter and Together.ai

### Phase 2: Enhanced Support (3-5 days)

1. Add auto-detection logic to `LLMConfig`
2. Create a factory function for client selection
3. Enhance `OpenAIGenericClient` with better JSON Schema support
4. Add comprehensive tests

### Phase 3: Polish (2-3 days)

1. Create provider-specific client classes
2. Build compatibility matrix documentation
3. Add integration tests with real providers
4. Update all examples and guides
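Phase 1's "improve error messages" item could look like the sketch below. The exception type, helper name, and wording are invented for illustration; they are not the library's actual API:

```python
class EndpointNotSupportedError(RuntimeError):
    """Raised when a provider 404s on the Responses API (hypothetical)."""


def explain_404(base_url: str, status: int) -> None:
    """Turn a bare 404 from /v1/responses into an actionable error."""
    if status == 404:
        raise EndpointNotSupportedError(
            f"{base_url} returned 404 for /v1/responses. Most OpenAI-compatible "
            "providers only implement /v1/chat/completions - try "
            "OpenAIGenericClient instead of the default OpenAIClient."
        )


# A 404 from a non-OpenAI endpoint now points the user at the fix
try:
    explain_404("https://openrouter.ai/api/v1", 404)
except EndpointNotSupportedError as err:
    assert "OpenAIGenericClient" in str(err)
```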
## References

- OpenAI SDK v1.91.0+ Responses API: https://platform.openai.com/docs/api-reference/responses
- OpenAI Chat Completions API: https://platform.openai.com/docs/api-reference/chat
- OpenRouter API: https://openrouter.ai/docs
- Together.ai API: https://docs.together.ai/docs/openai-api-compatibility
- Ollama OpenAI compatibility: https://github.com/ollama/ollama/blob/main/docs/openai.md
## Contributing

If you're implementing these changes, please ensure:

1. All changes follow the repository guidelines in `AGENTS.md`
2. Run `make format` before committing
3. Run `make lint` and `make test` to verify changes
4. Update documentation for any new public APIs
5. Add examples demonstrating the new functionality
## Questions or Issues?

- Open an issue: https://github.com/getzep/graphiti/issues
- Discussion: https://github.com/getzep/graphiti/discussions
- Documentation: https://help.getzep.com/graphiti
178
DOCS/README.md
Normal file
@@ -0,0 +1,178 @@
# Graphiti Custom Build Documentation

This directory contains documentation for building and deploying a custom Graphiti MCP server with your local changes.

## Quick Navigation

### 🐳 Docker Build Setup
**[GitHub-DockerHub-Setup.md](./GitHub-DockerHub-Setup.md)**
- Complete guide for automated Docker builds via GitHub Actions
- Builds with YOUR local graphiti-core changes (not PyPI)
- Pushes to Docker Hub (`lvarming/graphiti-mcp`)
- **Start here** if you want to build custom Docker images

### 🖥️ LibreChat Integration
**[Librechat.setup.md](./Librechat.setup.md)**
- Complete setup guide for Graphiti MCP + LibreChat + Neo4j on Unraid
- Uses your custom Docker image from Docker Hub
- Step-by-step deployment instructions

### 🔌 OpenAI API Compatibility
**[OpenAI-Compatible-Endpoints.md](./OpenAI-Compatible-Endpoints.md)**
- Analysis of OpenAI-compatible endpoint support
- Explains the `/v1/responses` vs `/v1/chat/completions` issue
- Recommendations for supporting OpenRouter, Together.ai, Ollama, etc.

---
## Quick Start for Custom Builds

### 1. Set Up the GitHub → Docker Hub Pipeline

Follow **[GitHub-DockerHub-Setup.md](./GitHub-DockerHub-Setup.md)** to:
1. Create a Docker Hub access token
2. Add the token to your GitHub repository secrets
3. Push changes to trigger an automatic build

### 2. Deploy on Unraid

Follow **[Librechat.setup.md](./Librechat.setup.md)** to:
1. Configure the Neo4j connection
2. Deploy the Graphiti MCP container using `lvarming/graphiti-mcp:latest`
3. Integrate with LibreChat

---
## What's Different in This Setup?

### Standard Graphiti Deployment
```yaml
# Uses the official image (graphiti-core from PyPI)
image: zepai/knowledge-graph-mcp:standalone
```

### Your Custom Deployment
```yaml
# Uses YOUR image with YOUR changes
image: lvarming/graphiti-mcp:latest
```

The custom image includes:
- ✅ Your local `graphiti-core` changes
- ✅ Your MCP server modifications
- ✅ Both Neo4j and FalkorDB drivers
- ✅ Automatic builds on every push
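Dropping the custom image into a Compose file might look like this. The service layout and environment variable names are illustrative assumptions; check [Librechat.setup.md](./Librechat.setup.md) for the actual values:

```yaml
services:
  graphiti-mcp:
    image: lvarming/graphiti-mcp:latest
    ports:
      - "8000:8000"
    environment:
      NEO4J_URI: bolt://neo4j:7687   # assumed variable names
      NEO4J_USER: neo4j
      NEO4J_PASSWORD: ${NEO4J_PASSWORD}
      OPENAI_API_KEY: ${OPENAI_API_KEY}
```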
---

## Files in This Repository

### Workflow Files
- `.github/workflows/build-custom-mcp.yml` - GitHub Actions workflow for automated builds

### Docker Files
- `mcp_server/docker/Dockerfile.standalone` - Official standalone Dockerfile used by the custom build workflow

### Documentation
- `DOCS/GitHub-DockerHub-Setup.md` - Docker build setup guide
- `DOCS/Librechat.setup.md` - LibreChat integration guide
- `DOCS/OpenAI-Compatible-Endpoints.md` - API compatibility analysis
- `DOCS/README.md` - This file

---
## Workflow Overview

```mermaid
graph LR
    A[Make Changes] --> B[Git Push]
    B --> C[GitHub Actions]
    C --> D[Build Docker Image]
    D --> E[Push to Docker Hub]
    E --> F[Deploy on Unraid]
    F --> G[Use in LibreChat]
```

1. **Make Changes** - Modify `graphiti_core/` or `mcp_server/`
2. **Git Push** - Push to the `main` branch on GitHub
3. **GitHub Actions** - Triggered automatically
4. **Build Image** - Built from `Dockerfile.standalone` with your local code
5. **Push to Docker Hub** - Tagged as `lvarming/graphiti-mcp:latest`
6. **Deploy on Unraid** - Pull the latest image
7. **Use in LibreChat** - Configure the MCP server URL

---
## Version Information

Your builds include comprehensive version tracking:

```bash
docker inspect lvarming/graphiti-mcp:latest | jq '.[0].Config.Labels'
```

Returns something like:

```json
{
  "org.opencontainers.image.title": "Graphiti MCP Server (Custom Build)",
  "org.opencontainers.image.version": "1.0.0",
  "graphiti.core.version": "0.23.0",
  "graphiti.core.source": "local",
  "org.opencontainers.image.revision": "abc1234",
  "org.opencontainers.image.created": "2025-11-08T12:00:00Z"
}
```

---
## Key Benefits

### 🚀 Automated
- No manual Docker builds
- No need to push images yourself
- Triggered automatically on code changes

### 🔄 Reproducible
- Every build is traceable to a git commit
- Anyone can see exactly what was built
- Version labels include all build metadata

### 🏗️ Multi-Platform
- Builds for AMD64 and ARM64
- Works on Intel, AMD, and Apple Silicon
- The same image reference works everywhere

### 🎯 Clean Workflow
- Professional CI/CD pipeline
- Follows industry best practices
- Easy to maintain and extend

---
## Support

### Issues with Docker Builds?
See [GitHub-DockerHub-Setup.md - Troubleshooting](./GitHub-DockerHub-Setup.md#troubleshooting)

### Issues with Deployment?
See [Librechat.setup.md - Troubleshooting](./Librechat.setup.md#troubleshooting)

### Issues with API Compatibility?
See [OpenAI-Compatible-Endpoints.md](./OpenAI-Compatible-Endpoints.md)

---
## Contributing

If you make improvements to these docs or workflows:

1. Update the relevant documentation file
2. Test the changes
3. Commit and push
4. (Optional) Share with the community via a PR to upstream

---
## License

This documentation follows the same license as the Graphiti project.
mcp_server/docker/Dockerfile.custom (deleted)
@@ -1,83 +0,0 @@
# syntax=docker/dockerfile:1
# Custom Graphiti MCP Server Image with Local graphiti-core Changes
# This Dockerfile builds the MCP server using the LOCAL graphiti-core code
# instead of pulling from PyPI

FROM python:3.11-slim-bookworm

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    ca-certificates \
    git \
    && rm -rf /var/lib/apt/lists/*

# Install uv for Python package management
ADD https://astral.sh/uv/install.sh /uv-installer.sh
RUN sh /uv-installer.sh && rm /uv-installer.sh

# Add uv to PATH
ENV PATH="/root/.local/bin:${PATH}"

# Configure uv for optimal Docker usage
ENV UV_COMPILE_BYTECODE=1 \
    UV_LINK_MODE=copy \
    UV_PYTHON_DOWNLOADS=never \
    MCP_SERVER_HOST="0.0.0.0" \
    PYTHONUNBUFFERED=1

# Set up application directory
WORKDIR /app

# Copy the ENTIRE graphiti project (both core and mcp_server)
# This allows us to use the local graphiti-core
COPY . /app

# Build and install graphiti-core from local source first
RUN uv pip install --system "./[neo4j,falkordb]"

# Now set up the MCP server
WORKDIR /app/mcp_server

# Remove the local path override for graphiti-core in Docker builds
# and remove graphiti-core from dependencies since we already installed it
RUN sed -i '/\[tool\.uv\.sources\]/,/graphiti-core/d' pyproject.toml && \
    sed -i '/graphiti-core\[falkordb\]/d' pyproject.toml

# Install remaining MCP server dependencies (graphiti-core already installed from local source)
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --no-dev

# Accept version build arguments
ARG GRAPHITI_CORE_VERSION=local
ARG MCP_SERVER_VERSION=1.0.0
ARG BUILD_DATE
ARG VCS_REF

# Store version info
RUN echo "${GRAPHITI_CORE_VERSION}" > /app/mcp_server/.graphiti-core-version

# Create log directory
RUN mkdir -p /var/log/graphiti

# Add Docker labels with version information
LABEL org.opencontainers.image.title="Graphiti MCP Server (Custom Build)" \
      org.opencontainers.image.description="Custom Graphiti MCP server with local graphiti-core changes" \
      org.opencontainers.image.version="${MCP_SERVER_VERSION}" \
      org.opencontainers.image.created="${BUILD_DATE}" \
      org.opencontainers.image.revision="${VCS_REF}" \
      org.opencontainers.image.vendor="Custom Build" \
      org.opencontainers.image.source="https://github.com/lvarming/graphiti" \
      graphiti.core.version="${GRAPHITI_CORE_VERSION}" \
      graphiti.core.source="local"

# Expose MCP server port
EXPOSE 8000

# Health check - verify the MCP server is responding
HEALTHCHECK --interval=10s --timeout=5s --start-period=15s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Run the MCP server
CMD ["uv", "run", "main.py"]
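The two `sed` invocations in the Dockerfile above are its trickiest part: they strip the local-path override and the `graphiti-core` dependency out of `pyproject.toml` before `uv sync`. A rough Python equivalent of that intent (hypothetical helper, shown only to clarify what the build does):

```python
def strip_local_graphiti(pyproject: str) -> str:
    """Drop the [tool.uv.sources] override block and the graphiti-core dep."""
    out, skipping = [], False
    for line in pyproject.splitlines():
        if line.strip() == "[tool.uv.sources]":
            skipping = True          # start of the range the first sed deletes
            continue
        if skipping:
            if "graphiti-core" in line:
                skipping = False     # sed's range ends at the first match
            continue
        if "graphiti-core[falkordb]" in line:
            continue                 # second sed: drop the dependency line
        out.append(line)
    return "\n".join(out)


sample = """[project]
dependencies = ["graphiti-core[falkordb]>=0.23"]

[tool.uv.sources]
graphiti-core = { path = "../", editable = true }
"""
cleaned = strip_local_graphiti(sample)
assert "graphiti-core" not in cleaned
```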