# Graphiti MCP Server Integration Tests
This directory contains a comprehensive integration test suite for the Graphiti MCP Server using the official Python MCP SDK.
## Overview
The test suite is designed to thoroughly test all aspects of the Graphiti MCP server with special consideration for LLM inference latency and system performance.
## Test Organization

### Core Test Modules
- `test_inmemory_example.py` - Recommended in-memory tests using the FastMCP `Client` (fast, reliable)
- `test_comprehensive_integration.py` - Integration test suite using a subprocess (slower, may have env issues)
- `test_async_operations.py` - Tests for concurrent operations and async patterns
- `test_stress_load.py` - Stress testing and load testing scenarios
- `test_fixtures.py` - Shared fixtures and test utilities
- `test_mcp_integration.py` - Original MCP integration tests
- `test_configuration.py` - Configuration loading and validation tests
### Test Categories
Tests are organized with pytest markers:
- `unit` - Fast unit tests without external dependencies
- `integration` - Tests requiring database and services
- `slow` - Long-running tests (stress/load tests)
- `requires_neo4j` - Tests requiring Neo4j
- `requires_falkordb` - Tests requiring FalkorDB
- `requires_openai` - Tests requiring an OpenAI API key
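
Markers stack on individual tests and can be selected with pytest's `-m` expressions. The test below is illustrative (name, body, and fixture are not taken from the suite):

```python
import pytest

@pytest.mark.integration
@pytest.mark.requires_neo4j
async def test_neo4j_roundtrip(mcp_client):
    tools = await mcp_client.list_tools()
    assert tools

# Select marked subsets at the command line, e.g.:
#   uv run pytest -m "integration and not slow"
```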
## Installation
```bash
# Install test dependencies
uv add --dev pytest pytest-asyncio pytest-timeout pytest-xdist faker psutil

# Install MCP SDK (fastmcp is already a dependency of graphiti-core)
uv add mcp
```
**Note on FastMCP:** The `fastmcp` package (v2.13.3) is a dependency of `graphiti-core` and provides the `Client` class used for testing. The MCP server uses `mcp.server.fastmcp.FastMCP`, which is bundled in the official `mcp` package.
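
In import terms, the split looks like this:

```python
# Test-side client, from the standalone fastmcp package:
from fastmcp.client import Client

# Server-side framework, bundled in the official mcp package:
from mcp.server.fastmcp import FastMCP
```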
## Running Tests

### Quick Start (Recommended)
The fastest and most reliable way to test is using the in-memory tests:
```bash
# Run in-memory tests (fast, ~1 second)
uv run pytest tests/test_inmemory_example.py -v -s
```
This uses FastMCP's recommended testing pattern with in-memory transport, avoiding subprocess issues.
### Alternative: Subprocess-based Tests
The original test runner spawns the server as a subprocess. These tests may run into environment variable issues:
```bash
# Run smoke tests (may time out due to subprocess issues)
python tests/run_tests.py smoke

# Run integration tests with mock LLM
python tests/run_tests.py integration --mock-llm

# Run all tests
python tests/run_tests.py all
```
> **Note**: The subprocess-based tests use `StdioServerParameters`, which can have environment variable isolation issues. If you encounter `ValueError: invalid literal for int()` errors related to `SEMAPHORE_LIMIT` or `MAX_REFLEXION_ITERATIONS`, use the in-memory tests instead.
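
If you must use the subprocess path, one possible mitigation (a sketch, not part of the suite; the command and arguments are illustrative) is to hand `StdioServerParameters` an explicitly sanitized environment:

```python
import os
from mcp import StdioServerParameters

# Drop empty-string values so the spawned server never sees them, then
# pin the integer-valued settings explicitly.
clean_env = {k: v for k, v in os.environ.items() if v != ''}
clean_env.setdefault('SEMAPHORE_LIMIT', '10')
clean_env.setdefault('MAX_REFLEXION_ITERATIONS', '0')

server_params = StdioServerParameters(
    command='uv',
    args=['run', 'graphiti_mcp_server.py'],
    env=clean_env,
)
```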
### Test Runner Options
```
python tests/run_tests.py [suite] [options]

Suites:
  unit           - Unit tests only
  integration    - Integration tests
  comprehensive  - Comprehensive integration suite
  async          - Async operation tests
  stress         - Stress and load tests
  smoke          - Quick smoke tests
  all            - All tests

Options:
  --database     - Database backend (neo4j, falkordb)
  --mock-llm     - Use mock LLM for faster testing
  --parallel N   - Run tests in parallel with N workers
  --coverage     - Generate coverage report
  --skip-slow    - Skip slow tests
  --timeout N    - Test timeout in seconds
  --check-only   - Only check prerequisites
```
### Examples
```bash
# Quick smoke test with FalkorDB (default)
python tests/run_tests.py smoke

# Full integration test with Neo4j
python tests/run_tests.py integration --database neo4j

# Stress testing with parallel execution
python tests/run_tests.py stress --parallel 4

# Run with coverage
python tests/run_tests.py all --coverage

# Check prerequisites only
python tests/run_tests.py all --check-only
```
## Test Coverage

### Core Operations
- Server initialization and tool discovery
- Adding memories (text, JSON, message)
- Episode queue management
- Search operations (semantic, hybrid)
- Episode retrieval and deletion
- Entity and edge operations
### Async Operations
- Concurrent operations
- Queue management
- Sequential processing within groups
- Parallel processing across groups
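
The sketch below illustrates the intended semantics (the `add_memory` argument names are assumptions about the exact schema; `mcp_client` is the in-memory fixture shown later in this README):

```python
import asyncio

async def add_to_group(mcp_client, group_id: str, texts: list[str]):
    # Within one group_id, episodes are awaited one at a time: the server
    # processes a group's queue sequentially.
    for text in texts:
        await mcp_client.call_tool('add_memory', {
            'name': f'{group_id}-episode',
            'episode_body': text,
            'group_id': group_id,
        })

async def test_parallel_across_groups(mcp_client):
    # Distinct group_ids can be driven concurrently.
    await asyncio.gather(
        add_to_group(mcp_client, 'group-a', ['fact one', 'fact two']),
        add_to_group(mcp_client, 'group-b', ['fact three']),
    )
```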
### Performance Testing
- Latency measurement
- Throughput testing
- Batch processing
- Resource usage monitoring
### Stress Testing
- Sustained load scenarios
- Spike load handling (see the sketch after this list)
- Memory leak detection
- Connection pool exhaustion
- Rate limit handling
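
As a sketch of the spike-load pattern (tool arguments and counts are illustrative, not the suite's actual code):

```python
import asyncio
import time

async def spike(mcp_client, n: int = 50) -> None:
    # Fire n concurrent searches at once and report the success rate,
    # mirroring the load-report format shown later in this README.
    start = time.monotonic()
    results = await asyncio.gather(
        *(mcp_client.call_tool('search_nodes', {'query': f'load probe {i}'})
          for i in range(n)),
        return_exceptions=True,
    )
    failures = sum(isinstance(r, Exception) for r in results)
    elapsed = time.monotonic() - start
    print(f'{n - failures}/{n} succeeded, {n / elapsed:.1f} ops/s')
```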
## Configuration

### Environment Variables
```bash
# Database configuration
export DATABASE_PROVIDER=falkordb   # or neo4j
export NEO4J_URI=bolt://localhost:7687
export NEO4J_USER=neo4j
export NEO4J_PASSWORD=graphiti
export FALKORDB_URI=redis://localhost:6379

# LLM configuration
export OPENAI_API_KEY=your_key_here  # or use --mock-llm

# Test configuration
export TEST_MODE=true
export LOG_LEVEL=INFO
```
### pytest.ini Configuration
The `pytest.ini` file configures:
- Test discovery patterns
- Async mode settings
- Test markers
- Timeout settings
- Output formatting
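
As an illustration only (values assumed, not copied from the actual file), such a configuration might look like:

```ini
[pytest]
testpaths = .
python_files = test_*.py
asyncio_mode = auto
timeout = 300
markers =
    unit: fast unit tests without external dependencies
    integration: tests requiring database and services
    slow: long-running stress/load tests
    requires_neo4j: tests requiring Neo4j
    requires_falkordb: tests requiring FalkorDB
    requires_openai: tests requiring an OpenAI API key
```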
## In-Memory Testing Pattern (Recommended)
The `test_inmemory_example.py` file demonstrates FastMCP's recommended testing approach:
```python
import os
import sys
from pathlib import Path

import pytest
from fastmcp.client import Client

# Set env vars BEFORE importing graphiti modules
def set_env_if_empty(key: str, value: str):
    if not os.environ.get(key):
        os.environ[key] = value

set_env_if_empty('SEMAPHORE_LIMIT', '10')
set_env_if_empty('MAX_REFLEXION_ITERATIONS', '0')
set_env_if_empty('FALKORDB_URI', 'redis://localhost:6379')

# Import after env vars are set
from graphiti_mcp_server import mcp

@pytest.fixture
async def mcp_client():
    """In-memory MCP client - no subprocess needed."""
    async with Client(transport=mcp) as client:
        yield client

async def test_list_tools(mcp_client: Client):
    tools = await mcp_client.list_tools()
    assert len(tools) > 0
```
### Benefits of In-Memory Testing
| Aspect | In-Memory | Subprocess |
|---|---|---|
| Speed | ~1 second | 10+ minutes |
| Reliability | High | Environment issues |
| Debugging | Easy | Difficult |
| Resource Usage | Low | High |
## Available MCP Tools
The Graphiti MCP server exposes these tools:
- `add_memory` - Add episodes to the knowledge graph
- `search_nodes` - Search for entity nodes
- `search_memory_facts` - Search for facts/relationships
- `delete_entity_edge` - Delete an edge
- `delete_episode` - Delete an episode
- `get_entity_edge` - Get an edge by UUID
- `get_episodes` - Get recent episodes
- `clear_graph` - Clear all data
- `get_status` - Get server status
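
For example, calling a couple of these through the in-memory client fixture (the argument names follow patterns shown elsewhere in this README and are assumptions about the exact tool schema):

```python
async def test_add_then_status(mcp_client):
    # Queue an episode for ingestion...
    await mcp_client.call_tool('add_memory', {
        'name': 'customer-call',
        'episode_body': 'Acme Corp upgraded to the enterprise plan.',
        'source': 'text',
        'source_description': 'CRM note',
        'group_id': 'acme',
    })
    # ...then confirm the server is healthy.
    status = await mcp_client.call_tool('get_status', {})
    assert status is not None
```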
## Test Fixtures

### Data Generation
The test suite includes comprehensive data generators:
```python
from test_fixtures import TestDataGenerator

# Generate test data
company = TestDataGenerator.generate_company_profile()
conversation = TestDataGenerator.generate_conversation()
document = TestDataGenerator.generate_technical_document()
```
### Test Client
Simplified client creation:
```python
from test_fixtures import graphiti_test_client

async with graphiti_test_client(database="falkordb") as (session, group_id):
    # Use session for testing
    result = await session.call_tool('add_memory', {...})
```
## Performance Considerations

### LLM Latency Management
The tests account for LLM inference latency through:
- **Configurable timeouts** - Different timeouts for different operations
- **Mock LLM option** - Fast testing without API calls
- **Intelligent polling** - Adaptive waiting for episode processing (sketched just below)
- **Batch operations** - Testing efficiency of batched requests
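
A sketch of adaptive polling (the helper name, tool arguments, and timings are illustrative): poll `get_episodes` with exponential backoff instead of sleeping for a fixed worst-case interval.

```python
import asyncio

async def wait_for_episode(mcp_client, group_id: str, name: str,
                           timeout: float = 60.0) -> bool:
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    delay = 0.5
    while loop.time() < deadline:
        result = await mcp_client.call_tool(
            'get_episodes', {'group_id': group_id, 'last_n': 20})
        if name in str(result):  # crude containment check, fine for a sketch
            return True
        await asyncio.sleep(delay)
        delay = min(delay * 2, 5.0)  # exponential backoff, capped at 5s
    return False
```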
### Resource Management
- Memory leak detection (see the sketch after this list)
- Connection pool monitoring
- Resource usage tracking
- Graceful degradation testing
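
A sketch of the memory-growth check (thresholds and iteration counts are illustrative; `psutil` is already among the dev dependencies):

```python
import gc
import psutil

def rss_mb() -> float:
    """Resident set size of the current process, in MiB."""
    return psutil.Process().memory_info().rss / (1024 * 1024)

async def test_no_gross_memory_growth(mcp_client):
    gc.collect()
    before = rss_mb()
    for i in range(100):
        await mcp_client.call_tool('search_nodes', {'query': f'probe {i}'})
    gc.collect()
    # Generous bound: catches gross leaks, not byte-level growth.
    assert rss_mb() - before < 50
```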
## CI/CD Integration

### GitHub Actions
```yaml
name: MCP Integration Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      neo4j:
        image: neo4j:5.26
        env:
          NEO4J_AUTH: neo4j/graphiti
        ports:
          - 7687:7687
    steps:
      - uses: actions/checkout@v2
      - name: Install dependencies
        run: |
          pip install uv
          uv sync --extra dev
      - name: Run smoke tests
        run: python tests/run_tests.py smoke --mock-llm
      - name: Run integration tests
        run: python tests/run_tests.py integration --database neo4j
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```
## Troubleshooting

### Common Issues
1. **Database connection failures**

   ```bash
   # Check Neo4j
   curl http://localhost:7474

   # Check FalkorDB
   redis-cli ping
   ```

2. **API key issues**

   ```bash
   # Use mock LLM for testing without an API key
   python tests/run_tests.py all --mock-llm
   ```

3. **Timeout errors**

   ```bash
   # Increase the timeout for slow systems
   python tests/run_tests.py integration --timeout 600
   ```

4. **Memory issues**

   ```bash
   # Skip stress tests on low-memory systems
   python tests/run_tests.py all --skip-slow
   ```

5. **Environment variable errors** (`ValueError: invalid literal for int()`)

   This occurs when `SEMAPHORE_LIMIT` or `MAX_REFLEXION_ITERATIONS` is set to an empty string.

   ```bash
   # Solution 1: Use in-memory tests (recommended)
   uv run pytest tests/test_inmemory_example.py -v

   # Solution 2: Set env vars explicitly
   SEMAPHORE_LIMIT=10 MAX_REFLEXION_ITERATIONS=0 python tests/run_tests.py smoke
   ```

   Root cause: `graphiti_core/helpers.py` parses these environment variables at import time. If they are set to empty strings (rather than unset), `int('')` fails.
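
   A minimal demonstration of the failure mode (the default value `'20'` is illustrative):

   ```python
   import os

   # An empty string counts as "set", so os.environ.get() does not fall
   # back to its default, and int('') raises ValueError at import time.
   os.environ['SEMAPHORE_LIMIT'] = ''
   raw = os.environ.get('SEMAPHORE_LIMIT', '20')  # returns '', not '20'
   try:
       limit = int(raw)
   except ValueError as exc:
       print(f'ValueError: {exc}')  # invalid literal for int() with base 10: ''
   ```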
## Test Reports

### Performance Report
After running performance tests:
```python
from test_fixtures import PerformanceBenchmark

benchmark = PerformanceBenchmark()
# ... run tests ...
print(benchmark.report())
```
### Load Test Report
Stress tests generate detailed reports:
```
LOAD TEST REPORT
================

Test Run 1:
  Total Operations: 100
  Success Rate: 95.0%
  Throughput: 12.5 ops/s
  Latency (avg/p50/p95/p99/max): 0.8/0.7/1.5/2.1/3.2s
```
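
For reference, the latency line can be computed from raw per-operation samples roughly like this (the helper is illustrative, not the suite's actual reporting code):

```python
import statistics

def latency_summary(samples: list[float]) -> str:
    # quantiles(n=100) yields 99 cut points: index 49 ~ p50, 94 ~ p95, 98 ~ p99
    qs = statistics.quantiles(samples, n=100)
    return (f'Latency (avg/p50/p95/p99/max): '
            f'{statistics.mean(samples):.1f}/{qs[49]:.1f}/{qs[94]:.1f}/'
            f'{qs[98]:.1f}/{max(samples):.1f}s')
```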
## Contributing
When adding new tests:
- Use appropriate pytest markers
- Include docstrings explaining test purpose
- Use fixtures for common operations
- Consider LLM latency in test design
- Add timeout handling for long operations
- Include performance metrics where relevant
## License
See main project LICENSE file.