# Graphiti MCP Server Integration Tests
This directory contains a comprehensive integration test suite for the Graphiti MCP Server using the official Python MCP SDK.
## Overview
The test suite is designed to thoroughly test all aspects of the Graphiti MCP server with special consideration for LLM inference latency and system performance.
## Test Organization

### Core Test Modules

- `test_comprehensive_integration.py` - Main integration test suite covering all MCP tools
- `test_async_operations.py` - Tests for concurrent operations and async patterns
- `test_stress_load.py` - Stress testing and load testing scenarios
- `test_fixtures.py` - Shared fixtures and test utilities
- `test_mcp_integration.py` - Original MCP integration tests
- `test_configuration.py` - Configuration loading and validation tests

### Test Categories
Tests are organized with pytest markers:
- `unit` - Fast unit tests without external dependencies
- `integration` - Tests requiring database and services
- `slow` - Long-running tests (stress/load tests)
- `requires_neo4j` - Tests requiring Neo4j
- `requires_falkordb` - Tests requiring FalkorDB
- `requires_openai` - Tests requiring OpenAI API key
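These markers can be combined with pytest's standard `-m` expression syntax when invoking pytest directly, assuming the markers are registered in `pytest.ini` as described later in this README:

```bash
# Run only fast unit tests
pytest tests/ -m unit

# Run integration tests but skip the slow stress/load scenarios
pytest tests/ -m "integration and not slow"

# Run everything that does not require an OpenAI API key
pytest tests/ -m "not requires_openai"
```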
## Installation

```bash
# Install test dependencies
uv add --dev pytest pytest-asyncio pytest-timeout pytest-xdist faker psutil

# Install MCP SDK
uv add mcp
```
## Running Tests

### Quick Start

```bash
# Run smoke tests (quick validation)
python tests/run_tests.py smoke

# Run integration tests with mock LLM
python tests/run_tests.py integration --mock-llm

# Run all tests
python tests/run_tests.py all
```
### Test Runner Options

```bash
python tests/run_tests.py [suite] [options]
```

**Suites:**

- `unit` - Unit tests only
- `integration` - Integration tests
- `comprehensive` - Comprehensive integration suite
- `async` - Async operation tests
- `stress` - Stress and load tests
- `smoke` - Quick smoke tests
- `all` - All tests

**Options:**

- `--database` - Database backend (neo4j, falkordb)
- `--mock-llm` - Use mock LLM for faster testing
- `--parallel N` - Run tests in parallel with N workers
- `--coverage` - Generate coverage report
- `--skip-slow` - Skip slow tests
- `--timeout N` - Test timeout in seconds
- `--check-only` - Only check prerequisites
### Examples

```bash
# Quick smoke test with FalkorDB (default)
python tests/run_tests.py smoke

# Full integration test with Neo4j
python tests/run_tests.py integration --database neo4j

# Stress testing with parallel execution
python tests/run_tests.py stress --parallel 4

# Run with coverage
python tests/run_tests.py all --coverage

# Check prerequisites only
python tests/run_tests.py all --check-only
```
## Test Coverage

### Core Operations
- Server initialization and tool discovery
- Adding memories (text, JSON, message)
- Episode queue management
- Search operations (semantic, hybrid)
- Episode retrieval and deletion
- Entity and edge operations
### Async Operations
- Concurrent operations
- Queue management
- Sequential processing within groups
- Parallel processing across groups
### Performance Testing
- Latency measurement
- Throughput testing
- Batch processing
- Resource usage monitoring
### Stress Testing
- Sustained load scenarios
- Spike load handling
- Memory leak detection
- Connection pool exhaustion
- Rate limit handling
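As a rough illustration of the spike-load pattern, a test can fire a burst of concurrent `add_memory` calls through the MCP client session and measure success rate and throughput. This is only a sketch: the exact `add_memory` parameters and error-reporting conventions depend on the server, so treat the argument names below as assumptions.

```python
import asyncio
import time

async def spike_load(session, group_id, burst_size=50):
    """Fire a burst of concurrent add_memory calls and summarize the outcome."""
    async def one_call(i):
        # Tool arguments are illustrative; check the server's add_memory schema.
        return await session.call_tool('add_memory', {
            'name': f'spike-{i}',
            'episode_body': f'Spike load episode {i}',
            'group_id': group_id,
            'source': 'text',
        })

    start = time.monotonic()
    results = await asyncio.gather(
        *(one_call(i) for i in range(burst_size)),
        return_exceptions=True,
    )
    elapsed = time.monotonic() - start

    # Count both raised exceptions and tool-level errors reported via isError.
    failures = sum(
        1 for r in results
        if isinstance(r, Exception) or getattr(r, 'isError', False)
    )
    return {
        'total': burst_size,
        'success_rate': (burst_size - failures) / burst_size,
        'throughput_ops_per_s': burst_size / elapsed,
    }
```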
## Configuration

### Environment Variables
```bash
# Database configuration
export DATABASE_PROVIDER=falkordb  # or neo4j
export NEO4J_URI=bolt://localhost:7687
export NEO4J_USER=neo4j
export NEO4J_PASSWORD=graphiti
export FALKORDB_URI=redis://localhost:6379

# LLM configuration
export OPENAI_API_KEY=your_key_here  # or use --mock-llm

# Test configuration
export TEST_MODE=true
export LOG_LEVEL=INFO
```
### pytest.ini Configuration

The `pytest.ini` file configures:
- Test discovery patterns
- Async mode settings
- Test markers
- Timeout settings
- Output formatting
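For orientation, a minimal sketch of what such a `pytest.ini` might contain; the actual file in this directory is authoritative, and the values below (async mode, timeout, marker descriptions) are illustrative only:

```ini
[pytest]
testpaths = .
python_files = test_*.py
asyncio_mode = auto
timeout = 300
markers =
    unit: fast unit tests without external dependencies
    integration: tests requiring database and services
    slow: long-running stress/load tests
    requires_neo4j: tests requiring Neo4j
    requires_falkordb: tests requiring FalkorDB
    requires_openai: tests requiring an OpenAI API key
```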
## Test Fixtures

### Data Generation
The test suite includes comprehensive data generators:
```python
from test_fixtures import TestDataGenerator

# Generate test data
company = TestDataGenerator.generate_company_profile()
conversation = TestDataGenerator.generate_conversation()
document = TestDataGenerator.generate_technical_document()
```
### Test Client
Simplified client creation:
```python
from test_fixtures import graphiti_test_client

async with graphiti_test_client(database="falkordb") as (session, group_id):
    # Use session for testing
    result = await session.call_tool('add_memory', {...})
```
## Performance Considerations

### LLM Latency Management
The tests account for LLM inference latency through:
- **Configurable timeouts** - Different timeouts for different operations
- **Mock LLM option** - Fast testing without API calls
- **Intelligent polling** - Adaptive waiting for episode processing (sketched below)
- **Batch operations** - Testing efficiency of batched requests
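The adaptive-waiting idea can be expressed as a small polling loop that backs off between checks rather than sleeping for a fixed worst-case LLM latency. The sketch below is an assumption about shape, not the actual helper in `test_fixtures.py`: the `get_episodes` tool name, its arguments, and the response format may differ.

```python
import asyncio

async def wait_for_episodes(session, group_id, expected, timeout=120.0):
    """Poll until `expected` episodes are processed, backing off between checks."""
    delay, elapsed = 0.5, 0.0
    while elapsed < timeout:
        # Tool name and arguments are illustrative; adjust to the server's schema.
        result = await session.call_tool(
            'get_episodes', {'group_id': group_id, 'last_n': expected}
        )
        episodes = result.content  # shape depends on the server's response format
        if episodes and len(episodes) >= expected:
            return episodes
        await asyncio.sleep(delay)
        elapsed += delay
        delay = min(delay * 1.5, 5.0)  # exponential backoff, capped at 5 seconds
    raise TimeoutError(f'Fewer than {expected} episodes processed after {timeout}s')
```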
### Resource Management
- Memory leak detection
- Connection pool monitoring
- Resource usage tracking
- Graceful degradation testing
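`psutil` (installed as a test dependency above) makes the memory-leak check straightforward: sample the process RSS before and after repeated batches of work and flag unexpected growth. A minimal sketch, with a threshold that is purely illustrative:

```python
import gc
import os

import psutil


def rss_mb() -> float:
    """Resident set size of the current process in MiB."""
    return psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)


async def check_for_leak(run_batch, iterations=5, allowed_growth_mb=50.0):
    """Run a workload repeatedly and assert RSS does not keep climbing."""
    gc.collect()
    baseline = rss_mb()
    for _ in range(iterations):
        await run_batch()  # e.g. a coroutine issuing many add_memory calls
        gc.collect()
    growth = rss_mb() - baseline
    assert growth < allowed_growth_mb, f'Possible leak: RSS grew by {growth:.1f} MiB'
```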
## CI/CD Integration

### GitHub Actions
```yaml
name: MCP Integration Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      neo4j:
        image: neo4j:5.26
        env:
          NEO4J_AUTH: neo4j/graphiti
        ports:
          - 7687:7687
    steps:
      - uses: actions/checkout@v2
      - name: Install dependencies
        run: |
          pip install uv
          uv sync --extra dev
      - name: Run smoke tests
        run: python tests/run_tests.py smoke --mock-llm
      - name: Run integration tests
        run: python tests/run_tests.py integration --database neo4j
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```
## Troubleshooting

### Common Issues
1. **Database connection failures**

   ```bash
   # Check Neo4j
   curl http://localhost:7474

   # Check FalkorDB
   redis-cli ping
   ```

2. **API key issues**

   ```bash
   # Use mock LLM for testing without API key
   python tests/run_tests.py all --mock-llm
   ```

3. **Timeout errors**

   ```bash
   # Increase timeout for slow systems
   python tests/run_tests.py integration --timeout 600
   ```

4. **Memory issues**

   ```bash
   # Skip stress tests on low-memory systems
   python tests/run_tests.py all --skip-slow
   ```
## Test Reports

### Performance Report
After running performance tests:
```python
from test_fixtures import PerformanceBenchmark

benchmark = PerformanceBenchmark()
# ... run tests ...
print(benchmark.report())
```
### Load Test Report
Stress tests generate detailed reports:
```
LOAD TEST REPORT
================
Test Run 1:
  Total Operations: 100
  Success Rate: 95.0%
  Throughput: 12.5 ops/s
  Latency (avg/p50/p95/p99/max): 0.8/0.7/1.5/2.1/3.2s
```
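The latency figures in the report are simple order statistics over the recorded per-operation latencies. For reference, they can be computed with the standard library as shown below; the function and field names are illustrative rather than the actual reporting code.

```python
import statistics


def latency_summary(latencies: list[float]) -> dict[str, float]:
    """Summarize per-operation latencies (seconds) as avg/p50/p95/p99/max."""
    ordered = sorted(latencies)
    # statistics.quantiles with n=100 yields 99 cut points: index 94 is p95, 98 is p99.
    cuts = statistics.quantiles(ordered, n=100, method='inclusive')
    return {
        'avg': statistics.mean(ordered),
        'p50': statistics.median(ordered),
        'p95': cuts[94],
        'p99': cuts[98],
        'max': ordered[-1],
    }
```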
## Contributing
When adding new tests:
- Use appropriate pytest markers
- Include docstrings explaining test purpose
- Use fixtures for common operations
- Consider LLM latency in test design
- Add timeout handling for long operations
- Include performance metrics where relevant
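A test skeleton following these guidelines might look like the sketch below. The marker names match the list above, but the `graphiti_session` fixture, the tool names, and their arguments are hypothetical; use the fixtures in `conftest.py` / `test_fixtures.py` and the server's actual tool schemas.

```python
import pytest


@pytest.mark.asyncio
@pytest.mark.integration
@pytest.mark.requires_falkordb
@pytest.mark.timeout(300)  # generous timeout to absorb LLM inference latency
async def test_add_and_search_memory(graphiti_session):
    """Add a small episode and verify it becomes searchable."""
    session, group_id = graphiti_session

    # Tool arguments are illustrative; check the server's add_memory schema.
    await session.call_tool('add_memory', {
        'name': 'contributing-example',
        'episode_body': 'Acme Corp hired a new CTO in March.',
        'group_id': group_id,
        'source': 'text',
    })

    result = await session.call_tool('search_memory_facts', {
        'query': 'Who did Acme Corp hire?',
        'group_id': group_id,
    })
    assert result.content, 'expected at least one search result'
```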
## License
See main project LICENSE file.