Graphiti MCP Server Integration Tests

This directory contains a comprehensive integration test suite for the Graphiti MCP Server using the official Python MCP SDK.

Overview

The test suite exercises all aspects of the Graphiti MCP server, with particular attention to LLM inference latency and overall system performance.

Test Organization

Core Test Modules

  • test_comprehensive_integration.py - Main integration test suite covering all MCP tools
  • test_async_operations.py - Tests for concurrent operations and async patterns
  • test_stress_load.py - Stress testing and load testing scenarios
  • test_fixtures.py - Shared fixtures and test utilities
  • test_mcp_integration.py - Original MCP integration tests
  • test_configuration.py - Configuration loading and validation tests

Test Categories

Tests are organized with pytest markers:

  • unit - Fast unit tests without external dependencies
  • integration - Tests requiring database and services
  • slow - Long-running tests (stress/load tests)
  • requires_neo4j - Tests requiring Neo4j
  • requires_falkordb - Tests requiring FalkorDB
  • requires_kuzu - Tests requiring KuzuDB
  • requires_openai - Tests requiring OpenAI API key
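
For example, subsets of the suite can be selected with pytest's -m marker expressions; the commands below are illustrative:

# Run only fast unit tests
pytest tests/ -m unit

# Run integration tests that do not require Neo4j
pytest tests/ -m "integration and not requires_neo4j"

# Skip slow stress/load tests
pytest tests/ -m "not slow"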

Installation

# Install test dependencies
uv add --dev pytest pytest-asyncio pytest-timeout pytest-xdist faker psutil

# Install MCP SDK
uv add mcp

Running Tests

Quick Start

# Run smoke tests (quick validation)
python tests/run_tests.py smoke

# Run integration tests with mock LLM
python tests/run_tests.py integration --mock-llm

# Run all tests
python tests/run_tests.py all

Test Runner Options

python tests/run_tests.py [suite] [options]

Suites:
  unit          - Unit tests only
  integration   - Integration tests
  comprehensive - Comprehensive integration suite
  async         - Async operation tests
  stress        - Stress and load tests
  smoke         - Quick smoke tests
  all           - All tests

Options:
  --database    - Database backend (neo4j, falkordb, kuzu)
  --mock-llm    - Use mock LLM for faster testing
  --parallel N  - Run tests in parallel with N workers
  --coverage    - Generate coverage report
  --skip-slow   - Skip slow tests
  --timeout N   - Test timeout in seconds
  --check-only  - Only check prerequisites

Examples

# Quick smoke test with KuzuDB
python tests/run_tests.py smoke --database kuzu

# Full integration test with Neo4j
python tests/run_tests.py integration --database neo4j

# Stress testing with parallel execution
python tests/run_tests.py stress --parallel 4

# Run with coverage
python tests/run_tests.py all --coverage

# Check prerequisites only
python tests/run_tests.py all --check-only

Test Coverage

Core Operations

  • Server initialization and tool discovery
  • Adding memories (text, JSON, message)
  • Episode queue management
  • Search operations (semantic, hybrid)
  • Episode retrieval and deletion
  • Entity and edge operations

Async Operations

  • Concurrent operations
  • Queue management
  • Sequential processing within groups
  • Parallel processing across groups

Performance Testing

  • Latency measurement
  • Throughput testing
  • Batch processing
  • Resource usage monitoring

Stress Testing

  • Sustained load scenarios
  • Spike load handling
  • Memory leak detection
  • Connection pool exhaustion
  • Rate limit handling

Configuration

Environment Variables

# Database configuration
export DATABASE_PROVIDER=kuzu  # or neo4j, falkordb
export NEO4J_URI=bolt://localhost:7687
export NEO4J_USER=neo4j
export NEO4J_PASSWORD=graphiti
export FALKORDB_URI=redis://localhost:6379
export KUZU_PATH=./test_kuzu.db

# LLM configuration
export OPENAI_API_KEY=your_key_here  # or use --mock-llm

# Test configuration
export TEST_MODE=true
export LOG_LEVEL=INFO

pytest.ini Configuration

The pytest.ini file configures:

  • Test discovery patterns
  • Async mode settings
  • Test markers
  • Timeout settings
  • Output formatting
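
As a rough sketch (the exact values below are assumptions; consult pytest.ini in this directory for the authoritative settings), the configuration looks roughly like:

[pytest]
testpaths = .
python_files = test_*.py
asyncio_mode = auto
timeout = 300
addopts = -v --tb=short
markers =
    unit: Fast unit tests without external dependencies
    integration: Tests requiring database and services
    slow: Long-running stress/load tests
    requires_neo4j: Tests requiring Neo4j
    requires_falkordb: Tests requiring FalkorDB
    requires_kuzu: Tests requiring KuzuDB
    requires_openai: Tests requiring an OpenAI API key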

Test Fixtures

Data Generation

The test suite includes comprehensive data generators:

from test_fixtures import TestDataGenerator

# Generate test data
company = TestDataGenerator.generate_company_profile()
conversation = TestDataGenerator.generate_conversation()
document = TestDataGenerator.generate_technical_document()

Test Client

Simplified client creation:

from test_fixtures import graphiti_test_client

async with graphiti_test_client(database="kuzu") as (session, group_id):
    # Use session for testing
    result = await session.call_tool('add_memory', {...})
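
A fuller call might look like the sketch below; the argument names are assumptions for illustration and should be checked against the actual add_memory tool schema:

async with graphiti_test_client(database="kuzu") as (session, group_id):
    # Hypothetical payload; verify field names against the tool's schema
    result = await session.call_tool('add_memory', {
        'name': 'customer_note',
        'episode_body': 'Acme Corp upgraded to the enterprise plan.',
        'source': 'text',
        'group_id': group_id,
    })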

Performance Considerations

LLM Latency Management

The tests account for LLM inference latency through:

  1. Configurable timeouts - Different timeouts for different operations
  2. Mock LLM option - Fast testing without API calls
  3. Intelligent polling - Adaptive waiting for episode processing (sketched below)
  4. Batch operations - Testing efficiency of batched requests
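
A minimal sketch of the adaptive-polling idea (the helper and the get_episodes payload shown here are illustrative, not the actual fixture API):

import asyncio
import time

async def wait_for_episode_processing(session, group_id, timeout=60.0):
    """Poll until processed episodes are available, backing off between attempts."""
    delay = 1.0
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = await session.call_tool('get_episodes', {'group_id': group_id})
        if result.content:  # episodes have been processed
            return result
        await asyncio.sleep(delay)
        delay = min(delay * 1.5, 10.0)  # grow the wait between polls, capped at 10s
    raise TimeoutError(f'No processed episodes for {group_id} within {timeout}s')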

Resource Management

  • Memory leak detection (sketched after this list)
  • Connection pool monitoring
  • Resource usage tracking
  • Graceful degradation testing
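
Memory-leak detection boils down to sampling process memory before and after a sustained run; a minimal sketch using psutil (installed as a dev dependency above) might look like this, though the actual helpers in test_fixtures may differ:

import gc
import psutil

def rss_mb() -> float:
    """Resident set size of the current process, in MiB."""
    return psutil.Process().memory_info().rss / (1024 * 1024)

# Illustrative usage inside a stress test:
#   baseline = rss_mb()
#   ... run many add_memory operations ...
#   gc.collect()
#   assert rss_mb() - baseline < 50, "possible memory leak"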

CI/CD Integration

GitHub Actions

name: MCP Integration Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest

    services:
      neo4j:
        image: neo4j:5.26
        env:
          NEO4J_AUTH: neo4j/graphiti
        ports:
          - 7687:7687

    steps:
      - uses: actions/checkout@v2

      - name: Install dependencies
        run: |
          pip install uv
          uv sync --extra dev

      - name: Run smoke tests
        run: python tests/run_tests.py smoke --mock-llm

      - name: Run integration tests
        run: python tests/run_tests.py integration --database neo4j
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

Troubleshooting

Common Issues

  1. Database connection failures

    # Check Neo4j
    curl http://localhost:7474
    
    # Check FalkorDB
    redis-cli ping
    
  2. API key issues

    # Use mock LLM for testing without API key
    python tests/run_tests.py all --mock-llm
    
  3. Timeout errors

    # Increase timeout for slow systems
    python tests/run_tests.py integration --timeout 600
    
  4. Memory issues

    # Skip stress tests on low-memory systems
    python tests/run_tests.py all --skip-slow
    

Test Reports

Performance Report

After running performance tests:

from test_fixtures import PerformanceBenchmark

benchmark = PerformanceBenchmark()
# ... run tests ...
print(benchmark.report())

Load Test Report

Stress tests generate detailed reports:

LOAD TEST REPORT
================
Test Run 1:
  Total Operations: 100
  Success Rate: 95.0%
  Throughput: 12.5 ops/s
  Latency (avg/p50/p95/p99/max): 0.8/0.7/1.5/2.1/3.2s

Contributing

When adding new tests:

  1. Use appropriate pytest markers
  2. Include docstrings explaining test purpose
  3. Use fixtures for common operations
  4. Consider LLM latency in test design
  5. Add timeout handling for long operations
  6. Include performance metrics where relevant
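
A skeleton that follows these conventions might look like this (the fixture, tool names, and payload fields are assumptions for illustration):

import pytest
from test_fixtures import graphiti_test_client

@pytest.mark.integration
@pytest.mark.requires_kuzu
@pytest.mark.timeout(120)
async def test_added_memory_becomes_searchable():
    """Episodes added via add_memory should be retrievable after processing."""
    async with graphiti_test_client(database="kuzu") as (session, group_id):
        await session.call_tool('add_memory', {
            'name': 'note',
            'episode_body': 'Alice joined the platform team.',
            'source': 'text',
            'group_id': group_id,
        })
        result = await session.call_tool('search_memory_nodes', {
            'query': 'platform team',
            'group_ids': [group_id],
        })
        assert result.content, 'expected at least one matching node'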

License

See main project LICENSE file.