graphiti/mcp_server/docs/cursor_rules.md
feat: MCP Server v1.0.0rc0 - Complete refactoring with modular architecture (commit 21530c6408, Daniel Chalef)
This is a major refactoring of the MCP Server to support multiple providers
through a YAML-based configuration system with factory pattern implementation.

## Key Changes

### Architecture Improvements
- Modular configuration system with YAML-based settings
- Factory pattern for LLM, Embedder, and Database providers
- Support for multiple database backends (Neo4j, FalkorDB, KuzuDB)
- Clean separation of concerns with dedicated service modules

### Provider Support
- **LLM**: OpenAI, Anthropic, Gemini, Groq
- **Embedders**: OpenAI, Voyage, Gemini, Anthropic, Sentence Transformers
- **Databases**: Neo4j, FalkorDB, KuzuDB (new default)
- Azure OpenAI support with AD authentication

### Configuration
- YAML configuration with environment variable expansion
- CLI argument overrides for runtime configuration
- Multiple pre-configured Docker Compose setups
- Proper boolean handling in environment variables
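The environment-variable expansion and boolean handling mentioned above can be sketched as follows. This is an illustrative sketch only, not the server's actual config loader: the function names are hypothetical, and the truthy-value set is an assumption.

```python
import os

# Hypothetical helpers sketching ${VAR} expansion and boolean parsing
# for YAML config values; the real implementation may differ.
TRUTHY = {"1", "true", "yes", "on"}  # assumed set of truthy strings

def expand_value(value: str) -> str:
    """Expand $VAR / ${VAR} references using the process environment."""
    return os.path.expandvars(value)

def parse_bool(value: str, default: bool = False) -> bool:
    """Interpret a string environment value as a boolean."""
    if not value:
        return default
    return value.strip().lower() in TRUTHY

os.environ["NEO4J_URI"] = "bolt://localhost:7687"
print(expand_value("${NEO4J_URI}"))  # bolt://localhost:7687
print(parse_bool("True"), parse_bool("0"))  # True False
```

Parsing booleans explicitly (rather than relying on Python truthiness of non-empty strings) is what prevents `FOO=false` from being treated as enabled.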

### Testing & CI
- Comprehensive test suite with unit and integration tests
- GitHub Actions workflows for linting and testing
- Multi-database testing support

### Docker Support
- Updated Docker images with multi-stage builds
- Database-specific docker-compose configurations
- Persistent volume support for all databases

### Bug Fixes
- Fixed KuzuDB connectivity checks
- Corrected Docker command paths
- Improved error handling and logging
- Fixed boolean environment variable expansion

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-26 17:23:57 -07:00


# Instructions for Using Graphiti's MCP Tools for Agent Memory

## Before Starting Any Task

- **Always search first:** Use the `search_nodes` tool to look for relevant preferences and procedures before beginning work.
- **Search for facts too:** Use the `search_facts` tool to discover relationships and factual information that may be relevant to your task.
- **Filter by entity type:** Specify `Preference`, `Procedure`, or `Requirement` in your node search to get targeted results.
- **Review all matches:** Carefully examine any preferences, procedures, or facts that match your current task.

## Always Save New or Updated Information

- **Capture requirements and preferences immediately:** When a user expresses a requirement or preference, use `add_memory` to store it right away.
  - Best practice: Split very long requirements into shorter, logical chunks.
- **Be explicit about updates:** If something is an update to existing knowledge, say so, and add only what has changed or is new to the graph.
- **Document procedures clearly:** When you discover how a user wants things done, record it as a procedure.
- **Record factual relationships:** When you learn about connections between entities, store these as facts.
- **Be specific with categories:** Label preferences and procedures with clear categories for better retrieval later.
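The "split very long requirements into shorter chunks" advice can be sketched like this. The chunking strategy (pack whole sentences up to a length cap) and the argument names (`name`, `episode_body`) are assumptions for illustration, not the actual `add_memory` schema.

```python
def chunk_requirement(text: str, max_len: int = 300) -> list[str]:
    """Naive sketch: split on sentence boundaries, then pack
    consecutive sentences into chunks of at most max_len chars."""
    sentences = [s.strip() for s in text.split(". ") if s.strip()]
    chunks: list[str] = []
    current = ""
    for s in sentences:
        if current and len(current) + len(s) + 2 > max_len:
            chunks.append(current)
            current = s
        else:
            current = f"{current}. {s}" if current else s
    if current:
        chunks.append(current)
    return chunks

def build_add_memory_calls(name: str, text: str) -> list[dict]:
    """One hypothetical add_memory payload per chunk."""
    return [
        {"tool": "add_memory",
         "arguments": {"name": f"{name} ({i + 1})", "episode_body": chunk}}
        for i, chunk in enumerate(chunk_requirement(text))
    ]
```

Chunking at logical boundaries keeps each stored episode focused, which tends to make later searches return tighter matches.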

## During Your Work

- **Respect discovered preferences:** Align your work with any preferences you've found.
- **Follow procedures exactly:** If you find a procedure for your current task, follow it step by step.
- **Apply relevant facts:** Use factual information to inform your decisions and recommendations.
- **Stay consistent:** Maintain consistency with previously identified preferences, procedures, and facts.

## Best Practices

- **Search before suggesting:** Always check whether there's established knowledge before making recommendations.
- **Combine node and fact searches:** For complex tasks, search both nodes and facts to build a complete picture.
- **Use `center_node_uuid`:** When exploring related information, center your search around a specific node.
- **Prioritize specific matches:** More specific information takes precedence over general information.
- **Be proactive:** If you notice patterns in user behavior, consider storing them as preferences or procedures.

**Remember:** The knowledge graph is your memory. Use it consistently to provide personalized assistance that respects the user's established preferences, procedures, and factual context.