Added detailed documentation for SEMAPHORE_LIMIT configuration to help users optimize episode processing concurrency based on their LLM provider's rate limits.
Changes:
1. **graphiti_mcp_server.py**
- Expanded inline comments from 3 lines to 26 lines
- Added provider-specific tuning guidelines (OpenAI, Anthropic, Azure, Ollama)
- Documented symptoms of too-high/too-low settings
- Added monitoring recommendations
2. **README.md**
- Expanded "Concurrency and LLM Provider 429 Rate Limit Errors" section
- Added tier-specific recommendations for each provider
- Explained the relationship between episode concurrency and LLM request rates (see the sketch at the end of this note)
- Added troubleshooting symptoms and monitoring guidance
- Included example .env configuration
3. **config.yaml**
- Added header comment referencing detailed documentation
- Noted default value and suitable use case
4. **.env.example**
- Added SEMAPHORE_LIMIT with inline tuning guidelines
- Included a quick reference for all major LLM provider tiers
- Cross-referenced the README for full details
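As a rough illustration of that .env guidance, the entry might look like the sketch below; the value and comments are placeholders, not the documented per-provider numbers (see README.md for those).

```
# SEMAPHORE_LIMIT caps how many episodes are processed concurrently, which in
# turn bounds the number of in-flight LLM requests. The value below is only an
# example; pick one that fits your provider tier (lower for free/low tiers,
# higher for paid tiers with generous rate limits).
SEMAPHORE_LIMIT=10
```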
Benefits:
- Users can now make informed decisions about concurrency settings
- Reduces likelihood of 429 rate limit errors from misconfiguration
- Helps users maximize throughput within their rate limits
- Provides clear troubleshooting guidance
Addresses PR #1024 review comment about magic number documentation.
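Conceptually, the relationship documented in the README is the standard bounded-concurrency pattern: episode workers acquire a semaphore before calling the LLM, so SEMAPHORE_LIMIT is an upper bound on concurrent provider requests. A minimal sketch of that idea, assuming an asyncio worker; the function names and fallback value are illustrative, not the actual graphiti_mcp_server.py internals:

```python
import asyncio
import os

# The fallback value here is illustrative, not necessarily the shipped default.
SEMAPHORE_LIMIT = int(os.getenv("SEMAPHORE_LIMIT", "10"))
semaphore = asyncio.Semaphore(SEMAPHORE_LIMIT)

async def call_llm(episode: str) -> None:
    await asyncio.sleep(0.1)  # stand-in for a real provider request

async def process_episode(episode: str) -> None:
    async with semaphore:     # at most SEMAPHORE_LIMIT episodes in flight
        await call_llm(episode)

async def main() -> None:
    episodes = [f"episode-{i}" for i in range(50)]
    await asyncio.gather(*(process_episode(e) for e in episodes))

if __name__ == "__main__":
    asyncio.run(main())
```

Setting the limit above what your provider allows shows up as 429 errors; setting it much lower leaves throughput on the table.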
Removed all remaining Kuzu references from:
- Test fixtures (test_fixtures.py): Changed default database to falkordb, removed kuzu configuration
- Test runner (run_tests.py): Removed kuzu from database choices, checks, and markers
- Integration tests (test_comprehensive_integration.py): Removed kuzu from parameterized tests and environment setup
- Test README: Updated all examples and documentation to reflect falkordb as the default
- Docker README: Rewrote it entirely to remove the KuzuDB section and document the FalkorDB combined image as the default
All Kuzu support has been completely removed from the MCP server codebase. FalkorDB (via combined container) is now the default database backend.
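With Kuzu gone, the parameterized backend tests mentioned above reduce to the remaining drivers. A minimal sketch of the pattern, with hypothetical helper and environment-variable names rather than the actual test_comprehensive_integration.py code:

```python
import os

import pytest

def make_client(backend: str) -> dict:
    """Hypothetical helper; the real tests construct a Graphiti client per backend."""
    if backend == "falkordb":
        return {"backend": backend, "uri": os.getenv("FALKORDB_URI", "localhost:6379")}
    if backend == "neo4j":
        return {"backend": backend, "uri": os.getenv("NEO4J_URI", "bolt://localhost:7687")}
    raise ValueError(f"unsupported backend: {backend}")

# kuzu no longer appears in the list; falkordb is the default backend.
@pytest.mark.parametrize("backend", ["falkordb", "neo4j"])
def test_backend_smoke(backend: str) -> None:
    client = make_client(backend)
    assert client["backend"] == backend
```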
BREAKING CHANGE: Kuzu is no longer supported. FalkorDB is now the default.
- Renamed Dockerfile.falkordb-combined to Dockerfile (default)
- Renamed docker-compose-falkordb-combined.yml to docker-compose.yml (default)
- Updated config.yaml to use FalkorDB with localhost:6379 as default
- Removed Kuzu from pyproject.toml dependencies (now only falkordb extra)
- Updated Dockerfile to use graphiti-core[falkordb] instead of [kuzu,falkordb]
- Completely removed all Kuzu references from README
- Updated README to document FalkorDB combined container as default
- Docker Compose now starts a single container running both FalkorDB and the MCP server (sketched below)
- Prerequisites now require Docker instead of Python for default setup
- Removed old Kuzu docker-compose files
Running from the command line now requires an external FalkorDB instance at localhost:6379
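A sketch of what the default docker-compose.yml now amounts to; the service name, environment keys, and volume layout are assumptions rather than the actual file contents, but the single-container shape and ports match the change:

```yaml
services:
  graphiti-mcp:
    build: .                 # the renamed default Dockerfile (FalkorDB + MCP server combined)
    ports:
      - "8000:8000"          # MCP server
      - "6379:6379"          # FalkorDB (Redis protocol); optional to expose
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}   # illustrative; set whatever your LLM provider needs
    volumes:
      - falkordb_data:/var/lib/falkordb/data   # persistence directory (see the following note)

volumes:
  falkordb_data:
```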
- Override the FalkorDB ENTRYPOINT to use a custom startup script (sketched below)
- Use the correct FalkorDB module path: /var/lib/falkordb/bin/falkordb.so
- Create config-docker-falkordb-combined.yaml with localhost URI
- Create /var/lib/falkordb/data directory for persistence
- Both FalkorDB and MCP server now start successfully
- Tested: FalkorDB ready, MCP server running on port 8000
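A sketch of the kind of startup script the ENTRYPOINT override points at; FalkorDB loads as a Redis module from the path above, while the exact server launch command and flags are assumptions, not the shipped script:

```sh
#!/bin/sh
set -e

# Make sure the persistence directory exists.
mkdir -p /var/lib/falkordb/data

# Start FalkorDB (a Redis module) in the background using the module path
# and data directory from this change.
redis-server \
  --loadmodule /var/lib/falkordb/bin/falkordb.so \
  --dir /var/lib/falkordb/data \
  --daemonize yes

# Launch the MCP server against the local FalkorDB instance. The command and
# the --config flag are placeholders; config-docker-falkordb-combined.yaml is
# the file created in this change.
exec uv run graphiti_mcp_server.py --config config-docker-falkordb-combined.yaml
```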
Replace outdated text-embedding-ada-002 with the newer, more efficient
text-embedding-3-small model as the default embedder. The new model
offers better performance and is more cost-effective.
Updated:
- config/config.yaml: Changed default model
- README.md: Updated documentation to reflect new default
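The config.yaml side of this is just a model swap; a sketch of the relevant block, with the key names as assumptions about the file's layout:

```yaml
embedder:
  provider: openai                  # key names are illustrative
  model: text-embedding-3-small     # previously text-embedding-ada-002
```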
Changed the default LLM model from gpt-4o-mini to gpt-4.1 as requested.
This is the latest GPT-4 series model.
Changed the default LLM model from gpt-4o to gpt-4o-mini across all
configuration files for better cost efficiency while maintaining quality.
- Changed default transport to 'http' as SSE is deprecated
- Updated all configuration files to use HTTP transport
- Updated Docker compose commands to use HTTP transport
- Updated comments to reflect HTTP transport usage
This change ensures the MCP server uses the recommended HTTP transport
instead of the deprecated SSE transport.
- Fix API key detection: Remove hardcoded OpenAI checks, let factories handle provider-specific validation
- Fix .env file loading: Search for .env in the mcp_server directory first (see the sketch below)
- Change default transport to SSE for broader compatibility (was stdio)
- Add proper error handling with warnings for failed client initialization
- Model already defaults to gpt-4o as requested
These changes ensure the MCP server properly loads API keys from .env files
and creates the appropriate LLM/embedder clients based on configuration.
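A minimal sketch of the .env lookup order described above, using python-dotenv; the exact paths and fallback behavior in the server may differ:

```python
from pathlib import Path

from dotenv import load_dotenv

# Prefer a .env sitting next to the server sources (the mcp_server directory),
# then fall back to python-dotenv's default .env discovery.
local_env = Path(__file__).parent / ".env"
if local_env.exists():
    load_dotenv(local_env)
else:
    load_dotenv()
```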
This is a major refactoring of the MCP Server to support multiple providers
through a YAML-based configuration system and a factory-pattern implementation.
## Key Changes
### Architecture Improvements
- Modular configuration system with YAML-based settings
- Factory pattern for LLM, Embedder, and Database providers
- Support for multiple database backends (Neo4j, FalkorDB, KuzuDB)
- Clean separation of concerns with dedicated service modules
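A condensed sketch of the factory idea, with hypothetical class and field names rather than the actual module layout:

```python
from dataclasses import dataclass

@dataclass
class LLMConfig:
    """Hypothetical config shape; the real YAML schema carries more fields."""
    provider: str
    model: str
    api_key: str | None = None

def create_llm_client(config: LLMConfig):
    """Map a provider name from the YAML config onto a concrete client (illustrative)."""
    if config.provider == "openai":
        from openai import OpenAI        # assumes the openai package is installed
        return OpenAI(api_key=config.api_key)
    if config.provider == "anthropic":
        from anthropic import Anthropic  # assumes the anthropic package is installed
        return Anthropic(api_key=config.api_key)
    raise ValueError(f"unknown LLM provider: {config.provider}")
```

Embedder and database factories follow the same shape, which keeps provider-specific imports and validation out of the server entry point.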
### Provider Support
- **LLM**: OpenAI, Anthropic, Gemini, Groq
- **Embedders**: OpenAI, Voyage, Gemini, Anthropic, Sentence Transformers
- **Databases**: Neo4j, FalkorDB, KuzuDB (new default)
- Azure OpenAI support with AD authentication
### Configuration
- YAML configuration with environment variable expansion
- CLI argument overrides for runtime configuration
- Multiple pre-configured Docker Compose setups
- Proper boolean handling in environment variables
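A sketch of the environment-variable expansion and boolean handling described above; the ${VAR} syntax and the set of truthy strings are assumptions about the config format:

```python
import os
import re
from typing import Any

import yaml  # PyYAML

_ENV_PATTERN = re.compile(r"\$\{(\w+)\}")  # assumed ${VAR} placeholder syntax

def expand_env(value: Any) -> Any:
    """Recursively expand ${VAR} references in values loaded from YAML."""
    if isinstance(value, dict):
        return {k: expand_env(v) for k, v in value.items()}
    if isinstance(value, list):
        return [expand_env(v) for v in value]
    if isinstance(value, str):
        return _ENV_PATTERN.sub(lambda m: os.getenv(m.group(1), ""), value)
    return value

def as_bool(value: Any) -> bool:
    """Normalize env-sourced strings so 'false' and '0' do not read as truthy."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in {"1", "true", "yes", "on"}

with open("config.yaml") as f:
    config = expand_env(yaml.safe_load(f))
```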
### Testing & CI
- Comprehensive test suite with unit and integration tests
- GitHub Actions workflows for linting and testing
- Multi-database testing support
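In spirit, the workflows boil down to a lint job plus a test matrix over the supported databases; the job layout, tool choices, and marker usage below are illustrative, not the repository's actual workflow files:

```yaml
name: ci

on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install ruff
      - run: ruff check .

  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        database: [neo4j, falkordb, kuzu]   # multi-database matrix (as of this refactor)
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -e ".[dev]"          # dev extra is an assumption
      - run: pytest -m ${{ matrix.database }} # per-backend markers are an assumption
```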
### Docker Support
- Updated Docker images with multi-stage builds
- Database-specific docker-compose configurations
- Persistent volume support for all databases
### Bug Fixes
- Fixed KuzuDB connectivity checks
- Corrected Docker command paths
- Improved error handling and logging
- Fixed boolean environment variable expansion