Added detailed documentation for SEMAPHORE_LIMIT configuration to help users optimize episode processing concurrency based on their LLM provider's rate limits.

Changes:

1. **graphiti_mcp_server.py**
   - Expanded inline comments from 3 lines to 26 lines
   - Added provider-specific tuning guidelines (OpenAI, Anthropic, Azure, Ollama)
   - Documented symptoms of too-high/too-low settings
   - Added monitoring recommendations

2. **README.md**
   - Expanded "Concurrency and LLM Provider 429 Rate Limit Errors" section
   - Added tier-specific recommendations for each provider
   - Explained relationship between episode concurrency and LLM request rates
   - Added troubleshooting symptoms and monitoring guidance
   - Included example .env configuration

3. **config.yaml**
   - Added header comment referencing detailed documentation
   - Noted default value and suitable use case

4. **.env.example**
   - Added SEMAPHORE_LIMIT with inline tuning guidelines
   - Quick reference for all major LLM provider tiers
   - Cross-reference to README for full details

Benefits:

- Users can now make informed decisions about concurrency settings
- Reduces likelihood of 429 rate limit errors from misconfiguration
- Helps users maximize throughput within their rate limits
- Provides clear troubleshooting guidance

Addresses PR #1024 review comment about magic number documentation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
# Graphiti MCP Server Environment Configuration

# Neo4j Database Configuration
# These settings are used to connect to your Neo4j database
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=demodemo

# OpenAI API Configuration
# Required for LLM operations
OPENAI_API_KEY=your_openai_api_key_here
MODEL_NAME=gpt-4.1-mini

# Optional: Only needed for non-standard OpenAI endpoints
# OPENAI_BASE_URL=https://api.openai.com/v1
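# Example (assumption, not from this repo's docs): Ollama serves an
# OpenAI-compatible API at /v1, so a local Ollama setup might use:
# OPENAI_BASE_URL=http://localhost:11434/v1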

# Optional: Group ID for namespacing graph data
# GROUP_ID=my_project

# Concurrency Control
# Controls how many episodes can be processed simultaneously
# Default: 10 (suitable for OpenAI Tier 3, mid-tier Anthropic)
# Adjust based on your LLM provider's rate limits:
# - OpenAI Tier 1 (free): 1-2
# - OpenAI Tier 2: 5-8
# - OpenAI Tier 3: 10-15
# - OpenAI Tier 4: 20-50
# - Anthropic default: 5-8
# - Anthropic high tier: 15-30
# - Ollama (local): 1-5
# See README.md "Concurrency and LLM Provider 429 Rate Limit Errors" for details
SEMAPHORE_LIMIT=10
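# Example: on a low-rate-limit plan such as OpenAI Tier 1, the guidelines
# above suggest lowering this value, e.g.:
# SEMAPHORE_LIMIT=2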

# Optional: Path configuration for Docker
# PATH=/root/.local/bin:${PATH}

# Optional: Memory settings for Neo4j (used in Docker Compose)
# NEO4J_server_memory_heap_initial__size=512m
# NEO4J_server_memory_heap_max__size=1G
# NEO4J_server_memory_pagecache_size=512m

# Azure OpenAI configuration
# Optional: Only needed for Azure OpenAI endpoints
# AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint_here
# AZURE_OPENAI_API_VERSION=2025-01-01-preview
# AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o-gpt-4o-mini-deployment
# AZURE_OPENAI_EMBEDDING_API_VERSION=2023-05-15
# AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME=text-embedding-3-large-deployment
# AZURE_OPENAI_USE_MANAGED_IDENTITY=false
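# Example (assumption about this server's auth behavior, not confirmed by the
# repo): when running on Azure with a managed identity, you would likely
# enable the flag and omit the API key:
# AZURE_OPENAI_USE_MANAGED_IDENTITY=true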