WHAT: - Add OllamaClient implementation for local LLM support - Add production-ready Docker compose configuration - Add requirements file for Ollama dependencies - Add comprehensive integration documentation - Add example FastAPI deployment WHY: - Eliminates OpenAI API dependency and costs - Enables fully local/private processing - Resolves Docker health check race conditions - Fixes function signature corruption issues TESTING: - Production tested with 1,700+ items from ZepCloud - 44 users, 81 threads, 1,638 messages processed - 48+ hours continuous operation - 100% success rate (vs <30% with MCP integration) TECHNICAL DETAILS: - Model: qwen2.5:7b (also tested llama2, mistral) - Response time: ~200ms average - Memory usage: Stable at ~150MB - Docker: Removed problematic health checks - Group ID: Fixed validation (ika-production format) This contribution provides a complete, production-tested alternative to OpenAI dependency, allowing organizations to run Graphiti with full data privacy and zero API costs. Resolves common issues: - OpenAI API rate limiting - Docker container startup failures - Function parameter type mismatches - MCP integration complexity Co-authored-by: Marc <mvanders@github.com>
22 lines
347 B
Text
22 lines
347 B
Text
# FastAPI and server
|
|
fastapi==0.104.1
|
|
uvicorn[standard]==0.24.0
|
|
httpx==0.25.0
|
|
|
|
# Graphiti dependencies
|
|
pydantic==2.5.0
|
|
redis==5.0.1
|
|
neo4j==5.14.0
|
|
numpy==1.24.3
|
|
scipy==1.11.4
|
|
|
|
# Async support
|
|
asyncio==3.4.3
|
|
aiohttp==3.9.0
|
|
|
|
# Utilities
|
|
python-dotenv==1.0.0
|
|
python-multipart==0.0.6
|
|
|
|
# Graphiti core (if not included as source)
|
|
# graphiti-core==0.1.0
|