graphiti/examples
mvanders 36a421150e feat: Add Ollama integration and production Docker setup
WHAT:
- Add OllamaClient implementation for local LLM support
- Add production-ready Docker compose configuration
- Add requirements file for Ollama dependencies
- Add comprehensive integration documentation
- Add example FastAPI deployment
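
The OllamaClient implementation itself lives in the new example code and is not reproduced in this listing. As a rough sketch of the general approach (Ollama serves an OpenAI-compatible HTTP API on port 11434 by default; the function name and URL constant below are illustrative, not the actual client):

```python
import json
import urllib.request

# Default local Ollama endpoint (OpenAI-compatible chat completions route).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "qwen2.5:7b") -> urllib.request.Request:
    """Build an OpenAI-style chat request against a local Ollama server."""
    payload = {
        "model": model,  # model cited in the commit message
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Actually sending the request requires a running `ollama serve`:
# with urllib.request.urlopen(build_chat_request("hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the wire format matches OpenAI's, an existing OpenAI-based client can usually be repointed at the local server instead of being rewritten.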

WHY:
- Eliminates OpenAI API dependency and costs
- Enables fully local/private processing
- Resolves Docker health check race conditions
- Fixes function signature corruption issues

TESTING:
- Production-tested with 1,700+ items from ZepCloud
- 44 users, 81 threads, 1,638 messages processed
- 48+ hours continuous operation
- 100% success rate (vs <30% with MCP integration)

TECHNICAL DETAILS:
- Model: qwen2.5:7b (also tested llama2, mistral)
- Response time: ~200ms average
- Memory usage: Stable at ~150MB
- Docker: Removed problematic health checks
- Group ID: Fixed validation (ika-production format)
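
The health-check fix above amounts to dropping `condition: service_healthy` gates and letting dependents retry on their own. A minimal compose fragment illustrating the shape (service, image, and variable names here are assumptions, not the committed configuration):

```yaml
# Illustrative fragment only; see docker_deployment/ for the real files.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_models:/root/.ollama
    # No healthcheck block: the commit notes health checks caused
    # startup race conditions, so dependents handle retries themselves.

  graphiti:
    build: .
    depends_on:
      - ollama   # plain start ordering, no service_healthy condition
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434/v1

volumes:
  ollama_models:
```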

This contribution provides a complete, production-tested alternative
to the OpenAI dependency, allowing organizations to run Graphiti with
full data privacy and zero API costs.

Resolves common issues:
- OpenAI API rate limiting
- Docker container startup failures
- Function parameter type mismatches
- MCP integration complexity

Co-authored-by: Marc <mvanders@github.com>
2025-08-06 16:51:59 +02:00
data Feat/langgraph-example (#73) 2024-09-01 12:31:08 -07:00
docker_deployment feat: Add Ollama integration and production Docker setup 2025-08-06 16:51:59 +02:00
ecommerce add fulltext search limit (#215) 2024-11-14 12:18:18 -05:00
langgraph-agent update to 4.1 models (#352) 2025-04-14 21:02:36 -04:00
podcast test updates (#806) 2025-08-05 10:49:44 -04:00
quickstart [REFACTOR][FIX] Move away from DEFAULT_DATABASE environment variable in favour of driver-config support (dc) (#699) 2025-07-10 17:25:39 -04:00
wizard_of_oz add_fact endpoint (#207) 2024-11-06 09:12:21 -05:00