# Ollama Integration for Graphiti

## Overview

This integration allows Graphiti to use Ollama for local LLM processing, eliminating OpenAI API costs.

## Production Testing

- Successfully processed 1,700+ items
- 44 users, 81 threads, 1,638 messages
- 48+ hours of continuous operation
- 100% success rate

## Setup

1. Install Ollama: https://ollama.ai
2. Pull the model: `ollama pull qwen2.5:7b`
3. Use the provided `docker-compose-production.yml`
4. Configure environment variables

## Benefits

- No API costs
- Complete data privacy
- Faster response times (200 ms average)
- No rate limiting

Tested by: Marc (mvanders), August 2025
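As an illustration of the setup steps above, the model pull and environment configuration might look like the sketch below. The variable names are assumptions for illustration only; the authoritative names are whatever `docker-compose-production.yml` actually reads. Ollama does expose an OpenAI-compatible API on port 11434, which is how OpenAI-based clients such as Graphiti's can typically be pointed at a local model.

```shell
# Step 2: pull the model used in the production testing above
ollama pull qwen2.5:7b

# Step 4: example environment variables (illustrative names; check
# docker-compose-production.yml for the ones this integration reads).
# Ollama serves an OpenAI-compatible endpoint at /v1 on port 11434.
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"        # placeholder; Ollama ignores the key
export MODEL_NAME="qwen2.5:7b"
```

Pointing an OpenAI-compatible client at the local endpoint is what removes the per-token API cost and keeps all data on your own hardware.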