Fix Railway deployment: Remove cache mounts, add port config, create deployment guides

- Fix Docker cache mount issues that caused Railway build failures
- Add port argument support to MCP server for Railway compatibility
- Create Railway-optimized Dockerfile without cache mounts
- Add railway.json configuration for proper deployment
- Create comprehensive deployment and ChatGPT integration guides
- Add environment variable templates for Railway deployment
- Support Railway's PORT environment variable handling
- Ready for ChatGPT MCP SSE integration

🚀 Generated with Claude Code (https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Tyler Lafleur 2025-09-19 19:44:48 -05:00
parent a5741efd13
commit b56d469648
6 changed files with 524 additions and 59 deletions

.env.railway

@@ -0,0 +1,28 @@
# Railway Environment Variables Template
# Copy these to your Railway project environment variables

# Required: OpenAI API Configuration
OPENAI_API_KEY=sk-proj-your-openai-api-key-here
MODEL_NAME=gpt-4.1-mini
SMALL_MODEL_NAME=gpt-4.1-nano

# Neo4j Database Configuration
# Option 1: Neo4j Aura Cloud (Recommended for production)
NEO4J_URI=neo4j+s://your-instance.databases.neo4j.io
NEO4J_USER=neo4j
NEO4J_PASSWORD=your-aura-password

# Option 2: Local Neo4j (Development only)
# NEO4J_URI=bolt://localhost:7687
# NEO4J_USER=neo4j
# NEO4J_PASSWORD=password

# Optional Configuration
LLM_TEMPERATURE=0.0
SEMAPHORE_LIMIT=10
GRAPHITI_TELEMETRY_ENABLED=false

# Railway automatically sets PORT and HOST
# These are handled by the application automatically
# PORT=8000
# MCP_SERVER_HOST=0.0.0.0
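The last two commented lines mirror how the server picks its bind address. A minimal sketch of that lookup order (the function name is illustrative, not taken from the server code):

```python
import os

def resolve_bind_address(default_host: str = "0.0.0.0", default_port: int = 8000) -> tuple[str, int]:
    # Railway injects PORT at runtime; MCP_SERVER_HOST is optional.
    # Both fall back to defaults, so local runs need no extra setup.
    host = os.environ.get("MCP_SERVER_HOST", default_host)
    port = int(os.environ.get("PORT", str(default_port)))
    return host, port
```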

CHATGPT_INTEGRATION.md

@@ -0,0 +1,240 @@
# ChatGPT Integration Guide for Graphiti Memory Server
This guide explains how to integrate your deployed Graphiti MCP server with ChatGPT for persistent memory capabilities.
## Deployment URL Format
After Railway deployment, your server will be available at:
- **Base URL**: `https://graphiti-production-xxxx.up.railway.app`
- **MCP SSE Endpoint**: `https://graphiti-production-xxxx.up.railway.app/sse`
## ChatGPT Integration Methods
### Method 1: ChatGPT with Native MCP Support
If your ChatGPT client supports MCP natively:
```json
{
  "mcpServers": {
    "graphiti-memory": {
      "transport": "sse",
      "url": "https://your-railway-domain.up.railway.app/sse",
      "timeout": 30000
    }
  }
}
```
### Method 2: Custom ChatGPT Integration
For custom implementations, send MCP-formatted JSON-RPC requests to the server and read the responses from the SSE stream.
## Available Memory Tools
Your ChatGPT will have access to these memory functions:
### 1. Add Memory (`add_memory`)
Store information in the knowledge graph:
```json
{
  "name": "meeting_notes_2024",
  "episode_body": "Discussed project timeline and deliverables with team",
  "source": "text",
  "group_id": "work_meetings"
}
```
### 2. Search Memory Nodes (`search_memory_nodes`)
Find entities and concepts:
```json
{
  "query": "project timeline",
  "max_nodes": 10,
  "group_ids": ["work_meetings"]
}
```
### 3. Search Memory Facts (`search_memory_facts`)
Find relationships between entities:
```json
{
  "query": "project deliverables",
  "max_facts": 10,
  "group_ids": ["work_meetings"]
}
```
### 4. Get Recent Episodes (`get_episodes`)
Retrieve recent memories:
```json
{
  "group_id": "work_meetings",
  "last_n": 10
}
```
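Each of the tool calls above travels over the wire as an MCP `tools/call` request in a JSON-RPC 2.0 envelope. A small sketch of that framing (the helper is hypothetical, not part of the server):

```python
import json
from itertools import count

_next_id = count(1)

def mcp_tool_call(tool: str, arguments: dict) -> str:
    # MCP is JSON-RPC 2.0; tool invocations use the "tools/call" method,
    # with the tool name and its arguments carried under "params".
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_next_id),
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })
```

For example, `mcp_tool_call("get_episodes", {"group_id": "work_meetings", "last_n": 10})` produces the envelope for the retrieval call shown above.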
## Security and API Keys
Your deployment uses these credentials (set in Railway environment):
```bash
# Your OpenAI API key (keep secure in Railway environment variables)
OPENAI_API_KEY=sk-proj-your-openai-api-key-here

# Model configuration
MODEL_NAME=gpt-4.1-mini
SMALL_MODEL_NAME=gpt-4.1-nano

# Database (use Neo4j Aura for production)
NEO4J_URI=neo4j+s://your-aura-instance.databases.neo4j.io
NEO4J_USER=neo4j
NEO4J_PASSWORD=your-aura-password
```
## Data Persistence and Safety
### Memory Persistence
- **Knowledge Graph**: All memories are stored in Neo4j as a connected graph
- **Temporal Awareness**: Memories include timestamps and can track changes over time
- **Relationships**: Entities are automatically linked based on content
### Data Safety
- **Secure Transport**: All communication uses HTTPS
- **Environment Variables**: Sensitive data stored securely in Railway
- **Data Isolation**: Use `group_id` to organize different contexts
- **Backup**: Consider regular Neo4j database backups
## Usage Examples
### Personal Assistant Memory
```json
{
  "name": "user_preferences",
  "episode_body": "User prefers meetings scheduled in the morning, dislikes late-night calls, works in PST timezone",
  "source": "text",
  "group_id": "user_profile"
}
```
### Project Context
```json
{
  "name": "project_requirements",
  "episode_body": "Railway deployment needs Docker optimization, MCP SSE transport, environment variable configuration",
  "source": "text",
  "group_id": "railway_project"
}
```
### Conversation Memory
```json
{
  "name": "conversation_context",
  "episode_body": "User asked about deploying Graphiti to Railway for ChatGPT integration. Needs MCP server with SSE transport.",
  "source": "text",
  "group_id": "current_conversation"
}
```
## Testing Your Integration
### 1. Basic Connectivity Test
Verify the SSE endpoint is accessible:
```bash
curl -H "Accept: text/event-stream" https://your-railway-domain.up.railway.app/sse
```
### 2. MCP Inspector Test
Test with official MCP tools:
```bash
npx @modelcontextprotocol/inspector --url https://your-railway-domain.up.railway.app/sse
```
Expected output should show:
- ✅ Connected to MCP server
- ✅ Tools available: add_memory, search_memory_nodes, search_memory_facts, etc.
- ✅ Server status: healthy
### 3. Memory Operations Test
**Step 1: Add test memory**
```json
{
  "tool": "add_memory",
  "arguments": {
    "name": "connection_test",
    "episode_body": "Testing ChatGPT integration with Graphiti memory server",
    "source": "text",
    "group_id": "integration_test"
  }
}
```
**Step 2: Search for the memory**
```json
{
  "tool": "search_memory_nodes",
  "arguments": {
    "query": "ChatGPT integration test",
    "max_nodes": 5,
    "group_ids": ["integration_test"]
  }
}
```
**Step 3: Verify persistence**
The search should return the memory you just added, confirming the integration works.
## Troubleshooting
### Common Issues
**Connection Timeouts:**
- Check Railway deployment status
- Verify environment variables are set
- Test endpoint directly with curl
**Memory Not Persisting:**
- Verify Neo4j database connection
- Check Railway logs for database errors
- Ensure group_id consistency
**API Rate Limits:**
- Monitor OpenAI API usage
- Adjust SEMAPHORE_LIMIT if needed
- Check for 429 errors in logs
### Debug Steps
1. **Check Railway Logs**:
- Look for server startup messages
- Verify environment variable loading
- Check for database connection confirmations
2. **Test Database Connection**:
- Verify Neo4j credentials
- Test connection from Railway environment
- Check Neo4j Aura instance status
3. **Validate MCP Protocol**:
- Use MCP Inspector for protocol validation
- Check SSE stream format
- Verify tool availability
## Performance Considerations
- **Concurrent Operations**: Controlled by `SEMAPHORE_LIMIT` (default: 10)
- **Memory Efficiency**: Use specific `group_id` values to organize data
- **Search Optimization**: Limit result sets with `max_nodes` and `max_facts`
- **Rate Limiting**: Monitor OpenAI API usage to avoid 429 errors
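The `SEMAPHORE_LIMIT` cap mentioned above is the standard asyncio semaphore pattern; a sketch of how such a limit bounds the number of in-flight LLM calls (illustrative, not the server's actual implementation):

```python
import asyncio

async def bounded_gather(coros, limit: int = 10):
    # Run coroutines concurrently, but allow at most `limit` of them
    # to be in flight at once -- the same idea SEMAPHORE_LIMIT applies
    # to concurrent OpenAI requests to avoid 429 rate-limit errors.
    sem = asyncio.Semaphore(limit)

    async def run(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(run(c) for c in coros))
```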
## Next Steps
1. Deploy to Railway with proper environment variables
2. Configure ChatGPT with the SSE endpoint
3. Test memory operations with simple examples
4. Implement in your ChatGPT workflow
5. Monitor performance and adjust configuration as needed
Your ChatGPT will now have persistent memory capabilities powered by Graphiti's knowledge graph!

Dockerfile

@@ -1,87 +1,60 @@
-# syntax=docker/dockerfile:1.9
-FROM python:3.12-slim as builder
+# Railway-optimized Dockerfile for Graphiti MCP Server
+FROM python:3.12-slim
 WORKDIR /app
-# Install system dependencies for building
+# Install system dependencies
 RUN apt-get update && apt-get install -y --no-install-recommends \
-    curl \
-    ca-certificates \
-    gcc \
+    curl \
+    ca-certificates \
     && rm -rf /var/lib/apt/lists/*
 # Install uv using the installer script
 ADD https://astral.sh/uv/install.sh /uv-installer.sh
 RUN sh /uv-installer.sh && rm /uv-installer.sh
-ENV PATH="/root/.local/bin:$PATH"
-# Configure uv for optimal Docker usage
+# Add uv to PATH
+ENV PATH="/root/.local/bin:${PATH}"
+# Configure uv for optimal Docker usage without cache mounts
 ENV UV_COMPILE_BYTECODE=1 \
     UV_LINK_MODE=copy \
-    UV_PYTHON_DOWNLOADS=never
-# Copy and build main graphiti-core project
-COPY ./pyproject.toml ./README.md ./
-COPY ./graphiti_core ./graphiti_core
-# Build graphiti-core wheel
-RUN --mount=type=cache,target=/root/.cache/uv \
-    uv build
-# Install the built wheel to make it available for server
-RUN --mount=type=cache,target=/root/.cache/uv \
-    pip install dist/*.whl
-# Runtime stage - build the server here
-FROM python:3.12-slim
-RUN apt-get update && apt-get install -y --no-install-recommends \
-    curl \
-    ca-certificates \
-    && rm -rf /var/lib/apt/lists/*
-# Install uv using the installer script
-ADD https://astral.sh/uv/install.sh /uv-installer.sh
-RUN sh /uv-installer.sh && rm /uv-installer.sh
-ENV PATH="/root/.local/bin:$PATH"
-# Configure uv for runtime
-ENV UV_COMPILE_BYTECODE=1 \
-    UV_LINK_MODE=copy \
-    UV_PYTHON_DOWNLOADS=never
+    UV_PYTHON_DOWNLOADS=never \
+    MCP_SERVER_HOST="0.0.0.0" \
+    PYTHONUNBUFFERED=1
 # Create non-root user
 RUN groupadd -r app && useradd -r -d /app -g app app
-# Copy graphiti-core wheel from builder
-COPY --from=builder /app/dist/*.whl /tmp/
-# Install graphiti-core wheel first
-RUN --mount=type=cache,target=/root/.cache/uv \
-    uv pip install --system /tmp/*.whl
+# First, copy and install the core graphiti library
+COPY ./pyproject.toml ./README.md ./
+COPY ./graphiti_core ./graphiti_core
+# Build and install graphiti-core (no cache mount for Railway compatibility)
+RUN uv build && \
+    pip install dist/*.whl
-# Set up the server application
 WORKDIR /app
-COPY ./server/pyproject.toml ./server/README.md ./server/uv.lock ./
-COPY ./server/graph_service ./graph_service
-# Install server dependencies and application
-RUN --mount=type=cache,target=/root/.cache/uv \
-    uv sync --frozen --no-dev
+# Now set up the MCP server
+COPY ./mcp_server/pyproject.toml ./mcp_server/uv.lock ./mcp_server/
+COPY ./mcp_server/graphiti_mcp_server.py ./
+# Install MCP server dependencies (no cache mount for Railway compatibility)
+RUN uv sync --frozen --no-dev
 # Change ownership to app user
 RUN chown -R app:app /app
-# Set environment variables
-ENV PYTHONUNBUFFERED=1 \
-    PATH="/app/.venv/bin:$PATH"
 # Switch to non-root user
 USER app
-# Set port
+# Set environment variables for Railway
+ENV PORT=8000
+ENV MCP_SERVER_HOST=0.0.0.0
+# Expose port (Railway will override with PORT env var)
 EXPOSE $PORT
-# Use uv run for execution
-CMD ["uv", "run", "uvicorn", "graph_service.main:app", "--host", "0.0.0.0", "--port", "8000"]
+# Command to run the MCP server with SSE transport
+# Railway will set PORT environment variable, host and port are configured via env vars
+CMD ["uv", "run", "graphiti_mcp_server.py", "--transport", "sse"]

RAILWAY_DEPLOYMENT.md

@@ -0,0 +1,201 @@
# Railway Deployment Guide for Graphiti MCP Server
This guide helps you deploy the Graphiti MCP Server to Railway for use with ChatGPT and other MCP clients.
## Prerequisites
- Railway account connected to GitHub
- OpenAI API key
- Neo4j database (local, Neo4j Aura, or Railway Neo4j service)
## Railway Deployment Steps
### 1. Environment Variables
Set these environment variables in your Railway project:
**Required:**
- `OPENAI_API_KEY`: Your OpenAI API key (starts with `sk-proj-` or `sk-`)
- `MODEL_NAME`: `gpt-4.1-mini` (recommended)
- `SMALL_MODEL_NAME`: `gpt-4.1-nano` (recommended)
**Database Configuration (choose one option):**
**Option A: Local Neo4j (for development/testing)**
- `NEO4J_URI`: `bolt://localhost:7687`
- `NEO4J_USER`: `neo4j`
- `NEO4J_PASSWORD`: `password`
**Option B: Neo4j Aura Cloud (recommended for production)**
- `NEO4J_URI`: `neo4j+s://your-instance.databases.neo4j.io`
- `NEO4J_USER`: `neo4j`
- `NEO4J_PASSWORD`: Your Aura password
**Option C: Railway Neo4j Service (if available)**
- Use Railway's internal connection variables
**Optional:**
- `LLM_TEMPERATURE`: `0.0` (default)
- `SEMAPHORE_LIMIT`: `10` (concurrent operations limit)
- `GRAPHITI_TELEMETRY_ENABLED`: `false` (to disable telemetry)
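A startup check along these lines can surface a missing required variable immediately rather than at the first failed request; this is an illustrative sketch, not code from the server:

```python
import os
from typing import Mapping, Optional

REQUIRED_VARS = ("OPENAI_API_KEY", "NEO4J_URI", "NEO4J_USER", "NEO4J_PASSWORD")

def missing_required_env(env: Optional[Mapping[str, str]] = None) -> list[str]:
    # Return the required variables that are unset or empty, so a deploy
    # can fail fast with a clear message in the Railway logs.
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]
```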
### 2. Deploy to Railway
1. **Connect Repository**: Link your GitHub repository to Railway
2. **Service Configuration**: Railway will auto-detect the Dockerfile
3. **Environment Variables**: Set all required variables in Railway dashboard
4. **Deploy**: Railway will build and deploy automatically
### 3. Get Your Deployment URL
After successful deployment, Railway will provide a URL like:
`https://graphiti-production-xxxx.up.railway.app`
Your MCP SSE endpoint will be:
`https://graphiti-production-xxxx.up.railway.app/sse`
## ChatGPT Integration
### For ChatGPT with MCP Support
Configure your ChatGPT client with:
```json
{
  "mcpServers": {
    "graphiti-memory": {
      "transport": "sse",
      "url": "https://your-railway-domain.up.railway.app/sse"
    }
  }
}
```
### Custom ChatGPT Integration
If using a custom ChatGPT integration, make HTTP requests to:
- **SSE Endpoint**: `https://your-railway-domain.up.railway.app/sse`
- **Tools Available**: `add_memory`, `search_memory_nodes`, `search_memory_facts`, etc.
## Claude Desktop Integration
For Claude Desktop (requires mcp-remote bridge):
```json
{
  "mcpServers": {
    "graphiti-memory": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://your-railway-domain.up.railway.app/sse"
      ]
    }
  }
}
```
## Testing Your Deployment
### 1. Verify Server Status
Visit: `https://your-railway-domain.up.railway.app/sse`
You should see an SSE connection established.
### 2. Test with MCP Inspector
```bash
npx @modelcontextprotocol/inspector --url https://your-railway-domain.up.railway.app/sse
```
Expected tools:
- `add_memory`
- `search_memory_nodes`
- `search_memory_facts`
- `delete_entity_edge`
- `delete_episode`
- `get_entity_edge`
- `get_episodes`
- `clear_graph`
### 3. Test Memory Operations
**Add Memory:**
```json
{
  "name": "test_memory",
  "episode_body": "This is a test memory for verification",
  "source": "text"
}
```
**Search Memory:**
```json
{
  "query": "test memory",
  "max_nodes": 5
}
```
## Database Options
### Neo4j Aura Cloud (Recommended)
1. Create a free Neo4j Aura instance at https://neo4j.com/aura/
2. Note the connection URI (starts with `neo4j+s://`)
3. Set environment variables in Railway:
- `NEO4J_URI`: Your Aura connection string
- `NEO4J_USER`: `neo4j`
- `NEO4J_PASSWORD`: Your Aura password
### Local Development Database
For testing with local Neo4j:
- Ensure your local Neo4j is accessible from Railway
- Consider using ngrok or similar for temporary access
- Not recommended for production
## Security Considerations
1. **API Keys**: Never commit API keys to git. Use Railway environment variables.
2. **Database Security**: Use strong passwords and secure connection strings.
3. **Access Control**: Consider implementing authentication if needed.
4. **HTTPS**: Railway provides HTTPS by default.
## Troubleshooting
### Common Issues
**Build Failures:**
- Check Docker cache mount compatibility
- Ensure all dependencies are properly specified
- Review Railway build logs
**Connection Issues:**
- Verify environment variables are set correctly
- Check Neo4j database accessibility
- Ensure OpenAI API key is valid
**Memory/Performance:**
- Adjust `SEMAPHORE_LIMIT` for rate limiting
- Monitor Railway resource usage
- Consider Neo4j Aura for better performance
### Debug Commands
Check server logs in Railway dashboard for:
- Connection status messages
- Environment variable loading
- Database connection status
- MCP server initialization
## Support
For issues:
1. Check Railway deployment logs
2. Verify environment variables
3. Test database connectivity
4. Review MCP client configuration
For Graphiti-specific issues, see the main repository documentation.

mcp_server/graphiti_mcp_server.py

@@ -1197,8 +1197,14 @@ async def initialize_server() -> MCPConfig:
     )
     parser.add_argument(
         '--host',
-        default=os.environ.get('MCP_SERVER_HOST'),
-        help='Host to bind the MCP server to (default: MCP_SERVER_HOST environment variable)',
+        default=os.environ.get('MCP_SERVER_HOST', '0.0.0.0'),
+        help='Host to bind the MCP server to (default: MCP_SERVER_HOST environment variable or 0.0.0.0)',
+    )
+    parser.add_argument(
+        '--port',
+        type=int,
+        default=int(os.environ.get('PORT', '8000')),
+        help='Port to bind the MCP server to (default: PORT environment variable or 8000)',
     )
     args = parser.parse_args()
@@ -1225,6 +1231,11 @@ async def initialize_server() -> MCPConfig:
         logger.info(f'Setting MCP server host to: {args.host}')
         # Set MCP server host from CLI or env
         mcp.settings.host = args.host
+    if args.port:
+        logger.info(f'Setting MCP server port to: {args.port}')
+        # Set MCP server port from CLI or env
+        mcp.settings.port = args.port
     # Return MCP configuration
     return MCPConfig.from_cli(args)
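With both hunks applied, the host/port argument handling reduces to the following (reassembled here for readability; the surrounding parser setup is simplified):

```python
import argparse
import os

def build_parser() -> argparse.ArgumentParser:
    # Resolution order for each setting: CLI flag wins, then the
    # environment variable, then a hard-coded default.
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--host',
        default=os.environ.get('MCP_SERVER_HOST', '0.0.0.0'),
        help='Host to bind the MCP server to (default: MCP_SERVER_HOST environment variable or 0.0.0.0)',
    )
    parser.add_argument(
        '--port',
        type=int,
        default=int(os.environ.get('PORT', '8000')),
        help='Port to bind the MCP server to (default: PORT environment variable or 8000)',
    )
    return parser
```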

railway.json

@@ -0,0 +1,12 @@
{
  "$schema": "https://railway.app/railway.schema.json",
  "build": {
    "builder": "dockerfile",
    "dockerfilePath": "Dockerfile"
  },
  "deploy": {
    "startCommand": "uv run graphiti_mcp_server.py --transport sse",
    "restartPolicyType": "on_failure",
    "restartPolicyMaxRetries": 10
  }
}