# cognee-mcp – Run cognee’s memory engine as a Model Context Protocol server
Demo · Learn more · Join Discord · Join r/AIMemory

Build memory for Agents and query from any client that speaks MCP – in your terminal or IDE.
## ✨ Features
- **Multiple transports** – choose Streamable HTTP (`--transport http`, recommended for web deployments), SSE (`--transport sse`, real-time streaming), or stdio (the classic pipe, and the default); see the example client config after this list
- **API Mode** – connect to an already running Cognee FastAPI server instead of using cognee directly (see [API Mode](#-api-mode) below)
- **Integrated logging** – all actions are written to a rotating log file (see `get_log_file_location()`) and mirrored to the console in development
- **Local file ingestion** – feed `.md` files, source code, Cursor rule-sets, and more straight from disk
- **Background pipelines** – long-running `cognify` and `codify` jobs run off-thread; check their progress with the status tools
- **Developer rules bootstrap** – one call indexes `.cursorrules`, `.cursor/rules`, `AGENT.md`, and friends into the `developer_rules` nodeset
- **Prune & reset** – wipe memory clean with a single `prune` call when you want to start fresh
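Because the server speaks standard MCP over stdio, any MCP-capable client can launch it directly. Below is a minimal sketch of a client registration in the common `mcpServers` JSON format used by clients such as Cursor and Claude Desktop; the absolute paths and the key name `cognee` are placeholders you must adapt to your own checkout.
```
{
  "mcpServers": {
    "cognee": {
      "command": "/absolute/path/to/cognee/cognee-mcp/.venv/bin/python",
      "args": ["/absolute/path/to/cognee/cognee-mcp/src/server.py"],
      "env": {
        "LLM_API_KEY": "YOUR_OPENAI_API_KEY"
      }
    }
  }
}
```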
Please refer to our documentation [here](https://docs.cognee.ai/how-to-guides/deployment/mcp) for further information.
## 🚀 Quick Start
1. Clone the cognee repo
```
git clone https://github.com/topoteretes/cognee.git
```
2. Navigate to the cognee-mcp subdirectory
```
cd cognee/cognee-mcp
```
3. Install uv if you don't already have it
```
pip install uv
```
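Alternatively, uv's standalone installer script works even without an existing Python or pip on your path:
```
# Official uv installer from Astral – downloads and installs the uv binary
curl -LsSf https://astral.sh/uv/install.sh | sh
```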
4. Install the dependencies for the cognee MCP server with uv
```
uv sync --dev --all-extras --reinstall
```
5. Activate the virtual environment in the cognee-mcp directory
```
source .venv/bin/activate
```
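On Windows, activate with `.venv\Scripts\activate` instead.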
6. Set your OpenAI API key in a .env file for a quick start with the default cognee configuration
```
LLM_API_KEY="YOUR_OPENAI_API_KEY"
```
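If you want a non-OpenAI provider or a specific model, the same .env file can carry additional settings. The extra variable names below are assumptions based on cognee's configuration conventions; check the linked documentation for the authoritative list.
```
LLM_API_KEY="YOUR_API_KEY"
# Assumed variable names – verify against cognee's docs / .env template:
LLM_PROVIDER="openai"
LLM_MODEL="gpt-4o-mini"
```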
7. Run the cognee MCP server with the stdio transport (the default)
```
python src/server.py
```
or stream responses over SSE
```
python src/server.py --transport sse
```
or run with Streamable HTTP transport (recommended for web deployments)
```
python src/server.py --transport http --host 127.0.0.1 --port 8000 --path /mcp
```
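Once the HTTP server is running, you can sanity-check it with a raw JSON-RPC `initialize` request. This is the generic MCP Streamable HTTP handshake rather than anything cognee-specific, and the `protocolVersion` value is an assumption – use whichever protocol revision your client targets.
```
# Generic MCP Streamable HTTP handshake (assumed protocolVersion; not cognee-specific)
curl -s -X POST http://127.0.0.1:8000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"0.0.0"}}}'
```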
You can do more advanced configuration by creating a .env file using our