Setup Configuration
Configure Cognee to use your preferred LLM, embedding engine, relational database, vector store, and graph store via environment variables in a local .env file.
This section provides beginner-friendly guides for setting up different backends, with detailed technical information available in expandable sections.
What You Can Configure
Cognee uses a flexible architecture that lets you choose the best tools for your needs. We recommend starting with the defaults to get familiar with Cognee, then customizing each component as needed:
- LLM Providers — Choose from OpenAI, Azure OpenAI, Google Gemini, Anthropic, Ollama, or custom providers (like vLLM) for text generation and reasoning tasks
- Structured Output Backends — Configure LiteLLM + Instructor or BAML for reliable data extraction from LLM responses
- Embedding Providers — Select from OpenAI, Azure OpenAI, Google Gemini, Mistral, Ollama, Fastembed, or custom embedding services to create vector representations for semantic search
- Relational Databases — Use SQLite for local development or Postgres for production to store metadata, documents, and system state
- Vector Stores — Store embeddings in LanceDB, PGVector, ChromaDB, FalkorDB, or Neptune Analytics for similarity search
- Graph Stores — Build knowledge graphs with Kuzu, Kuzu-remote, Neo4j, Neptune, or Neptune Analytics to manage relationships and reasoning
- Dataset Separation & Access Control — Configure dataset-level permissions and isolation
- Sessions & Caching — Enable conversational memory with Redis or filesystem cache adapters
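As a rough sketch, a .env file that picks one option per component might look like the fragment below. The variable names shown are common Cognee conventions, but treat them as assumptions and confirm the exact keys in each provider guide:

```ini
# LLM provider (text generation and reasoning)
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o-mini
LLM_API_KEY=sk-...

# Embedding provider (vectors for semantic search)
EMBEDDING_PROVIDER=openai
EMBEDDING_MODEL=text-embedding-3-small

# Storage backends: relational, vector, and graph
DB_PROVIDER=sqlite
VECTOR_DB_PROVIDER=lancedb
GRAPH_DATABASE_PROVIDER=kuzu
```

The defaults (SQLite, LanceDB, Kuzu) run locally with no extra services, which is why they are the recommended starting point before swapping in production backends such as Postgres or Neo4j.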
Observability & Telemetry
Cognee includes built-in telemetry to help you monitor and debug your knowledge graph operations. You can control telemetry behavior with environment variables:
- TELEMETRY_DISABLED (boolean, optional): Set to `true` to disable all telemetry collection (default: `false`)
When telemetry is enabled, Cognee automatically collects:
- Search query performance metrics
- Processing pipeline execution times
- Error rates and debugging information
- System resource usage
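For example, the TELEMETRY_DISABLED flag described above can be set from Python before Cognee is imported; setting it in your .env file works equally well:

```python
import os

# Must be set before Cognee reads its configuration,
# i.e. before the first `import cognee`.
os.environ["TELEMETRY_DISABLED"] = "true"
```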
Configuration Workflow
1. Install Cognee with all optional dependencies:
   - Local setup: `uv sync --all-extras`
   - Library: `pip install "cognee[all]"`
2. Create a `.env` file in your project root (if you haven't already); see Installation for details.
3. Choose your preferred providers and follow the configuration instructions from the guides below.