# Personal Assistant with Memory

An executive assistant for travel planning that remembers your preferences and plans itineraries accordingly, built with Cognee (a semantic memory system) and Agno (an agent framework).
## Overview
This example demonstrates how to build a personal assistant that:
- Remembers your travel preferences and dietary restrictions
- Plans personalized itineraries based on stored preferences
- Searches memory to retrieve relevant information when making recommendations
- Adapts recommendations based on your specific needs (e.g., vegetarian meals, hotel preferences, beach locations)
The assistant uses Cognee's Semantic Memory layer to store and retrieve user preferences, enabling context-aware travel planning.
## Architecture

- **Agent Framework**: Agno, for building intelligent agents
- **Memory Layer**: Cognee, a semantic knowledge graph for persistent memory
- **Two Implementation Options**:
  - `app.py`: uses OpenAI (default stack)
  - `custom_stack.py`: uses Gemini + Qdrant + FastEmbed (custom stack)
## Prerequisites

- Python >= 3.10 and < 3.14
- The `uv` package manager
- An API key for your chosen LLM provider (OpenAI or Google)
## Installation

### 1. Create a Virtual Environment

```shell
uv venv
source .venv/bin/activate
```
### 2. Install Dependencies

Choose one of the following based on which implementation you want to use.

#### Option A: Default Stack (`app.py`)

Uses OpenAI for both agent execution and the memory layer.

```shell
uv add cognee agno
```
#### Option B: Custom Stack (`custom_stack.py`)

Uses Gemini (LLM), Qdrant (vector store), and FastEmbed (embeddings). In addition to the base dependencies (`cognee` and `agno`), install:

```shell
uv add google-genai cognee-community-vector-adapter-qdrant fastembed
```
## Configuration

Create a `.env` file in the project root with the appropriate environment variables.
### For `app.py` (OpenAI Stack)

```env
# OpenAI API key for agent execution
OPENAI_API_KEY=<your-openai-api-key>

# OpenAI API key for the Cognee memory layer
LLM_API_KEY=<your-openai-api-key>
```
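A quick way to catch misconfiguration early is to check for the required variables before starting the agent. The helper below is not part of this example; it is a hypothetical sketch using only the standard library:

```python
import os

# Variable names required by the OpenAI stack (see the .env example above).
REQUIRED = ("OPENAI_API_KEY", "LLM_API_KEY")

def missing_keys(env=os.environ, required=REQUIRED):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not env.get(name)]

if __name__ == "__main__":
    absent = missing_keys()
    if absent:
        print("Missing environment variables:", ", ".join(absent))
```

For `custom_stack.py` you would pass the Gemini/Qdrant variable names instead of `REQUIRED`.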
### For `custom_stack.py` (Gemini + Qdrant + FastEmbed)

```env
# Google API key for agent execution
GOOGLE_API_KEY=<your-google-api-key>

# Cognee LLM configuration
LLM_PROVIDER=gemini
LLM_MODEL=gemini/gemini-2.5-flash
LLM_API_KEY=<your-google-api-key>

# Cognee embedding configuration
EMBEDDING_PROVIDER=fastembed
EMBEDDING_MODEL=jinaai/jina-embeddings-v2-base-en
EMBEDDING_DIMENSIONS=768

# Qdrant vector store configuration
QDRANT_URL=<your-qdrant-cluster-endpoint>
QDRANT_API_KEY=<your-qdrant-api-key>
```
**Stack components:**

- **Vector Store**: Qdrant
- **LLM**: Gemini 2.5 Flash
- **Embeddings**: FastEmbed (Jina embeddings v2)
## Usage

### Running the Default Stack (`app.py`)

```shell
python app.py
```
The agent will:
- Store your preferences in Cognee's memory
- Demonstrate retrieval by planning a restaurant itinerary for Rome
### Running the Custom Stack (`custom_stack.py`)

```shell
uv run custom_stack.py
```
Same functionality as above, but using the custom stack configuration.
## How It Works

1. **Memory Storage**: When you provide preferences, the agent uses `add_memory` to store them in Cognee's semantic knowledge graph
2. **Memory Retrieval**: When planning itineraries, the agent uses `search_memory` to find relevant preferences
3. **Context-Aware Planning**: The agent applies retrieved preferences to generate personalized recommendations
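The store/retrieve loop above can be sketched with an in-memory stand-in. Cognee itself builds a semantic knowledge graph; the `MemoryStub` class below is a hypothetical illustration of the tool contract using naive keyword matching, not the repo's actual `CogneeTools`:

```python
class MemoryStub:
    """Illustrative in-memory stand-in for the Cognee-backed tools."""

    def __init__(self):
        self._memories = []

    def add_memory(self, text: str) -> None:
        """Store a preference so later searches can retrieve it."""
        self._memories.append(text)

    def search_memory(self, query: str) -> list[str]:
        """Return stored memories sharing at least one word with the query."""
        terms = {word.lower() for word in query.split()}
        return [
            memory for memory in self._memories
            if terms & {word.lower() for word in memory.split()}
        ]


memory = MemoryStub()
memory.add_memory("I only eat vegetarian meals")
memory.add_memory("I prefer hotels near the beach")

# The agent would call search_memory before planning an itinerary.
print(memory.search_memory("vegetarian restaurants in Rome"))
# → ['I only eat vegetarian meals']
```

In the real example, semantic search means the query does not need to share literal words with the stored preference; the knowledge graph retrieves related concepts.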
## Example Interaction

```python
# Store preferences
agent.print_response(MY_PREFERENCE, stream=True)

# Get recommendations based on stored preferences
agent.print_response("I am visiting Rome, give me a list of restaurants to stop by", stream=True)
```
## Project Structure

```text
agno-agents/
├── app.py            # Default implementation (OpenAI)
├── custom_stack.py   # Custom implementation (Gemini + Qdrant)
├── tools.py          # CogneeTools integration for Agno
├── constants.py      # Agent instructions and user preferences
├── README.md         # This file
└── .env              # Environment variables (create this)
```
## Customization

You can customize the assistant by modifying:

- `constants.py`: Update `MY_PREFERENCE` with your own preferences and `INSTRUCTIONS` to change agent behavior
- `tools.py`: Extend `CogneeTools` to add more memory operations
- **Environment variables**: Switch between different LLM providers, vector stores, or embedding models
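As a reference point, `constants.py` holds plain string constants. The values below are hypothetical examples of the shape they might take, not the repo's actual contents:

```python
# Hypothetical constants.py contents; replace with your own preferences.
# The actual strings shipped with the example may differ.

MY_PREFERENCE = (
    "I am vegetarian, I prefer boutique hotels near the beach, "
    "and I like walking tours over bus tours."
)

INSTRUCTIONS = (
    "You are a travel-planning assistant. Before recommending anything, "
    "search memory for the user's stored preferences and apply them, "
    "for example by suggesting only vegetarian-friendly restaurants."
)
```

Because these are ordinary module-level strings, editing them requires no changes to `app.py` or `custom_stack.py`.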
## Troubleshooting

- **Import errors**: Ensure all dependencies are installed with `uv sync`
- **API key errors**: Verify your `.env` file contains valid API keys
- **Qdrant connection errors**: For `custom_stack.py`, ensure your Qdrant cluster is accessible and the credentials are correct