# Search
Query your AI memory with vectors, graphs, and LLMs
## What is search
Search lets you ask questions over everything you've ingested and cognified.
Under the hood, Cognee blends vector similarity, graph structure, and LLM reasoning to return answers with context and provenance.
## The big picture
- Dataset-aware: searches run against one or more datasets you can read (requires `ENABLE_BACKEND_ACCESS_CONTROL=true`)
- Multiple modes: from simple chunk lookup to graph-aware Q&A
- Hybrid retrieval: vectors find relevant pieces; graphs provide structure; LLMs compose answers
- Conversational memory: use `session_id` to maintain conversation history across searches (requires caching enabled)
- Safe by default: permissions are checked before any retrieval
- Observability: telemetry is emitted for query start/completion
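The hybrid-retrieval idea above can be sketched in a few lines. This is a deliberately tiny toy, not Cognee's implementation: the embeddings, graph, and function names are all invented for illustration.

```python
# Toy hybrid retrieval: vectors find similar pieces, the graph adds structure.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Invented "index": chunk id -> embedding, plus edges between related chunks.
embeddings = {
    "c1": [1.0, 0.0],
    "c2": [0.9, 0.1],
    "c3": [0.0, 1.0],
}
graph = {"c1": ["c3"], "c2": [], "c3": ["c1"]}

def hybrid_retrieve(query_vec, top_k=1):
    # 1) Vector similarity ranks the chunks and picks seeds.
    ranked = sorted(embeddings, key=lambda c: cosine(query_vec, embeddings[c]), reverse=True)
    seeds = ranked[:top_k]
    # 2) The graph pulls in structurally related chunks the vectors alone would miss.
    expanded = set(seeds)
    for seed in seeds:
        expanded.update(graph.get(seed, []))
    return sorted(expanded)

print(hybrid_retrieve([1.0, 0.0]))  # vector hit "c1" plus its graph neighbor "c3"
```

In a real system an LLM would then compose an answer from the expanded context; here the point is only how the two retrieval signals combine.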
## Where search fits
Use search after you've run `.add` and `.cognify`.
At that point, your dataset has chunks, summaries, embeddings, and a knowledge graph—so queries can leverage both similarity and structure.
## How it works (conceptually)
1. **Scope & permissions.** Resolve target datasets (by name or id) and enforce read access.
2. **Mode dispatch.** Pick a search mode (default: graph-aware completion) and route to its retriever.
3. **Retrieve → (optional) generate.** Collect context via vectors and/or graph traversal; some modes then ask an LLM to compose a final answer.
4. **Return results.** Depending on mode: answers, chunks/summaries with metadata, graph records, Cypher results, or code contexts.
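These steps can be illustrated with a minimal dispatcher. Everything here (the `PERMISSIONS` table, the `RETRIEVERS` map, and the stub retrievers) is invented for the sketch and is not Cognee's actual code.

```python
# Invented permission table: (user, dataset) -> granted actions.
PERMISSIONS = {("alice", "docs"): {"read"}}

def check_read_access(user, datasets):
    for ds in datasets:
        if "read" not in PERMISSIONS.get((user, ds), set()):
            raise PermissionError(f"{user} cannot read {ds}")

def chunks_retriever(query):
    return ["chunk about " + query]            # stand-in for vector search

def graph_completion_retriever(query):
    context = ["triplet about " + query]       # stand-in for graph traversal
    return "answer grounded in " + context[0]  # stand-in for the LLM call

RETRIEVERS = {
    "CHUNKS": chunks_retriever,
    "GRAPH_COMPLETION": graph_completion_retriever,
}

def search(user, query, datasets, mode="GRAPH_COMPLETION"):
    check_read_access(user, datasets)  # 1) scope & permissions
    retriever = RETRIEVERS[mode]       # 2) mode dispatch
    return retriever(query)            # 3) retrieve + generate; 4) return results

print(search("alice", "ships", ["docs"]))
```

Note that the permission check happens before any retriever runs, which is the "safe by default" property described above.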
For a practical guide to using search with examples and detailed parameter explanations, see Search Basics.
## Search modes

### Graph-aware question answering
- What it does: Finds relevant graph triplets using vector hints across indexed fields, resolves them into readable context, and asks an LLM to answer your question grounded in that context.
- Why it’s useful: Combines fuzzy matching (vectors) with precise structure (graph) so answers reflect relationships, not just nearby text.
- Typical output: A natural-language answer with references to the supporting graph context.
### RAG completion
- What it does: Pulls top-k chunks via vector search, stitches a context window, then asks an LLM to answer.
- When to use: You want fast, text-only RAG without graph structure.
- Output: An LLM answer grounded in retrieved chunks.
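A sketch of this flow with every component stubbed out: the keyword-overlap ranking stands in for vector search, and the `llm` callable stands in for a real model. None of these names come from Cognee's API.

```python
# Invented toy corpus; a real system would rank by embedding similarity.
CORPUS = ["Paris is in France.", "Berlin is in Germany.", "Paris hosts the Louvre."]

def top_k_chunks(query, k=2):
    # Stand-in ranking: naive keyword overlap instead of vector search.
    q_tokens = set(query.lower().split())
    return sorted(CORPUS, key=lambda c: -len(q_tokens & set(c.lower().split())))[:k]

def rag_answer(query, llm):
    context = "\n".join(top_k_chunks(query))            # stitch a context window
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)                                   # the LLM composes the answer

# Stub LLM for the sketch: just echoes the first context line.
print(rag_answer("Where is Paris?", llm=lambda p: p.splitlines()[1]))
```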
### Chunk retrieval
- What it does: Returns the most similar text chunks to your query via vector search.
- When to use: You want raw passages/snippets to display or post-process.
- Output: Chunk objects with metadata.
### Summary search
- What it does: Vector search on `TextSummary` content for concise, high-signal hits.
- When to use: You prefer short summaries instead of full chunks.
- Output: Summary objects with provenance.
### Graph summary completion
- What it does: Builds graph context like GRAPH_COMPLETION, then condenses it before answering.
- When to use: You want a tighter, summary-first response.
- Output: A concise answer grounded in graph context.
### Chain-of-thought completion
- What it does: Iterative rounds of graph retrieval and LLM reasoning to refine the answer.
- When to use: Complex questions that benefit from stepwise reasoning.
- Output: A refined answer produced through multiple reasoning steps.
### Context extension
- What it does: Starts with initial graph context, lets the LLM suggest follow-ups, fetches more graph context, repeats.
- When to use: Open-ended queries that need broader exploration.
- Output: An answer assembled after expanding the relevant subgraph.
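A toy version of that expansion loop, with a hard-coded graph and a deterministic stand-in for the LLM's follow-up suggestions (all names and data invented for illustration):

```python
# Invented adjacency list standing in for the knowledge graph.
GRAPH = {
    "ships": ["harbors"],
    "harbors": ["cranes"],
    "cranes": [],
}

def suggest_followups(context):
    # Stand-in for the LLM proposing what to explore next:
    # here we simply follow graph edges out of the current context.
    return [n for node in context for n in GRAPH.get(node, [])]

def expand_context(seed, rounds=2):
    context = {seed}
    for _ in range(rounds):
        new = set(suggest_followups(context)) - context
        if not new:          # nothing left to explore
            break
        context |= new       # fetch more graph context and repeat
    return sorted(context)

print(expand_context("ships"))  # ['cranes', 'harbors', 'ships']
```

The answer is then assembled from the expanded subgraph; the loop is what distinguishes this mode from a single-shot graph completion.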
### Natural language to Cypher
- What it does: Infers a Cypher query from your question using the graph schema, runs it, returns the results.
- When to use: You want structured graph answers without writing Cypher.
- Output: Executed graph results.
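Conceptually, this mode turns a question plus schema knowledge into a Cypher string and executes it. A minimal stand-in, template-based rather than LLM-based, with an invented `SCHEMA` (not Cognee's code):

```python
# Invented schema: node label -> relationship types it can have.
SCHEMA = {"Person": ["WORKS_AT"], "Company": []}

def question_to_cypher(entity_label, relation):
    # Stand-in for the LLM step: validate against the schema, then template.
    if relation not in SCHEMA.get(entity_label, []):
        raise ValueError("relation not in schema")
    return f"MATCH (a:{entity_label})-[:{relation}]->(b) RETURN a, b"

print(question_to_cypher("Person", "WORKS_AT"))
```

The real mode would send the generated query to the graph database and return the executed results.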
### Direct Cypher
- What it does: Executes your Cypher query against the graph database.
- When to use: You know the schema and want full control.
- Output: Raw query results.
### Code search
- What it does: Interprets your intent (files/snippets), searches code embeddings and related graph nodes, and assembles relevant source.
- When to use: Codebases indexed by Cognee.
- Output: Structured code contexts and related graph information.
### Automatic mode selection
- What it does: Uses an LLM to pick the most suitable search mode for your query, then runs it.
- When to use: You’re not sure which mode fits best.
- Output: Results from the selected mode.
### Feedback
- What it does: Records user feedback on recent answers and links it to the associated graph elements for future tuning.
- When to use: Closing the loop on quality and relevance.
- Output: A feedback record tied to recent interactions.