
cognee


We build for developers who need a reliable, production-ready data layer for AI applications

What is cognee?

Cognee implements scalable, modular ECL (Extract, Cognify, Load) pipelines that allow you to interconnect and retrieve past conversations, documents, and audio transcriptions while reducing hallucinations, developer effort, and cost. Try it in a Google Colab notebook or have a look at our documentation.

If you have questions, join our Discord community.

📦 Installation

With pip

pip install cognee

With poetry

poetry add cognee

💻 Basic Usage

Setup

import os

os.environ["LLM_API_KEY"] = "YOUR_OPENAI_API_KEY"

or

import cognee
cognee.config.llm_api_key = "YOUR_OPENAI_API_KEY"

You can also set the variables by creating a .env file; here is our template. To use different LLM providers, check out our documentation for more info.
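
For reference, a minimal .env needs little more than the key used in the snippets above (everything else in the template is optional or provider-specific):

LLM_API_KEY="YOUR_OPENAI_API_KEY"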

If you are using NetworkX, create an account on Graphistry to visualize results:

cognee.config.set_graphistry_config({
    "username": "YOUR_USERNAME",
    "password": "YOUR_PASSWORD"
})

(Optional) To run the UI, go to the cognee-frontend directory and run:

npm run dev

or run everything in a docker container:

docker-compose up

Then navigate to localhost:3000

Simple example

Run the default cognee pipeline:

import cognee

text = """Natural language processing (NLP) is an interdisciplinary
       subfield of computer science and information retrieval"""

await cognee.add(text) # Add a new piece of information

await cognee.cognify() # Use LLMs and cognee to create a knowledge graph

search_results = await cognee.search("INSIGHTS", {'query': 'NLP'}) # Query cognee for the insights

for result in search_results:
    do_something_with_result(result)
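
Note that add, cognify, and search are coroutines, so they must run inside an event loop. In a standalone script, a minimal wrapper around the same calls looks like this:

import asyncio
import cognee

async def main():
    text = """Natural language processing (NLP) is an interdisciplinary
           subfield of computer science and information retrieval"""

    await cognee.add(text)       # Add a new piece of information
    await cognee.cognify()       # Create the knowledge graph

    search_results = await cognee.search("INSIGHTS", {'query': 'NLP'})
    for result in search_results:
        print(result)

if __name__ == "__main__":
    asyncio.run(main())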

Create your own memory store

The cognee framework consists of tasks that can be grouped into pipelines. Each task is an independent piece of business logic that can be chained with other tasks to form a pipeline. These tasks persist data into your memory store, enabling you to search for relevant context from past conversations, documents, or any other data you have stored.

Example: Classify your documents

Here is an example of how it looks for a default cognify pipeline:

  1. To prepare the data for the pipeline run, we first need to add it to our metastore and normalize it:

Start with:

text = """Natural language processing (NLP) is an interdisciplinary
       subfield of computer science and information retrieval"""

await cognee.add(text) # Add a new piece of information

  2. In the next step, we create a task. The task can contain any business logic we need; the important part is that it is encapsulated in one function.

Here we show an example of creating a naive LLM classifier that takes a Pydantic model and then stores the data in both the graph and vector stores after analyzing each chunk. This is just a snippet for reference; feel free to check out the full implementation in our repo.

import asyncio
from typing import Type
from uuid import NAMESPACE_OID, uuid5

from pydantic import BaseModel

# DocumentChunk, extract_categories, and get_vector_engine are cognee
# internals; see the full implementation in the repo for their imports.

async def chunk_naive_llm_classifier(
    data_chunks: list[DocumentChunk],
    classification_model: Type[BaseModel]
):
    # Extract classifications for all chunks concurrently
    chunk_classifications = await asyncio.gather(
        *(extract_categories(chunk.text, classification_model) for chunk in data_chunks)
    )

    # Collect classification data points, using a set to avoid duplicates
    classification_data_points = {
        uuid5(NAMESPACE_OID, cls.label.type)
        for cls in chunk_classifications
    } | {
        uuid5(NAMESPACE_OID, subclass.value)
        for cls in chunk_classifications
        for subclass in cls.label.subclass
    }

    vector_engine = get_vector_engine()
    collection_name = "classification"

    # Define the payload schema for the vector store
    class Keyword(BaseModel):
        uuid: str
        text: str
        chunk_id: str
        document_id: str

    # Ensure the collection exists and retrieve existing data points
    if not await vector_engine.has_collection(collection_name):
        await vector_engine.create_collection(collection_name, payload_schema=Keyword)
        existing_points_map = {}
    else:
        # The full implementation loads the already-stored points here
        existing_points_map = {}
    return data_chunks

...

We have a large number of tasks that can be used in your pipelines, and you can also create your own tasks to fit your business logic.
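
For illustration, a custom task can be as small as one function that transforms the chunks flowing through the pipeline. The task below is hypothetical, not part of cognee:

# A hypothetical custom task: normalize chunk text before storage.
async def normalize_chunks(data_chunks: list[DocumentChunk]):
    for chunk in data_chunks:
        chunk.text = chunk.text.strip()
    return data_chunks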

  3. Once we have our tasks, it is time to group them into a pipeline. This simplified snippet demonstrates how tasks can be added to a pipeline and how they can pass information forward from one to another.

tasks = [
    Task(
        chunk_naive_llm_classifier,
        classification_model=cognee_config.classification_model,
    ),
]

pipeline = run_tasks(tasks, documents)

To see the working code, check the cognee.api.v1.cognify default pipeline in our repo.
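
If run_tasks returns an async generator (an assumption consistent with the task-by-task flow described above), you would consume the pipeline's results as they are produced:

async for result in pipeline:
    print(result)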

Vector retrieval, Graphs and LLMs

Cognee supports a variety of tools and services for different operations:

  • Modular: Cognee is modular by nature, using tasks grouped into pipelines.

  • Local Setup: By default, LanceDB runs locally with NetworkX and OpenAI.

  • Vector Stores: Cognee supports LanceDB, Qdrant, PGVector, and Weaviate for vector storage (see the configuration sketch below).

  • Language Models (LLMs): You can use either Anyscale or Ollama as your LLM provider.

  • Graph Stores: In addition to NetworkX, Neo4j is also supported for graph storage.

  • User management: Create individual user graphs and manage permissions.
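
As an illustration, switching storage backends is a configuration change rather than a code change. The sketch below shows plausible .env entries; the exact variable names and accepted values are assumptions here, so check the project's .env.template for your version:

VECTOR_DB_PROVIDER="pgvector"
GRAPH_DATABASE_PROVIDER="neo4j"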

Demo

Check out our demo notebook here.

Get Started

Install Server

Please see the cognee Quick Start Guide for important configuration information.

docker compose up

Install SDK

Please see the cognee Development Guide for important beta information and usage instructions.

pip install cognee

Star History

Star History Chart

💫 Contributors

contributors