cognee API Reference

Overview

The Cognee API can be used in two ways:

  1. via the Python library
  2. via the FastAPI server

Python Library

Module: cognee.config

This module provides the functionality to configure various aspects of the cognee library's operation. It interfaces with the cognee.infrastructure.infrastructure_config module to set configurations for system directories, machine learning models, and other components essential for system performance.

Overview

The config class in this module offers a series of static methods to configure the system's directories, various machine learning models, and other parameters.

Usage

Import the module as follows:

from cognee.config import config

Methods

system_root_directory(system_root_directory: str)

Sets the root directory of the system, where essential system files and operations are managed.
Parameters:
system_root_directory (str): The path to set as the system's root directory.

Example:

config.system_root_directory('/path/to/system/root')

data_root_directory(data_root_directory: str)

Sets the directory for storing data used and generated by the system.
Parameters:
data_root_directory (str): The path to set as the data root directory.

Example:

config.data_root_directory('/path/to/data/root')

set_classification_model(classification_model: object)

Assigns a machine learning model for classification tasks within the system.
Parameters:
classification_model (object): The Pydantic model to use for classification. Check cognee.shared.data_models for existing models.

Example:

config.set_classification_model(model)
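
For illustration, here is a minimal sketch of defining a Pydantic model and registering it for classification. The CategoryLabel model below is hypothetical and stands in for a model from cognee.shared.data_models or one of your own; whether the library expects the model class or an instance is not specified here, so treat this as a sketch.

from pydantic import BaseModel
from cognee.config import config

# Hypothetical classification schema; replace with a model from
# cognee.shared.data_models or your own Pydantic model.
class CategoryLabel(BaseModel):
    label: str
    confidence: float

config.set_classification_model(CategoryLabel)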

set_summarization_model(summarization_model: object)

Sets the Pydantic model to be used for summarization tasks.
Parameters:
summarization_model (object): The model to use for summarization. Check cognee.shared.data_models for existing models.

Example:

config.set_summarization_model(my_summarization_model)

set_llm_model(llm_model: object)

Sets the large language model (LLM) provider to be used for language model tasks.
Parameters:
llm_model (object): The LLM provider or model to use.

Example:

config.set_llm_model("OpenAI")

set_graph_engine(graph_engine: object)

Sets the engine used for graph processing tasks.
Parameters:
graph_engine (object): The graph engine to use. See GraphDBType in cognee.shared.data_models for the available options.

Example:

from cognee.shared.data_models import GraphDBType

config.set_graph_engine(GraphDBType.NEO4J)
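
Putting it together, a minimal configuration sketch might look like the following. The directory paths are placeholders, and the choices of Neo4j and OpenAI simply mirror the examples above.

from cognee.config import config
from cognee.shared.data_models import GraphDBType

# Placeholder paths; point these at real directories on your machine.
config.system_root_directory('/path/to/system/root')
config.data_root_directory('/path/to/data/root')

# Use Neo4j as the graph engine and OpenAI as the LLM provider.
config.set_graph_engine(GraphDBType.NEO4J)
config.set_llm_model("OpenAI")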

FastAPI Server

The FastAPI server exposes the following endpoints:

Endpoint 1: Root

  • URL: /
  • Method: GET
  • Auth Required: No
  • Description: Root endpoint that returns a welcome message.

Response

{
  "message": "Hello, World, I am alive!"
}

Endpoint 2: Health Check

  • URL: /health
  • Method: GET
  • Auth Required: No
  • Description: Health check endpoint that returns the server status.

Response

{
  "status": "OK"
}
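
As a quick check, both endpoints above can be called with the requests library. The base URL http://localhost:8000 is an assumption and depends on how the server is deployed.

import requests

BASE_URL = "http://localhost:8000"  # assumed default; adjust to your deployment

# Root endpoint: returns the welcome message.
print(requests.get(f"{BASE_URL}/").json())

# Health check: returns {"status": "OK"} when the server is up.
print(requests.get(f"{BASE_URL}/health").json())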

Endpoint 3: Add

  • URL: /add
  • Method: POST
  • Auth Required: No
  • Description: This endpoint is responsible for adding data to the graph.

Parameters

  • data (Union[str, BinaryIO, List[Union[str, BinaryIO]]], required): The data to be added.
  • dataset_id (UUID, required): The ID of the dataset.
  • dataset_name (str, required): The name of the dataset.

Response

{
  "response": "data"
}
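
A sketch of calling /add with the requests library, assuming the parameters are sent as a JSON body and the server runs on http://localhost:8000; the payload shape and example values are assumptions, not a confirmed request schema.

import requests
import uuid

BASE_URL = "http://localhost:8000"  # assumed default

# Assumed payload shape; check the server's OpenAPI docs for the exact schema.
payload = {
    "data": "Natural language processing (NLP) is a subfield of computer science.",
    "dataset_id": str(uuid.uuid4()),
    "dataset_name": "example_dataset",
}

response = requests.post(f"{BASE_URL}/add", json=payload)
print(response.json())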

Endpoint 4: Cognify

  • URL: /cognify
  • Method: POST
  • Auth Required: No
  • Description: This endpoint is responsible for the cognitive processing of the content.

Parameters

  • datasets (Union[str, List[str]], required): The dataset name(s) to be processed.

Response

{
  "response": "data"
}
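
A minimal sketch of triggering cognitive processing over a dataset with requests; as above, the JSON body shape and the base URL are assumptions.

import requests

BASE_URL = "http://localhost:8000"  # assumed default

# "example_dataset" refers to the dataset added via /add in the sketch above.
response = requests.post(f"{BASE_URL}/cognify", json={"datasets": "example_dataset"})
print(response.json())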

Endpoint 5: Search

  • URL: /search
  • Method: POST
  • Auth Required: No
  • Description: This endpoint is responsible for searching for nodes in the graph.

Parameters

  • query_params (Dict[str, Any], required): The parameters used to query nodes in the graph.

Response

{
  "response": "data"
}
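
A sketch of querying the graph via /search with requests; the inner structure of query_params (a "query" key holding the search text) is an assumption for illustration, not a documented schema.

import requests

BASE_URL = "http://localhost:8000"  # assumed default

# Assumed shape of query_params; consult the server's OpenAPI docs for the real schema.
payload = {"query_params": {"query": "What is natural language processing?"}}

response = requests.post(f"{BASE_URL}/search", json=payload)
print(response.json())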