Swagger endpoint docstrings (#1087)

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to
the terms of the Topoteretes Developer Certificate of Origin.

---------

Co-authored-by: vasilije <vas.markovic@gmail.com>
Committed by Igor Ilic on 2025-07-14 15:24:31 +02:00 via GitHub
commit 219db2f03d (parent a2d16c99a1)
10 changed files with 302 additions and 266 deletions


@@ -30,36 +30,34 @@ def get_add_router() -> APIRouter:
This endpoint accepts various types of data (files, URLs, GitHub repositories)
and adds them to a specified dataset for processing. The data is ingested,
analyzed, and integrated into the knowledge graph.

## Request Parameters
- **data** (List[UploadFile]): List of files to upload. Can also include:
  - HTTP URLs (if ALLOW_HTTP_REQUESTS is enabled)
  - GitHub repository URLs (will be cloned and processed)
  - Regular file uploads
- **datasetName** (Optional[str]): Name of the dataset to add data to
- **datasetId** (Optional[UUID]): UUID of the dataset to add data to

Either datasetName or datasetId must be provided.

## Response
Returns information about the add operation containing:
- Status of the operation
- Details about the processed data
- Any relevant metadata from the ingestion process

## Error Codes
- **400 Bad Request**: Neither datasetId nor datasetName provided
- **409 Conflict**: Error during add operation
- **403 Forbidden**: User doesn't have permission to add to dataset

## Notes
- To add data to datasets not owned by the user, use dataset_id (when ENABLE_BACKEND_ACCESS_CONTROL is set to True)
- GitHub repositories are cloned and all files are processed
- HTTP URLs are fetched and their content is processed
- The ALLOW_HTTP_REQUESTS environment variable controls URL processing
"""
from cognee.api.v1.add import add as cognee_add
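The "either datasetName or datasetId" rule documented above can be mirrored client-side before issuing a request. A minimal sketch, where `validate_add_target` is a hypothetical helper and not part of this PR:

```python
from typing import Optional
from uuid import UUID

def validate_add_target(
    dataset_name: Optional[str] = None,
    dataset_id: Optional[UUID] = None,
) -> None:
    # Mirrors the documented 400 Bad Request condition: either
    # datasetName or datasetId must accompany the uploaded data.
    if dataset_name is None and dataset_id is None:
        raise ValueError("Either datasetName or datasetId must be provided")
```

Checking this before the upload avoids a round trip that the server would reject with 400.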


@@ -31,7 +31,22 @@ def get_code_pipeline_router() -> APIRouter:
@router.post("/index", response_model=None)
async def code_pipeline_index(payload: CodePipelineIndexPayloadDTO):
    """
    Run indexation on a code repository.

    This endpoint processes a code repository to create a knowledge graph
    of the codebase structure, dependencies, and relationships.

    ## Request Parameters
    - **repo_path** (str): Path to the code repository
    - **include_docs** (bool): Whether to include documentation files (default: false)

    ## Response
    No content returned. Processing results are logged.

    ## Error Codes
    - **409 Conflict**: Error during indexation process
    """
    from cognee.api.v1.cognify.code_graph_pipeline import run_code_graph_pipeline

    try:

@@ -42,7 +57,22 @@ def get_code_pipeline_router() -> APIRouter:
@router.post("/retrieve", response_model=list[dict])
async def code_pipeline_retrieve(payload: CodePipelineRetrievePayloadDTO):
    """
    Retrieve context from the code knowledge graph.

    This endpoint searches the indexed code repository to find relevant
    context based on the provided query.

    ## Request Parameters
    - **query** (str): Search query for code context
    - **full_input** (str): Full input text for processing

    ## Response
    Returns a list of relevant code files and context as JSON.

    ## Error Codes
    - **409 Conflict**: Error during retrieval process
    """
    try:
        query = (
            payload.full_input.replace("cognee ", "")
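The visible handler fragment derives the search query by stripping a leading "cognee " token from `full_input`. Sketched in isolation (the function name is ours, not the endpoint's):

```python
def derive_query(full_input: str) -> str:
    # Reproduces the transformation shown in the retrieve handler:
    # payload.full_input.replace("cognee ", "")
    return full_input.replace("cognee ", "")
```

Note that `str.replace` removes every occurrence of the token, not just a leading one, which matches the code shown in the diff.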


@@ -48,7 +48,7 @@ def get_cognify_router() -> APIRouter:
raw text, documents, and data added through the add endpoint into semantic knowledge graphs.
It performs deep analysis to extract entities, relationships, and insights from ingested content.

## Processing Pipeline
1. Document classification and permission validation
2. Text chunking and semantic segmentation
3. Entity extraction using LLM-powered analysis

@@ -56,55 +56,34 @@ def get_cognify_router() -> APIRouter:
5. Vector embeddings generation for semantic search
6. Content summarization and indexing

## Request Parameters
- **datasets** (Optional[List[str]]): List of dataset names to process. Dataset names are resolved to datasets owned by the authenticated user.
- **dataset_ids** (Optional[List[UUID]]): List of dataset UUIDs to process. UUIDs allow processing of datasets not owned by the user (if permitted).
- **graph_model** (Optional[BaseModel]): Custom Pydantic model defining the knowledge graph schema. Defaults to KnowledgeGraph for general-purpose processing.
- **run_in_background** (Optional[bool]): Whether to execute processing asynchronously. Defaults to False (blocking).

## Response
- **Blocking execution**: Complete pipeline run information with entity counts, processing duration, and success/failure status
- **Background execution**: Pipeline run metadata including pipeline_run_id for status monitoring via WebSocket subscription

## Error Codes
- **400 Bad Request**: When neither datasets nor dataset_ids are provided, or when specified datasets don't exist
- **409 Conflict**: When processing fails due to system errors, missing LLM API keys, database connection failures, or corrupted content

## Example Request
```json
{
  "datasets": ["research_papers", "documentation"],
  "run_in_background": false
}
```

## Notes
To cognify data in datasets not owned by the user and for which the current user has write permission,
the dataset_id must be used (when ENABLE_BACKEND_ACCESS_CONTROL is set to True).

## Next Steps
After successful processing, use the search endpoints to query the generated knowledge graph for insights, relationships, and semantic search.
"""
if not payload.datasets and not payload.dataset_ids:
    return JSONResponse(
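A hedged sketch of how a client might assemble the request body shown in the example above, including the documented 400 condition; `build_cognify_payload` is a hypothetical helper, not part of this PR:

```python
import json

def build_cognify_payload(datasets=None, dataset_ids=None, run_in_background=False):
    # Mirrors the documented 400 Bad Request condition: at least one of
    # datasets / dataset_ids must be supplied.
    if not datasets and not dataset_ids:
        raise ValueError("Provide datasets or dataset_ids")
    payload = {"run_in_background": run_in_background}
    if datasets:
        payload["datasets"] = datasets
    if dataset_ids:
        # UUIDs must be serialized as strings for JSON transport.
        payload["dataset_ids"] = [str(u) for u in dataset_ids]
    return json.dumps(payload)
```

Passing `run_in_background=True` would request the asynchronous path, whose pipeline_run_id can then be monitored per the Response section.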


@@ -81,19 +81,16 @@ def get_datasets_router() -> APIRouter:
read permissions for. The datasets are returned with their metadata
including ID, name, creation time, and owner information.

## Response
Returns a list of dataset objects containing:
- **id**: Unique dataset identifier
- **name**: Dataset name
- **created_at**: When the dataset was created
- **updated_at**: When the dataset was last updated
- **owner_id**: ID of the dataset owner

## Error Codes
- **418 I'm a teapot**: Error retrieving datasets
"""
try:
    datasets = await get_all_user_permission_datasets(user, "read")
@@ -118,21 +115,20 @@ def get_datasets_router() -> APIRouter:
dataset instead of creating a duplicate. The user is automatically granted
all permissions (read, write, share, delete) on the created dataset.

## Request Parameters
- **dataset_data** (DatasetCreationPayload): Dataset creation parameters containing:
  - **name**: The name for the new dataset

## Response
Returns the created or existing dataset object containing:
- **id**: Unique dataset identifier
- **name**: Dataset name
- **created_at**: When the dataset was created
- **updated_at**: When the dataset was last updated
- **owner_id**: ID of the dataset owner

## Error Codes
- **418 I'm a teapot**: Error creating dataset
"""
try:
    datasets = await get_datasets_by_name([dataset_data.name], user.id)
@@ -169,16 +165,15 @@ def get_datasets_router() -> APIRouter:
This endpoint permanently deletes a dataset and all its associated data.
The user must have delete permissions on the dataset to perform this operation.

## Path Parameters
- **dataset_id** (UUID): The unique identifier of the dataset to delete

## Response
No content returned on successful deletion.

## Error Codes
- **404 Not Found**: Dataset doesn't exist or user doesn't have access
- **500 Internal Server Error**: Error during deletion
"""
from cognee.modules.data.methods import get_dataset, delete_dataset
@@ -204,18 +199,16 @@ def get_datasets_router() -> APIRouter:
the dataset itself intact. The user must have delete permissions on the
dataset to perform this operation.

## Path Parameters
- **dataset_id** (UUID): The unique identifier of the dataset containing the data
- **data_id** (UUID): The unique identifier of the data item to delete

## Response
No content returned on successful deletion.

## Error Codes
- **404 Not Found**: Dataset or data item doesn't exist, or user doesn't have access
- **500 Internal Server Error**: Error during deletion
"""
from cognee.modules.data.methods import get_data, delete_data
from cognee.modules.data.methods import get_dataset
@@ -242,18 +235,17 @@ def get_datasets_router() -> APIRouter:
including nodes and edges that represent the relationships between entities
in the dataset. The graph data is formatted for visualization purposes.

## Path Parameters
- **dataset_id** (UUID): The unique identifier of the dataset

## Response
Returns the graph data containing:
- **nodes**: List of graph nodes with id, label, and properties
- **edges**: List of graph edges with source, target, and label

## Error Codes
- **404 Not Found**: Dataset doesn't exist or user doesn't have access
- **500 Internal Server Error**: Error retrieving graph data
"""
from cognee.modules.data.methods import get_dataset
@@ -279,23 +271,22 @@ def get_datasets_router() -> APIRouter:
to a specific dataset. Each data item includes metadata such as name, type,
creation time, and storage location.

## Path Parameters
- **dataset_id** (UUID): The unique identifier of the dataset

## Response
Returns a list of data objects containing:
- **id**: Unique data item identifier
- **name**: Data item name
- **created_at**: When the data was added
- **updated_at**: When the data was last updated
- **extension**: File extension
- **mime_type**: MIME type of the data
- **raw_data_location**: Storage location of the raw data

## Error Codes
- **404 Not Found**: Dataset doesn't exist or user doesn't have access
- **500 Internal Server Error**: Error retrieving data
"""
from cognee.modules.data.methods import get_dataset_data, get_dataset
@@ -327,16 +318,18 @@ def get_datasets_router() -> APIRouter:
indicating whether they are being processed, have completed processing, or
encountered errors during pipeline execution.

## Query Parameters
- **dataset** (List[UUID]): List of dataset UUIDs to check status for

## Response
Returns a dictionary mapping dataset IDs to their processing status:
- **pending**: Dataset is queued for processing
- **running**: Dataset is currently being processed
- **completed**: Dataset processing completed successfully
- **failed**: Dataset processing encountered an error

## Error Codes
- **500 Internal Server Error**: Error retrieving status information
"""
from cognee.modules.data.methods import get_dataset_status
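Given the four documented status values, a client polling this endpoint might separate terminal from in-flight datasets. A sketch under that assumption (the helper name is ours):

```python
# Terminal states per the documented status values above.
TERMINAL_STATUSES = {"completed", "failed"}

def in_flight(status_by_id: dict) -> list:
    # "pending" and "running" are the non-terminal states, so these
    # dataset ids still need another poll.
    return [ds for ds, status in status_by_id.items()
            if status not in TERMINAL_STATUSES]
```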
@@ -355,18 +348,16 @@ def get_datasets_router() -> APIRouter:
for a specific data item within a dataset. The file is returned as a direct
download with appropriate headers.

## Path Parameters
- **dataset_id** (UUID): The unique identifier of the dataset containing the data
- **data_id** (UUID): The unique identifier of the data item to download

## Response
Returns the raw data file as a downloadable response.

## Error Codes
- **404 Not Found**: Dataset or data item doesn't exist, or user doesn't have access
- **500 Internal Server Error**: Error accessing the raw data file
"""
from cognee.modules.data.methods import get_data
from cognee.modules.data.methods import get_dataset_data


@@ -24,13 +24,29 @@ def get_delete_router() -> APIRouter:
    mode: str = Form("soft"),
    user: User = Depends(get_authenticated_user),
):
    """
    Delete data from the knowledge graph.

    This endpoint removes specified data from the knowledge graph. It supports
    both soft deletion (preserving related entities) and hard deletion (removing
    degree-one entity nodes as well).

    ## Request Parameters
    - **data** (List[UploadFile]): The data to delete (files, URLs, or text)
    - **dataset_name** (str): Name of the dataset to delete from (default: "main_dataset")
    - **dataset_id** (UUID): UUID of the dataset to delete from
    - **mode** (str): Deletion mode - "soft" (default) or "hard"

    ## Response
    No content returned on successful deletion.

    ## Error Codes
    - **409 Conflict**: Error during deletion process
    - **403 Forbidden**: User doesn't have permission to delete from dataset

    ## Notes
    - **Soft mode**: Preserves related entities and relationships
    - **Hard mode**: Also deletes degree-one entity nodes
    """
    from cognee.api.v1.delete import delete as cognee_delete
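The soft/hard distinction can be enforced client-side before submitting the form. A minimal sketch, where `build_delete_form` is a hypothetical helper rather than anything in this diff:

```python
def build_delete_form(dataset_name: str = "main_dataset", mode: str = "soft") -> dict:
    # "soft" preserves related entities; "hard" also removes degree-one
    # entity nodes, per the endpoint notes.
    if mode not in ("soft", "hard"):
        raise ValueError("mode must be 'soft' or 'hard'")
    return {"dataset_name": dataset_name, "mode": mode}
```

The defaults match the documented ones: `main_dataset` and soft deletion.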


@@ -25,18 +25,20 @@ def get_permissions_router() -> APIRouter:
to a principal (which can be a user or role). The authenticated user must
have appropriate permissions to grant access to the specified datasets.

## Path Parameters
- **principal_id** (UUID): The UUID of the principal (user or role) to grant permission to

## Request Parameters
- **permission_name** (str): The name of the permission to grant (e.g., "read", "write", "delete")
- **dataset_ids** (List[UUID]): List of dataset UUIDs to grant permission on

## Response
Returns a success message indicating permission was assigned.

## Error Codes
- **400 Bad Request**: Invalid request parameters
- **403 Forbidden**: User doesn't have permission to grant access
- **500 Internal Server Error**: Error granting permission
"""
from cognee.modules.users.permissions.methods import authorized_give_permission_on_datasets
@@ -60,16 +62,15 @@ def get_permissions_router() -> APIRouter:
to group permissions and can be assigned to users to manage access control
more efficiently. The authenticated user becomes the owner of the created role.

## Request Parameters
- **role_name** (str): The name of the role to create

## Response
Returns a success message indicating the role was created.

## Error Codes
- **400 Bad Request**: Invalid role name or role already exists
- **500 Internal Server Error**: Error creating the role
"""
from cognee.modules.users.roles.methods import create_role as create_role_method
@@ -88,18 +89,20 @@ def get_permissions_router() -> APIRouter:
permissions associated with that role. The authenticated user must be
the owner of the role or have appropriate administrative permissions.

## Path Parameters
- **user_id** (UUID): The UUID of the user to add to the role

## Request Parameters
- **role_id** (UUID): The UUID of the role to assign the user to

## Response
Returns a success message indicating the user was added to the role.

## Error Codes
- **400 Bad Request**: Invalid user or role ID
- **403 Forbidden**: User doesn't have permission to assign roles
- **404 Not Found**: User or role doesn't exist
- **500 Internal Server Error**: Error adding user to role
"""
from cognee.modules.users.roles.methods import add_user_to_role as add_user_to_role_method
@@ -118,18 +121,20 @@ def get_permissions_router() -> APIRouter:
resources and data associated with that tenant. The authenticated user must
be the owner of the tenant or have appropriate administrative permissions.

## Path Parameters
- **user_id** (UUID): The UUID of the user to add to the tenant

## Request Parameters
- **tenant_id** (UUID): The UUID of the tenant to assign the user to

## Response
Returns a success message indicating the user was added to the tenant.

## Error Codes
- **400 Bad Request**: Invalid user or tenant ID
- **403 Forbidden**: User doesn't have permission to assign tenants
- **404 Not Found**: User or tenant doesn't exist
- **500 Internal Server Error**: Error adding user to tenant
"""
from cognee.modules.users.tenants.methods import add_user_to_tenant
@@ -146,16 +151,15 @@ def get_permissions_router() -> APIRouter:
to organize users and resources in multi-tenant environments, providing
isolation and access control between different groups or organizations.

## Request Parameters
- **tenant_name** (str): The name of the tenant to create

## Response
Returns a success message indicating the tenant was created.

## Error Codes
- **400 Bad Request**: Invalid tenant name or tenant already exists
- **500 Internal Server Error**: Error creating the tenant
"""
from cognee.modules.users.tenants.methods import create_tenant as create_tenant_method


@@ -74,7 +74,29 @@ def get_responses_router() -> APIRouter:
    user: User = Depends(get_authenticated_user),
) -> ResponseBody:
    """
    OpenAI-compatible responses endpoint with function calling support.

    This endpoint provides OpenAI-compatible API responses with integrated
    function calling capabilities for Cognee operations.

    ## Request Parameters
    - **input** (str): The input text to process
    - **model** (str): The model to use for processing
    - **tools** (Optional[List[Dict]]): Available tools for function calling
    - **tool_choice** (Any): Tool selection strategy (default: "auto")
    - **temperature** (float): Response randomness (default: 1.0)

    ## Response
    Returns an OpenAI-compatible response body with function call results.

    ## Error Codes
    - **400 Bad Request**: Invalid request parameters
    - **500 Internal Server Error**: Error processing request

    ## Notes
    - Compatible with OpenAI API format
    - Supports function calling with Cognee tools
    - Uses default tools if none provided
    """
    # Use default tools if none provided
    tools = request.tools or DEFAULT_TOOLS


@@ -38,15 +38,15 @@ def get_search_router() -> APIRouter:
This endpoint retrieves the search history for the authenticated user,
returning a list of previously executed searches with their timestamps.

## Response
Returns a list of search history items containing:
- **id**: Unique identifier for the search
- **text**: The search query text
- **user**: User who performed the search
- **created_at**: When the search was performed

## Error Codes
- **500 Internal Server Error**: Error retrieving search history
"""
try:
    history = await get_history(user.id, limit=0)
@@ -64,26 +64,24 @@ def get_search_router() -> APIRouter:
 relevant nodes based on the provided query. It supports different search
 types and can be scoped to specific datasets.
-Args:
-payload (SearchPayloadDTO): Search parameters containing:
-- search_type: Type of search to perform (SearchType)
-- datasets: Optional list of dataset names to search within
-- dataset_ids: Optional list of dataset UUIDs to search within
-- query: The search query string
-- top_k: Maximum number of results to return (default: 10)
-user: The authenticated user performing the search
-Returns:
-List: A list of search results containing relevant nodes from the graph
-Raises:
-HTTPException: If there's an error during the search operation
-PermissionDeniedError: If user doesn't have permission to search datasets
-Note:
+## Request Parameters
+- **search_type** (SearchType): Type of search to perform
+- **datasets** (Optional[List[str]]): List of dataset names to search within
+- **dataset_ids** (Optional[List[UUID]]): List of dataset UUIDs to search within
+- **query** (str): The search query string
+- **top_k** (Optional[int]): Maximum number of results to return (default: 10)
+## Response
+Returns a list of search results containing relevant nodes from the graph.
+## Error Codes
+- **409 Conflict**: Error during search operation
+- **403 Forbidden**: User doesn't have permission to search datasets (returns empty list)
+## Notes
 - Datasets sent by name will only map to datasets owned by the request sender
 - To search datasets not owned by the request sender, dataset UUID is needed
 - If permission is denied, returns empty list instead of error
 """
 from cognee.api.v1.search import search as cognee_search
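A client calling the search endpoint would send a body with the parameters this hunk documents. A hedged sketch of building such a payload — the helper name, the default `search_type` string, and the commented-out URL are assumptions, not part of Cognee's published API:

```python
def build_search_payload(query, search_type="GRAPH_COMPLETION",
                         datasets=None, dataset_ids=None, top_k=10):
    """Assemble a request body matching the documented search parameters.

    Optional fields are omitted when not supplied, since the endpoint
    treats `datasets` and `dataset_ids` as optional.
    """
    payload = {"search_type": search_type, "query": query, "top_k": top_k}
    if datasets is not None:
        payload["datasets"] = datasets
    if dataset_ids is not None:
        payload["dataset_ids"] = dataset_ids
    return payload


payload = build_search_payload("Who founded Topoteretes?", datasets=["demo"])
# requests.post("http://localhost:8000/api/v1/search", json=payload)  # assumed URL
```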


@@ -55,16 +55,13 @@ def get_settings_router() -> APIRouter:
 including LLM (Large Language Model) configuration and vector database
 configuration. These settings determine how the system processes and stores data.
-Args:
-user: The authenticated user requesting the settings
-Returns:
-SettingsDTO: The current system settings containing:
-- llm: LLM configuration (provider, model, API key)
-- vector_db: Vector database configuration (provider, URL, API key)
-Raises:
-HTTPException: If there's an error retrieving the settings
+## Response
+Returns the current system settings containing:
+- **llm**: LLM configuration (provider, model, API key)
+- **vector_db**: Vector database configuration (provider, URL, API key)
+## Error Codes
+- **500 Internal Server Error**: Error retrieving settings
 """
 from cognee.modules.settings import get_settings as get_cognee_settings
@@ -81,18 +78,16 @@ def get_settings_router() -> APIRouter:
 update either the LLM configuration, vector database configuration, or both.
 Only provided settings will be updated; others remain unchanged.
-Args:
-new_settings (SettingsPayloadDTO): The settings to update containing:
-- llm: Optional LLM configuration (provider, model, API key)
-- vector_db: Optional vector database configuration (provider, URL, API key)
-user: The authenticated user making the changes
-Returns:
-None: No content returned on successful save
-Raises:
-HTTPException: If there's an error saving the settings
-ValidationError: If the provided settings are invalid
+## Request Parameters
+- **llm** (Optional[LLMConfigInputDTO]): LLM configuration (provider, model, API key)
+- **vector_db** (Optional[VectorDBConfigInputDTO]): Vector database configuration (provider, URL, API key)
+## Response
+No content returned on successful save.
+## Error Codes
+- **400 Bad Request**: Invalid settings provided
+- **500 Internal Server Error**: Error saving settings
 """
 from cognee.modules.settings import save_llm_config, save_vector_db_config
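The docstring in this hunk promises partial-update semantics: only the provided sections are overwritten, the rest stay unchanged. A minimal sketch of that merge rule, assuming plain dicts in place of the real `SettingsPayloadDTO`:

```python
def apply_settings_update(current, new_settings):
    """Overwrite only the sections the caller supplied; keep the others.

    Mirrors "Only provided settings will be updated; others remain
    unchanged" from the endpoint docstring. `current` is left untouched.
    """
    updated = dict(current)
    for key in ("llm", "vector_db"):
        if new_settings.get(key) is not None:
            updated[key] = new_settings[key]
    return updated
```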


@@ -22,19 +22,22 @@ def get_visualize_router() -> APIRouter:
 This endpoint creates an interactive HTML visualization of the knowledge graph
 for a specific dataset. The visualization displays nodes and edges representing
 entities and their relationships, allowing users to explore the graph structure
-visually. The user must have read permissions on the dataset.
-Args:
-dataset_id (UUID): The unique identifier of the dataset to visualize
-user: The authenticated user requesting the visualization
-Returns:
-HTMLResponse: An HTML page containing the interactive graph visualization
-Raises:
-HTTPException: If there's an error generating the visualization
-PermissionDeniedError: If the user doesn't have permission to read the dataset
-DatasetNotFoundError: If the dataset doesn't exist
+visually.
+## Query Parameters
+- **dataset_id** (UUID): The unique identifier of the dataset to visualize
+## Response
+Returns an HTML page containing the interactive graph visualization.
+## Error Codes
+- **404 Not Found**: Dataset doesn't exist
+- **403 Forbidden**: User doesn't have permission to read the dataset
+- **500 Internal Server Error**: Error generating visualization
+## Notes
+- User must have read permissions on the dataset
+- Visualization is interactive and allows graph exploration
 """
from cognee.api.v1.visualize import visualize_graph from cognee.api.v1.visualize import visualize_graph
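This hunk maps the old exception-oriented docstring (`DatasetNotFoundError`, `PermissionDeniedError`, generic failure) onto HTTP status codes (404, 403, 500). A hypothetical sketch of that mapping, using stand-in exception classes rather than Cognee's real ones:

```python
class DatasetNotFoundError(Exception):
    """Stand-in for Cognee's dataset-not-found error."""


class PermissionDeniedError(Exception):
    """Stand-in for Cognee's permission-denied error."""


def status_for_error(exc):
    """Translate a visualization failure into the documented status code.

    404 when the dataset doesn't exist, 403 when the user lacks read
    permission, and 500 for any other generation error.
    """
    if isinstance(exc, DatasetNotFoundError):
        return 404
    if isinstance(exc, PermissionDeniedError):
        return 403
    return 500
```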