Swagger endpoint docstrings (#1087)


## Description

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to
the terms of the Topoteretes Developer Certificate of Origin.

---------

Co-authored-by: vasilije <vas.markovic@gmail.com>
This commit is contained in:
Igor Ilic 2025-07-14 15:24:31 +02:00 committed by GitHub
parent a2d16c99a1
commit 219db2f03d
GPG key ID: B5690EEEBB952194
10 changed files with 302 additions and 266 deletions


@@ -30,36 +30,34 @@ def get_add_router() -> APIRouter:
This endpoint accepts various types of data (files, URLs, GitHub repositories)
and adds them to a specified dataset for processing. The data is ingested,
analyzed, and integrated into the knowledge graph. Either datasetName or
datasetId must be provided to specify the target dataset.
analyzed, and integrated into the knowledge graph.
Args:
data (List[UploadFile]): List of files to upload. Can also include:
- HTTP URLs (if ALLOW_HTTP_REQUESTS is enabled)
- GitHub repository URLs (will be cloned and processed)
- Regular file uploads
datasetName (Optional[str]): Name of the dataset to add data to
datasetId (Optional[UUID]): UUID of the dataset to add data to
user: The authenticated user adding the data
## Request Parameters
- **data** (List[UploadFile]): List of files to upload. Can also include:
- HTTP URLs (if ALLOW_HTTP_REQUESTS is enabled)
- GitHub repository URLs (will be cloned and processed)
- Regular file uploads
- **datasetName** (Optional[str]): Name of the dataset to add data to
- **datasetId** (Optional[UUID]): UUID of the dataset to add data to
Returns:
dict: Information about the add operation containing:
- Status of the operation
- Details about the processed data
- Any relevant metadata from the ingestion process
Either datasetName or datasetId must be provided.
Raises:
ValueError: If neither datasetId nor datasetName is provided
HTTPException: If there's an error during the add operation
PermissionDeniedError: If the user doesn't have permission to add to the dataset
## Response
Returns information about the add operation containing:
- Status of the operation
- Details about the processed data
- Any relevant metadata from the ingestion process
Note:
- To add data to a dataset not owned by the user and for which the user has write permission,
the dataset_id must be used (when ENABLE_BACKEND_ACCESS_CONTROL is set to True)
- GitHub repositories are cloned and all files are processed
- HTTP URLs are fetched and their content is processed
- Regular files are uploaded and processed directly
- The ALLOW_HTTP_REQUESTS environment variable controls URL processing
## Error Codes
- **400 Bad Request**: Neither datasetId nor datasetName provided
- **409 Conflict**: Error during add operation
- **403 Forbidden**: User doesn't have permission to add to dataset
## Notes
- To add data to datasets not owned by the user, use dataset_id (when ENABLE_BACKEND_ACCESS_CONTROL is set to True)
- GitHub repositories are cloned and all files are processed
- HTTP URLs are fetched and their content is processed
- The ALLOW_HTTP_REQUESTS environment variable controls URL processing
"""
from cognee.api.v1.add import add as cognee_add
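The either/or dataset rule this docstring documents can be mirrored as a client-side check before building the multipart request. This is a hypothetical sketch; the helper name and form-field casing are assumptions for illustration, not part of the cognee API.

```python
from typing import Optional
from uuid import UUID

def build_add_form(dataset_name: Optional[str] = None,
                   dataset_id: Optional[UUID] = None) -> dict:
    """Build the non-file form fields for an add request."""
    # The endpoint requires either datasetName or datasetId; fail fast
    # client-side instead of waiting for a 400 Bad Request.
    if dataset_name is None and dataset_id is None:
        raise ValueError("Either datasetName or datasetId must be provided")
    form = {}
    if dataset_name is not None:
        form["datasetName"] = dataset_name
    if dataset_id is not None:
        # UUIDs are serialized to strings for form transport.
        form["datasetId"] = str(dataset_id)
    return form
```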


@@ -31,7 +31,22 @@ def get_code_pipeline_router() -> APIRouter:
@router.post("/index", response_model=None)
async def code_pipeline_index(payload: CodePipelineIndexPayloadDTO):
"""This endpoint is responsible for running the indexation on code repo."""
"""
Run indexation on a code repository.
This endpoint processes a code repository to create a knowledge graph
of the codebase structure, dependencies, and relationships.
## Request Parameters
- **repo_path** (str): Path to the code repository
- **include_docs** (bool): Whether to include documentation files (default: false)
## Response
No content returned. Processing results are logged.
## Error Codes
- **409 Conflict**: Error during indexation process
"""
from cognee.api.v1.cognify.code_graph_pipeline import run_code_graph_pipeline
try:
@@ -42,7 +57,22 @@ def get_code_pipeline_router() -> APIRouter:
@router.post("/retrieve", response_model=list[dict])
async def code_pipeline_retrieve(payload: CodePipelineRetrievePayloadDTO):
"""This endpoint is responsible for retrieving the context."""
"""
Retrieve context from the code knowledge graph.
This endpoint searches the indexed code repository to find relevant
context based on the provided query.
## Request Parameters
- **query** (str): Search query for code context
- **full_input** (str): Full input text for processing
## Response
Returns a list of relevant code files and context as JSON.
## Error Codes
- **409 Conflict**: Error during retrieval process
"""
try:
query = (
payload.full_input.replace("cognee ", "")
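The query derivation shown in this hunk can be isolated as a small helper. Note that `str.replace` strips every occurrence of the `"cognee "` prefix, not only a leading one, which matches the handler code above:

```python
def derive_query(full_input: str) -> str:
    # Mirrors the handler: the search query is the full input with the
    # "cognee " marker removed wherever it appears.
    return full_input.replace("cognee ", "")
```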


@@ -48,7 +48,7 @@ def get_cognify_router() -> APIRouter:
raw text, documents, and data added through the add endpoint into semantic knowledge graphs.
It performs deep analysis to extract entities, relationships, and insights from ingested content.
The processing pipeline includes:
## Processing Pipeline
1. Document classification and permission validation
2. Text chunking and semantic segmentation
3. Entity extraction using LLM-powered analysis
@@ -56,55 +56,34 @@ def get_cognify_router() -> APIRouter:
5. Vector embeddings generation for semantic search
6. Content summarization and indexing
Args:
payload (CognifyPayloadDTO): Request payload containing processing parameters:
- datasets (Optional[List[str]]): List of dataset names to process.
Dataset names are resolved to datasets owned by the authenticated user.
- dataset_ids (Optional[List[UUID]]): List of dataset UUIDs to process.
UUIDs allow processing of datasets not owned by the user (if permitted).
- graph_model (Optional[BaseModel]): Custom Pydantic model defining the
knowledge graph schema. Defaults to KnowledgeGraph for general-purpose
processing. Custom models enable domain-specific entity extraction.
- run_in_background (Optional[bool]): Whether to execute processing
asynchronously. Defaults to False (blocking).
## Request Parameters
- **datasets** (Optional[List[str]]): List of dataset names to process. Dataset names are resolved to datasets owned by the authenticated user.
- **dataset_ids** (Optional[List[UUID]]): List of dataset UUIDs to process. UUIDs allow processing of datasets not owned by the user (if permitted).
- **graph_model** (Optional[BaseModel]): Custom Pydantic model defining the knowledge graph schema. Defaults to KnowledgeGraph for general-purpose processing.
- **run_in_background** (Optional[bool]): Whether to execute processing asynchronously. Defaults to False (blocking).
user (User): Authenticated user context injected via dependency injection.
Used for permission validation and data access control.
## Response
- **Blocking execution**: Complete pipeline run information with entity counts, processing duration, and success/failure status
- **Background execution**: Pipeline run metadata including pipeline_run_id for status monitoring via WebSocket subscription
Returns:
dict: Processing results containing:
- For blocking execution: Complete pipeline run information with
entity counts, processing duration, and success/failure status
- For background execution: Pipeline run metadata including
pipeline_run_id for status monitoring via WebSocket subscription
## Error Codes
- **400 Bad Request**: When neither datasets nor dataset_ids are provided, or when specified datasets don't exist
- **409 Conflict**: When processing fails due to system errors, missing LLM API keys, database connection failures, or corrupted content
Raises:
HTTPException 400: Bad Request
- When neither datasets nor dataset_ids are provided
- When specified datasets don't exist or are inaccessible
## Example Request
```json
{
"datasets": ["research_papers", "documentation"],
"run_in_background": false
}
```
HTTPException 409: Conflict
- When processing fails due to system errors
- When LLM API keys are missing or invalid
- When database connections fail
- When content cannot be processed (corrupted files, unsupported formats)
## Notes
To cognify data in datasets not owned by the user and for which the current user has write permission,
the dataset_id must be used (when ENABLE_BACKEND_ACCESS_CONTROL is set to True).
Example Usage:
```python
# Process specific datasets synchronously
POST /api/v1/cognify
{
"datasets": ["research_papers", "documentation"],
"run_in_background": false
}
```
Notes:
To cognify data in a dataset not owned by the user and for which the current user has write permission,
the dataset_id must be used (when ENABLE_BACKEND_ACCESS_CONTROL is set to True)
Next Steps:
After successful processing, use the search endpoints to query the
generated knowledge graph for insights, relationships, and semantic search.
## Next Steps
After successful processing, use the search endpoints to query the generated knowledge graph for insights, relationships, and semantic search.
"""
if not payload.datasets and not payload.dataset_ids:
return JSONResponse(
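The 400-on-missing-datasets rule checked right after the docstring can be anticipated client-side. A hypothetical payload-builder sketch, not cognee code; parameter names come from the request documentation above:

```python
from typing import List, Optional
from uuid import UUID

def build_cognify_payload(datasets: Optional[List[str]] = None,
                          dataset_ids: Optional[List[UUID]] = None,
                          run_in_background: bool = False) -> dict:
    # The endpoint answers 400 Bad Request when neither datasets nor
    # dataset_ids is provided, so reject that combination up front.
    if not datasets and not dataset_ids:
        raise ValueError("Provide datasets or dataset_ids")
    payload: dict = {"run_in_background": run_in_background}
    if datasets:
        payload["datasets"] = datasets
    if dataset_ids:
        # UUIDs are stringified for JSON transport.
        payload["dataset_ids"] = [str(d) for d in dataset_ids]
    return payload
```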


@@ -81,19 +81,16 @@ def get_datasets_router() -> APIRouter:
read permissions for. The datasets are returned with their metadata
including ID, name, creation time, and owner information.
Args:
user: The authenticated user requesting the datasets
## Response
Returns a list of dataset objects containing:
- **id**: Unique dataset identifier
- **name**: Dataset name
- **created_at**: When the dataset was created
- **updated_at**: When the dataset was last updated
- **owner_id**: ID of the dataset owner
Returns:
List[DatasetDTO]: A list of dataset objects containing:
- id: Unique dataset identifier
- name: Dataset name
- created_at: When the dataset was created
- updated_at: When the dataset was last updated
- owner_id: ID of the dataset owner
Raises:
HTTPException: If there's an error retrieving the datasets
## Error Codes
- **418 I'm a teapot**: Error retrieving datasets
"""
try:
datasets = await get_all_user_permission_datasets(user, "read")
@@ -118,21 +115,20 @@ def get_datasets_router() -> APIRouter:
dataset instead of creating a duplicate. The user is automatically granted
all permissions (read, write, share, delete) on the created dataset.
Args:
dataset_data (DatasetCreationPayload): Dataset creation parameters containing:
- name: The name for the new dataset
user: The authenticated user creating the dataset
## Request Parameters
- **dataset_data** (DatasetCreationPayload): Dataset creation parameters containing:
- **name**: The name for the new dataset
Returns:
DatasetDTO: The created or existing dataset object containing:
- id: Unique dataset identifier
- name: Dataset name
- created_at: When the dataset was created
- updated_at: When the dataset was last updated
- owner_id: ID of the dataset owner
## Response
Returns the created or existing dataset object containing:
- **id**: Unique dataset identifier
- **name**: Dataset name
- **created_at**: When the dataset was created
- **updated_at**: When the dataset was last updated
- **owner_id**: ID of the dataset owner
Raises:
HTTPException: If there's an error creating the dataset
## Error Codes
- **418 I'm a teapot**: Error creating dataset
"""
try:
datasets = await get_datasets_by_name([dataset_data.name], user.id)
@@ -169,16 +165,15 @@ def get_datasets_router() -> APIRouter:
This endpoint permanently deletes a dataset and all its associated data.
The user must have delete permissions on the dataset to perform this operation.
Args:
dataset_id (UUID): The unique identifier of the dataset to delete
user: The authenticated user requesting the deletion
## Path Parameters
- **dataset_id** (UUID): The unique identifier of the dataset to delete
Returns:
None: No content returned on successful deletion
## Response
No content returned on successful deletion.
Raises:
DatasetNotFoundError: If the dataset doesn't exist or user doesn't have access
HTTPException: If there's an error during deletion
## Error Codes
- **404 Not Found**: Dataset doesn't exist or user doesn't have access
- **500 Internal Server Error**: Error during deletion
"""
from cognee.modules.data.methods import get_dataset, delete_dataset
@@ -204,18 +199,16 @@ def get_datasets_router() -> APIRouter:
the dataset itself intact. The user must have delete permissions on the
dataset to perform this operation.
Args:
dataset_id (UUID): The unique identifier of the dataset containing the data
data_id (UUID): The unique identifier of the data item to delete
user: The authenticated user requesting the deletion
## Path Parameters
- **dataset_id** (UUID): The unique identifier of the dataset containing the data
- **data_id** (UUID): The unique identifier of the data item to delete
Returns:
None: No content returned on successful deletion
## Response
No content returned on successful deletion.
Raises:
DatasetNotFoundError: If the dataset doesn't exist or user doesn't have access
DataNotFoundError: If the data item doesn't exist in the dataset
HTTPException: If there's an error during deletion
## Error Codes
- **404 Not Found**: Dataset or data item doesn't exist, or user doesn't have access
- **500 Internal Server Error**: Error during deletion
"""
from cognee.modules.data.methods import get_data, delete_data
from cognee.modules.data.methods import get_dataset
@@ -242,18 +235,17 @@ def get_datasets_router() -> APIRouter:
including nodes and edges that represent the relationships between entities
in the dataset. The graph data is formatted for visualization purposes.
Args:
dataset_id (UUID): The unique identifier of the dataset
user: The authenticated user requesting the graph data
## Path Parameters
- **dataset_id** (UUID): The unique identifier of the dataset
Returns:
GraphDTO: The graph data containing:
- nodes: List of graph nodes with id, label, and properties
- edges: List of graph edges with source, target, and label
## Response
Returns the graph data containing:
- **nodes**: List of graph nodes with id, label, and properties
- **edges**: List of graph edges with source, target, and label
Raises:
DatasetNotFoundError: If the dataset doesn't exist or user doesn't have access
HTTPException: If there's an error retrieving the graph data
## Error Codes
- **404 Not Found**: Dataset doesn't exist or user doesn't have access
- **500 Internal Server Error**: Error retrieving graph data
"""
from cognee.modules.data.methods import get_dataset
@@ -279,23 +271,22 @@ def get_datasets_router() -> APIRouter:
to a specific dataset. Each data item includes metadata such as name, type,
creation time, and storage location.
Args:
dataset_id (UUID): The unique identifier of the dataset
user: The authenticated user requesting the data
## Path Parameters
- **dataset_id** (UUID): The unique identifier of the dataset
Returns:
List[DataDTO]: A list of data objects containing:
- id: Unique data item identifier
- name: Data item name
- created_at: When the data was added
- updated_at: When the data was last updated
- extension: File extension
- mime_type: MIME type of the data
- raw_data_location: Storage location of the raw data
## Response
Returns a list of data objects containing:
- **id**: Unique data item identifier
- **name**: Data item name
- **created_at**: When the data was added
- **updated_at**: When the data was last updated
- **extension**: File extension
- **mime_type**: MIME type of the data
- **raw_data_location**: Storage location of the raw data
Raises:
DatasetNotFoundError: If the dataset doesn't exist or user doesn't have access
HTTPException: If there's an error retrieving the data
## Error Codes
- **404 Not Found**: Dataset doesn't exist or user doesn't have access
- **500 Internal Server Error**: Error retrieving data
"""
from cognee.modules.data.methods import get_dataset_data, get_dataset
@@ -327,16 +318,18 @@ def get_datasets_router() -> APIRouter:
indicating whether they are being processed, have completed processing, or
encountered errors during pipeline execution.
Args:
datasets: List of dataset UUIDs to check status for (query parameter "dataset")
user: The authenticated user requesting the status
## Query Parameters
- **dataset** (List[UUID]): List of dataset UUIDs to check status for
Returns:
Dict[str, PipelineRunStatus]: A dictionary mapping dataset IDs to their
processing status (e.g., "pending", "running", "completed", "failed")
## Response
Returns a dictionary mapping dataset IDs to their processing status:
- **pending**: Dataset is queued for processing
- **running**: Dataset is currently being processed
- **completed**: Dataset processing completed successfully
- **failed**: Dataset processing encountered an error
Raises:
HTTPException: If there's an error retrieving the status information
## Error Codes
- **500 Internal Server Error**: Error retrieving status information
"""
from cognee.modules.data.methods import get_dataset_status
@@ -355,18 +348,16 @@ def get_datasets_router() -> APIRouter:
for a specific data item within a dataset. The file is returned as a direct
download with appropriate headers.
Args:
dataset_id (UUID): The unique identifier of the dataset containing the data
data_id (UUID): The unique identifier of the data item to download
user: The authenticated user requesting the download
## Path Parameters
- **dataset_id** (UUID): The unique identifier of the dataset containing the data
- **data_id** (UUID): The unique identifier of the data item to download
Returns:
FileResponse: The raw data file as a downloadable response
## Response
Returns the raw data file as a downloadable response.
Raises:
DatasetNotFoundError: If the dataset doesn't exist or user doesn't have access
DataNotFoundError: If the data item doesn't exist in the dataset
HTTPException: If there's an error accessing the raw data file
## Error Codes
- **404 Not Found**: Dataset or data item doesn't exist, or user doesn't have access
- **500 Internal Server Error**: Error accessing the raw data file
"""
from cognee.modules.data.methods import get_data
from cognee.modules.data.methods import get_dataset_data
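The status endpoint documented above returns a mapping of dataset IDs to one of the states "pending", "running", "completed", or "failed". A hypothetical client-side tally of such a response (the helper is illustrative only):

```python
from collections import Counter

def summarize_statuses(statuses: dict) -> dict:
    """Count datasets per pipeline state from a status response shaped
    like {dataset_id: "pending" | "running" | "completed" | "failed"}."""
    return dict(Counter(statuses.values()))
```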


@@ -24,13 +24,29 @@ def get_delete_router() -> APIRouter:
mode: str = Form("soft"),
user: User = Depends(get_authenticated_user),
):
"""This endpoint is responsible for deleting data from the graph.
"""
Delete data from the knowledge graph.
Args:
data: The data to delete (files, URLs, or text)
dataset_name: Name of the dataset to delete from (default: "main_dataset")
mode: "soft" (default) or "hard" - hard mode also deletes degree-one entity nodes
user: Authenticated user
This endpoint removes specified data from the knowledge graph. It supports
both soft deletion (preserving related entities) and hard deletion (removing
degree-one entity nodes as well).
## Request Parameters
- **data** (List[UploadFile]): The data to delete (files, URLs, or text)
- **dataset_name** (str): Name of the dataset to delete from (default: "main_dataset")
- **dataset_id** (UUID): UUID of the dataset to delete from
- **mode** (str): Deletion mode - "soft" (default) or "hard"
## Response
No content returned on successful deletion.
## Error Codes
- **409 Conflict**: Error during deletion process
- **403 Forbidden**: User doesn't have permission to delete from dataset
## Notes
- **Soft mode**: Preserves related entities and relationships
- **Hard mode**: Also deletes degree-one entity nodes
"""
from cognee.api.v1.delete import delete as cognee_delete
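The soft/hard mode choice documented above can be validated before the request is sent. A hypothetical sketch; the helper name is an assumption, while the field names and defaults come from the parameter docs:

```python
VALID_MODES = {"soft", "hard"}

def build_delete_form(dataset_name: str = "main_dataset",
                      mode: str = "soft") -> dict:
    # "hard" additionally removes degree-one entity nodes (see the notes
    # above); any other value is rejected client-side.
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    return {"dataset_name": dataset_name, "mode": mode}
```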


@@ -25,18 +25,20 @@ def get_permissions_router() -> APIRouter:
to a principal (which can be a user or role). The authenticated user must
have appropriate permissions to grant access to the specified datasets.
Args:
permission_name (str): The name of the permission to grant (e.g., "read", "write", "delete")
dataset_ids (List[UUID]): List of dataset UUIDs to grant permission on
principal_id (UUID): The UUID of the principal (user or role) to grant permission to
user: The authenticated user granting the permission
## Path Parameters
- **principal_id** (UUID): The UUID of the principal (user or role) to grant permission to
Returns:
JSONResponse: Success message indicating permission was assigned
## Request Parameters
- **permission_name** (str): The name of the permission to grant (e.g., "read", "write", "delete")
- **dataset_ids** (List[UUID]): List of dataset UUIDs to grant permission on
Raises:
HTTPException: If there's an error granting the permission
PermissionDeniedError: If the user doesn't have permission to grant access
## Response
Returns a success message indicating permission was assigned.
## Error Codes
- **400 Bad Request**: Invalid request parameters
- **403 Forbidden**: User doesn't have permission to grant access
- **500 Internal Server Error**: Error granting permission
"""
from cognee.modules.users.permissions.methods import authorized_give_permission_on_datasets
@@ -60,16 +62,15 @@ def get_permissions_router() -> APIRouter:
to group permissions and can be assigned to users to manage access control
more efficiently. The authenticated user becomes the owner of the created role.
Args:
role_name (str): The name of the role to create
user: The authenticated user creating the role
## Request Parameters
- **role_name** (str): The name of the role to create
Returns:
JSONResponse: Success message indicating the role was created
## Response
Returns a success message indicating the role was created.
Raises:
HTTPException: If there's an error creating the role
ValidationError: If the role name is invalid or already exists
## Error Codes
- **400 Bad Request**: Invalid role name or role already exists
- **500 Internal Server Error**: Error creating the role
"""
from cognee.modules.users.roles.methods import create_role as create_role_method
@@ -88,18 +89,20 @@ def get_permissions_router() -> APIRouter:
permissions associated with that role. The authenticated user must be
the owner of the role or have appropriate administrative permissions.
Args:
user_id (UUID): The UUID of the user to add to the role
role_id (UUID): The UUID of the role to assign the user to
user: The authenticated user performing the role assignment
## Path Parameters
- **user_id** (UUID): The UUID of the user to add to the role
Returns:
JSONResponse: Success message indicating the user was added to the role
## Request Parameters
- **role_id** (UUID): The UUID of the role to assign the user to
Raises:
HTTPException: If there's an error adding the user to the role
PermissionDeniedError: If the user doesn't have permission to assign roles
ValidationError: If the user or role doesn't exist
## Response
Returns a success message indicating the user was added to the role.
## Error Codes
- **400 Bad Request**: Invalid user or role ID
- **403 Forbidden**: User doesn't have permission to assign roles
- **404 Not Found**: User or role doesn't exist
- **500 Internal Server Error**: Error adding user to role
"""
from cognee.modules.users.roles.methods import add_user_to_role as add_user_to_role_method
@@ -118,18 +121,20 @@ def get_permissions_router() -> APIRouter:
resources and data associated with that tenant. The authenticated user must
be the owner of the tenant or have appropriate administrative permissions.
Args:
user_id (UUID): The UUID of the user to add to the tenant
tenant_id (UUID): The UUID of the tenant to assign the user to
user: The authenticated user performing the tenant assignment
## Path Parameters
- **user_id** (UUID): The UUID of the user to add to the tenant
Returns:
JSONResponse: Success message indicating the user was added to the tenant
## Request Parameters
- **tenant_id** (UUID): The UUID of the tenant to assign the user to
Raises:
HTTPException: If there's an error adding the user to the tenant
PermissionDeniedError: If the user doesn't have permission to assign tenants
ValidationError: If the user or tenant doesn't exist
## Response
Returns a success message indicating the user was added to the tenant.
## Error Codes
- **400 Bad Request**: Invalid user or tenant ID
- **403 Forbidden**: User doesn't have permission to assign tenants
- **404 Not Found**: User or tenant doesn't exist
- **500 Internal Server Error**: Error adding user to tenant
"""
from cognee.modules.users.tenants.methods import add_user_to_tenant
@@ -146,16 +151,15 @@ def get_permissions_router() -> APIRouter:
to organize users and resources in multi-tenant environments, providing
isolation and access control between different groups or organizations.
Args:
tenant_name (str): The name of the tenant to create
user: The authenticated user creating the tenant
## Request Parameters
- **tenant_name** (str): The name of the tenant to create
Returns:
JSONResponse: Success message indicating the tenant was created
## Response
Returns a success message indicating the tenant was created.
Raises:
HTTPException: If there's an error creating the tenant
ValidationError: If the tenant name is invalid or already exists
## Error Codes
- **400 Bad Request**: Invalid tenant name or tenant already exists
- **500 Internal Server Error**: Error creating the tenant
"""
from cognee.modules.users.tenants.methods import create_tenant as create_tenant_method
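The grant-permission request body documented above can be sketched client-side. The allowed permission names are taken from the dataset docs earlier in this PR (read, write, share, delete); the helper itself is hypothetical:

```python
from uuid import UUID

ALLOWED_PERMISSIONS = {"read", "write", "share", "delete"}

def build_grant_body(permission_name: str, dataset_ids: list) -> dict:
    # Reject unknown permission names before the request is sent.
    if permission_name not in ALLOWED_PERMISSIONS:
        raise ValueError(
            f"permission_name must be one of {sorted(ALLOWED_PERMISSIONS)}")
    # Dataset UUIDs are stringified for JSON transport.
    return {"permission_name": permission_name,
            "dataset_ids": [str(d) for d in dataset_ids]}
```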


@@ -74,7 +74,29 @@ def get_responses_router() -> APIRouter:
user: User = Depends(get_authenticated_user),
) -> ResponseBody:
"""
OpenAI-compatible responses endpoint with function calling support
OpenAI-compatible responses endpoint with function calling support.
This endpoint provides OpenAI-compatible API responses with integrated
function calling capabilities for Cognee operations.
## Request Parameters
- **input** (str): The input text to process
- **model** (str): The model to use for processing
- **tools** (Optional[List[Dict]]): Available tools for function calling
- **tool_choice** (Any): Tool selection strategy (default: "auto")
- **temperature** (float): Response randomness (default: 1.0)
## Response
Returns an OpenAI-compatible response body with function call results.
## Error Codes
- **400 Bad Request**: Invalid request parameters
- **500 Internal Server Error**: Error processing request
## Notes
- Compatible with OpenAI API format
- Supports function calling with Cognee tools
- Uses default tools if none provided
"""
# Use default tools if none provided
tools = request.tools or DEFAULT_TOOLS
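The fallback on the line above (`request.tools or DEFAULT_TOOLS`) can be shown in isolation. The placeholder list here stands in for cognee's real `DEFAULT_TOOLS` (defined elsewhere in the router module):

```python
# Placeholder standing in for cognee's DEFAULT_TOOLS; the real list is
# defined in the responses router module.
DEFAULT_TOOLS = [{"type": "function", "name": "search"}]

def resolve_tools(requested):
    # `or` makes both None and an empty list fall back to the defaults;
    # any non-empty list is passed through unchanged.
    return requested or DEFAULT_TOOLS
```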


@@ -38,15 +38,15 @@ def get_search_router() -> APIRouter:
This endpoint retrieves the search history for the authenticated user,
returning a list of previously executed searches with their timestamps.
Returns:
List[SearchHistoryItem]: A list of search history items containing:
- id: Unique identifier for the search
- text: The search query text
- user: User who performed the search
- created_at: When the search was performed
## Response
Returns a list of search history items containing:
- **id**: Unique identifier for the search
- **text**: The search query text
- **user**: User who performed the search
- **created_at**: When the search was performed
Raises:
HTTPException: If there's an error retrieving the search history
## Error Codes
- **500 Internal Server Error**: Error retrieving search history
"""
try:
history = await get_history(user.id, limit=0)
@@ -64,26 +64,24 @@ def get_search_router() -> APIRouter:
relevant nodes based on the provided query. It supports different search
types and can be scoped to specific datasets.
Args:
payload (SearchPayloadDTO): Search parameters containing:
- search_type: Type of search to perform (SearchType)
- datasets: Optional list of dataset names to search within
- dataset_ids: Optional list of dataset UUIDs to search within
- query: The search query string
- top_k: Maximum number of results to return (default: 10)
user: The authenticated user performing the search
## Request Parameters
- **search_type** (SearchType): Type of search to perform
- **datasets** (Optional[List[str]]): List of dataset names to search within
- **dataset_ids** (Optional[List[UUID]]): List of dataset UUIDs to search within
- **query** (str): The search query string
- **top_k** (Optional[int]): Maximum number of results to return (default: 10)
Returns:
List: A list of search results containing relevant nodes from the graph
## Response
Returns a list of search results containing relevant nodes from the graph.
Raises:
HTTPException: If there's an error during the search operation
PermissionDeniedError: If user doesn't have permission to search datasets
## Error Codes
- **409 Conflict**: Error during search operation
- **403 Forbidden**: User doesn't have permission to search datasets (returns empty list)
Note:
- Datasets sent by name will only map to datasets owned by the request sender
- To search datasets not owned by the request sender, dataset UUID is needed
- If permission is denied, returns empty list instead of error
## Notes
- Datasets sent by name will only map to datasets owned by the request sender
- To search datasets not owned by the request sender, dataset UUID is needed
- If permission is denied, returns empty list instead of error
"""
from cognee.api.v1.search import search as cognee_search
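The search payload documented above can be assembled with a small helper. A hypothetical sketch: field names and the `top_k` default of 10 come from the parameter docs, while `search_type` values come from the `SearchType` enum (not enumerated here):

```python
from typing import List, Optional
from uuid import UUID

def build_search_payload(query: str, search_type: str,
                         datasets: Optional[List[str]] = None,
                         dataset_ids: Optional[List[UUID]] = None,
                         top_k: int = 10) -> dict:
    # Dataset names only resolve to datasets owned by the caller; use
    # dataset_ids (UUIDs) to search shared datasets, per the notes above.
    payload: dict = {"search_type": search_type,
                     "query": query,
                     "top_k": top_k}
    if datasets:
        payload["datasets"] = datasets
    if dataset_ids:
        payload["dataset_ids"] = [str(d) for d in dataset_ids]
    return payload
```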


@@ -55,16 +55,13 @@ def get_settings_router() -> APIRouter:
including LLM (Large Language Model) configuration and vector database
configuration. These settings determine how the system processes and stores data.
Args:
user: The authenticated user requesting the settings
## Response
Returns the current system settings containing:
- **llm**: LLM configuration (provider, model, API key)
- **vector_db**: Vector database configuration (provider, URL, API key)
Returns:
SettingsDTO: The current system settings containing:
- llm: LLM configuration (provider, model, API key)
- vector_db: Vector database configuration (provider, URL, API key)
Raises:
HTTPException: If there's an error retrieving the settings
## Error Codes
- **500 Internal Server Error**: Error retrieving settings
"""
from cognee.modules.settings import get_settings as get_cognee_settings
@@ -81,18 +78,16 @@ def get_settings_router() -> APIRouter:
update either the LLM configuration, vector database configuration, or both.
Only provided settings will be updated; others remain unchanged.
Args:
new_settings (SettingsPayloadDTO): The settings to update containing:
- llm: Optional LLM configuration (provider, model, API key)
- vector_db: Optional vector database configuration (provider, URL, API key)
user: The authenticated user making the changes
## Request Parameters
- **llm** (Optional[LLMConfigInputDTO]): LLM configuration (provider, model, API key)
- **vector_db** (Optional[VectorDBConfigInputDTO]): Vector database configuration (provider, URL, API key)
Returns:
None: No content returned on successful save
## Response
No content returned on successful save.
Raises:
HTTPException: If there's an error saving the settings
ValidationError: If the provided settings are invalid
## Error Codes
- **400 Bad Request**: Invalid settings provided
- **500 Internal Server Error**: Error saving settings
"""
from cognee.modules.settings import save_llm_config, save_vector_db_config
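The partial-update semantics documented above (only provided settings change; others remain untouched) can be sketched as a plain merge. A hypothetical illustration, not the cognee implementation:

```python
def apply_settings_update(current: dict, new_settings: dict) -> dict:
    """Merge a settings payload into the current settings: only the keys
    that are present and non-None ("llm", "vector_db") are replaced."""
    updated = dict(current)  # shallow copy; current is left unmodified
    for key in ("llm", "vector_db"):
        if new_settings.get(key) is not None:
            updated[key] = new_settings[key]
    return updated
```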


@@ -22,19 +22,22 @@ def get_visualize_router() -> APIRouter:
This endpoint creates an interactive HTML visualization of the knowledge graph
for a specific dataset. The visualization displays nodes and edges representing
entities and their relationships, allowing users to explore the graph structure
visually. The user must have read permissions on the dataset.
visually.
Args:
dataset_id (UUID): The unique identifier of the dataset to visualize
user: The authenticated user requesting the visualization
## Query Parameters
- **dataset_id** (UUID): The unique identifier of the dataset to visualize
Returns:
HTMLResponse: An HTML page containing the interactive graph visualization
## Response
Returns an HTML page containing the interactive graph visualization.
Raises:
HTTPException: If there's an error generating the visualization
PermissionDeniedError: If the user doesn't have permission to read the dataset
DatasetNotFoundError: If the dataset doesn't exist
## Error Codes
- **404 Not Found**: Dataset doesn't exist
- **403 Forbidden**: User doesn't have permission to read the dataset
- **500 Internal Server Error**: Error generating visualization
## Notes
- User must have read permissions on the dataset
- Visualization is interactive and allows graph exploration
"""
from cognee.api.v1.visualize import visualize_graph
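Since `dataset_id` travels as a query parameter per the docs above, the request URL can be built like this. The route prefix used here is an assumption for illustration, not taken from the source:

```python
from urllib.parse import urlencode
from uuid import UUID

def visualize_url(dataset_id: UUID) -> str:
    # The UUID is stringified and URL-encoded into the query string;
    # "/api/v1/visualize" is a hypothetical prefix.
    return "/api/v1/visualize?" + urlencode({"dataset_id": str(dataset_id)})
```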