<!-- .github/pull_request_template.md -->
## Description
Adds a Neptune Analytics "hybrid" integration to Cognee.ai's memory storage layer. With the following configuration, Cognee.ai uses Amazon Neptune Analytics to store all nodes, edges, and vector embeddings for its conversation memory.
```python
cognee.config.set_graph_db_config(
    {
        "graph_database_provider": "neptune_analytics",  # Specify Neptune Analytics as the graph provider
        "graph_database_url": graph_endpoint_url,  # Neptune Analytics endpoint in the format neptune-graph://<GRAPH_ID>
    }
)
cognee.config.set_vector_db_config(
    {
        "vector_db_provider": "neptune_analytics",  # Specify Neptune Analytics as the vector provider
        "vector_db_url": graph_endpoint_url,  # Neptune Analytics endpoint in the format neptune-graph://<GRAPH_ID>
    }
)
```
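
For context, here is a minimal sketch of how this configuration might be used in a full Cognee.ai flow. It assumes the usual `cognee.add` / `cognee.cognify` / `cognee.search` pipeline; the `SearchType` import path, the `INSIGHTS` query type, and the placeholder `<GRAPH_ID>` endpoint are assumptions and may differ across cognee versions.

```python
# Minimal usage sketch (not part of this PR's test suite).
import asyncio

import cognee
from cognee import SearchType  # assumed import path; may vary by cognee version


async def main():
    # Point both the graph store and the vector store at the same
    # Neptune Analytics graph, so one graph holds topology and embeddings.
    graph_endpoint_url = "neptune-graph://<GRAPH_ID>"  # placeholder graph ID
    cognee.config.set_graph_db_config(
        {
            "graph_database_provider": "neptune_analytics",
            "graph_database_url": graph_endpoint_url,
        }
    )
    cognee.config.set_vector_db_config(
        {
            "vector_db_provider": "neptune_analytics",
            "vector_db_url": graph_endpoint_url,
        }
    )

    # Ingest text, build the knowledge graph, and query it.
    await cognee.add("Amazon Neptune Analytics is a memory-optimized graph analytics engine.")
    await cognee.cognify()
    results = await cognee.search(
        query_type=SearchType.INSIGHTS,  # assumed search type; signature may differ by version
        query_text="What is Neptune Analytics?",
    )
    print(results)


asyncio.run(main())
```

Pointing both stores at the same `neptune-graph://` endpoint is what makes the integration "hybrid": a single Neptune Analytics graph backs both the graph topology and the vector embeddings.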
For an end-to-end example, see `neptune_analytics_example.py`.