Merge remote-tracking branch 'origin/feat/COG-118-remove-unused-code' into feat/COG-118-remove-unused-code

# Conflicts:
#	cognee - Get Started.ipynb
Vasilije 2024-03-17 15:36:48 +01:00
commit d70b93f9a1
32 changed files with 955 additions and 679 deletions

View file

@@ -1,32 +1,32 @@
name: Python Linting
on:
  push:
    branches: [ main ] # This will trigger the workflow on pushes to the main branch
  pull_request: # This will trigger the workflow on any pull request to any branch
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11' # Specify the Python version you want to use
      - name: Install Poetry
        run: |
          curl -sSL https://install.python-poetry.org | python3 - # Install Poetry
      - name: Configure Poetry
        run: |
          poetry config virtualenvs.create false # Configure poetry to not create a new virtual environment
      - name: Install dependencies
        run: |
          poetry install # Install the dependencies specified in pyproject.toml
      - name: Run pylint
        run: |
          pylint $(git ls-files '*.py') # Run pylint on all Python files in the repository
#name: Python Linting
#
#on:
# push:
# branches: [ main ] # This will trigger the workflow on pushes to the main branch
# pull_request: # This will trigger the workflow on any pull request to any branch
#
#jobs:
# lint:
# runs-on: ubuntu-latest
# steps:
# - uses: actions/checkout@v3
# - name: Set up Python
# uses: actions/setup-python@v4
# with:
# python-version: '3.11' # Specify the Python version you want to use
#
# - name: Install Poetry
# run: |
# curl -sSL https://install.python-poetry.org | python3 - # Install Poetry
#
# - name: Configure Poetry
# run: |
# poetry config virtualenvs.create false # Configure poetry to not create a new virtual environment
#
# - name: Install dependencies
# run: |
# poetry install # Install the dependencies specified in pyproject.toml
#
# - name: Run pylint
# run: |
# pylint $(git ls-files '*.py') # Run pylint on all Python files in the repository

View file

@@ -90,6 +91,7 @@ Make data processing for LLMs easy
<p>
Try it yourself on WhatsApp with one of our <a href="https://keepi.ai">partners</a> by typing `/save {content you want to save}` followed by `/query {knowledge you saved previously}`.
For more info, here are the <a href="https://topoteretes.github.io/cognee/">docs</a>.
</p>
@@ -112,6 +113,17 @@ poetry add cognee
Check out our demo notebook [here](cognee%20-%20Get%20Started.ipynb)
- Set OpenAI API Key as an environment variable
```
import os
# Setting an environment variable
os.environ['OPENAI_API_KEY'] = ''
```
- Add a new piece of information to storage
```
import cognee
@@ -145,6 +157,7 @@ cognee.search(graph, query_params)
[<img src="https://i3.ytimg.com/vi/yjParvJVgPI/maxresdefault.jpg" width="100%">](https://www.youtube.com/watch?v=yjParvJVgPI "Learn about cognee: 55")
## Architecture
### How Cognee Enhances Your Contextual Memory

View file

@@ -194,21 +194,10 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "5b3954c1-f537-4be7-a578-1d5037c21374",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Pipeline file_load_from_filesystem load step completed in 0.30 seconds\n",
"1 load package(s) were loaded to destination duckdb and into dataset izmene\n",
"The duckdb destination used duckdb:///:external: location to store data\n",
"Load package 1710664582.887609 is LOADED and contains no failed jobs\n"
]
}
],
"outputs": [],
"source": [
"from os import listdir, path\n",
"from uuid import uuid5, UUID\n",
@@ -223,19 +212,10 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "39df49ca-06f0-4b86-ae27-93c68ddceac3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"/Users/vasa/Projects/cognee/cognee/data/cognee/cognee.duckdb\n",
"['izmene']\n"
]
}
],
"outputs": [],
"source": [
"import duckdb\n",
"from cognee.root_dir import get_absolute_path\n",
@@ -264,35 +244,10 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "97e8647a-052c-4689-b1de-8d81765462e0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['izmene']\n",
"Processing document (881ecb36-2819-54c3-8147-ed80293084d6)\n",
"name 'label_content' is not defined\n"
]
},
{
"ename": "NameError",
"evalue": "name 'label_content' is not defined",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[3], line 7\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mcognee\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mutils\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m render_graph\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28mprint\u001b[39m(list_datasets())\n\u001b[0;32m----> 7\u001b[0m graph \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mawait\u001b[39;00m cognify()\n\u001b[1;32m 9\u001b[0m graph_url \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mawait\u001b[39;00m render_graph(graph, graph_type \u001b[38;5;241m=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mnetworkx\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m 10\u001b[0m \u001b[38;5;28mprint\u001b[39m(graph_url)\n",
"File \u001b[0;32m~/Projects/cognee/cognee/api/v1/cognify/cognify.py:53\u001b[0m, in \u001b[0;36mcognify\u001b[0;34m(datasets, graphdatamodel)\u001b[0m\n\u001b[1;32m 50\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m dataset \u001b[38;5;129;01min\u001b[39;00m datasets:\n\u001b[1;32m 51\u001b[0m awaitables\u001b[38;5;241m.\u001b[39mappend(cognify(dataset))\n\u001b[0;32m---> 53\u001b[0m graphs \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mawait\u001b[39;00m asyncio\u001b[38;5;241m.\u001b[39mgather(\u001b[38;5;241m*\u001b[39mawaitables)\n\u001b[1;32m 54\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m graphs[\u001b[38;5;241m0\u001b[39m]\n\u001b[1;32m 56\u001b[0m files_metadata \u001b[38;5;241m=\u001b[39m db\u001b[38;5;241m.\u001b[39mget_files_metadata(datasets)\n",
"File \u001b[0;32m~/Projects/cognee/cognee/api/v1/cognify/cognify.py:69\u001b[0m, in \u001b[0;36mcognify\u001b[0;34m(datasets, graphdatamodel)\u001b[0m\n\u001b[1;32m 65\u001b[0m text \u001b[38;5;241m=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;241m.\u001b[39mjoin(\u001b[38;5;28mmap\u001b[39m(\u001b[38;5;28;01mlambda\u001b[39;00m element: clean(element\u001b[38;5;241m.\u001b[39mtext), elements))\n\u001b[1;32m 67\u001b[0m awaitables\u001b[38;5;241m.\u001b[39mappend(process_text(text, file_metadata))\n\u001b[0;32m---> 69\u001b[0m graphs \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mawait\u001b[39;00m asyncio\u001b[38;5;241m.\u001b[39mgather(\u001b[38;5;241m*\u001b[39mawaitables)\n\u001b[1;32m 71\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m graphs[\u001b[38;5;241m0\u001b[39m]\n",
"File \u001b[0;32m~/Projects/cognee/cognee/api/v1/cognify/cognify.py:112\u001b[0m, in \u001b[0;36mprocess_text\u001b[0;34m(input_text, file_metadata)\u001b[0m\n\u001b[1;32m 110\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 111\u001b[0m \u001b[38;5;28mprint\u001b[39m(e)\n\u001b[0;32m--> 112\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m e\n\u001b[1;32m 114\u001b[0m \u001b[38;5;28;01mawait\u001b[39;00m add_document_node(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mDefaultGraphModel:\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mUSER_ID\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m, file_metadata)\n\u001b[1;32m 115\u001b[0m \u001b[38;5;28mprint\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mDocument (\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mfile_metadata[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mid\u001b[39m\u001b[38;5;124m'\u001b[39m]\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m) categorized: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mfile_metadata[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcategories\u001b[39m\u001b[38;5;124m'\u001b[39m]\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m)\n",
"File \u001b[0;32m~/Projects/cognee/cognee/api/v1/cognify/cognify.py:104\u001b[0m, in \u001b[0;36mprocess_text\u001b[0;34m(input_text, file_metadata)\u001b[0m\n\u001b[1;32m 100\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m e\n\u001b[1;32m 102\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 103\u001b[0m \u001b[38;5;66;03m# Classify the content into categories\u001b[39;00m\n\u001b[0;32m--> 104\u001b[0m content_labels \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mawait\u001b[39;00m \u001b[43mlabel_content\u001b[49m(\n\u001b[1;32m 105\u001b[0m input_text,\n\u001b[1;32m 106\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mlabel_content.txt\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[1;32m 107\u001b[0m SummarizedContent\n\u001b[1;32m 108\u001b[0m )\n\u001b[1;32m 109\u001b[0m file_metadata[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124msummary\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m content_summary[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124msummary\u001b[39m\u001b[38;5;124m\"\u001b[39m]\n\u001b[1;32m 110\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n",
"\u001b[0;31mNameError\u001b[0m: name 'label_content' is not defined"
]
}
],
"outputs": [],
"source": [
"from os import path, listdir\n",
"from cognee import cognify, list_datasets\n",
@@ -306,91 +261,9 @@
"print(graph_url)\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a0918362-e864-414f-902c-57ce7da6c319",
"metadata": {},
"outputs": [],
"source": [
" from cognee.shared.data_models import GraphDBType\n",
" from cognee.infrastructure.databases.graph.get_graph_client import get_graph_client\n",
" graph_client = get_graph_client(GraphDBType.NETWORKX)\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1878572f-fa96-4953-b1d2-f2b0614a7d8f",
"metadata": {},
"outputs": [],
"source": [
"from cognee.utils import render_graph"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "497ab9c0-2db9-4d5c-b140-bd17226712df",
"metadata": {},
"outputs": [
{
"ename": "TypeError",
"evalue": "'MultiDiGraph' object is not callable",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[8], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m nodes, edges \u001b[38;5;129;01min\u001b[39;00m \u001b[43mgraph_client\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mgraph\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m:\n\u001b[1;32m 2\u001b[0m \u001b[38;5;28mprint\u001b[39m(nodes)\n",
"\u001b[0;31mTypeError\u001b[0m: 'MultiDiGraph' object is not callable"
]
}
],
"source": [
"\n",
"for nodes, edges in graph_client.graph():\n",
" print(nodes)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "0919cd73-6ff9-40a7-90c6-a97f53d08364",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/vasa/Projects/cognee/.venv/lib/python3.10/site-packages/graphistry/util.py:276: RuntimeWarning: Graph has no edges, may have rendering issues\n",
" warnings.warn(RuntimeWarning(msg))\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Graph is visualized at: https://hub.graphistry.com/graph/graph.html?dataset=fde6391bff1a4b00af5cb631a4e2d48e&type=arrow&viztoken=bff2ce55-63ee-4671-926a-8166c32ef44c&usertag=1daaf574-pygraphistry-0.33.5&splashAfter=1710666472&info=true\n",
"None\n"
]
}
],
"source": [
"graph_url = await render_graph(graph_client.graph, graph_type = \"networkx\")\n",
"print(graph_url)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "da6c866c-8150-4b8c-9857-3f0bfe434a97",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a228fb2c-5bbc-48b4-af3d-4a26e840a79e",
"metadata": {},
"outputs": [],
@@ -401,508 +274,13 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"id": "4ee34d29-f75d-4766-8906-b851b57bd5ef",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ministry of Construction, Transport, and Infrastructure\n",
"Rulebook on Amendments and Supplements to the Rulebook on the Content, Manner, and Procedure of Preparation and the Method of Technical Documentation Control According to the Class and Purpose of Buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Construction Permit\n",
"Conceptual Project\n",
"Execution Project\n",
"Completed building project\n",
"Publication Date\n",
"Effective Date\n",
"Law Publication Dates\n",
"Prof. Dr. Zorana Mihajlovic\n",
"Ministry of Construction, Transport and Infrastructure\n",
"The amendment of the Rule on the Content, Mode, and Procedure of Preparation and Means of Control of Technical Documentation According to the Class and Purpose of Buildings\n",
"Publication in the Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Main project for construction permit, conceptual project, and execution project\n",
"Amendment and supplementation regulation to the technical documentation process and control according to the class and purpose of buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Minister of Construction, Transport, and Infrastructure\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Službeni glasnik RS, br. 77/2015\n",
"9.9.2015\n",
"prof. dr Zorana Mihajlović\n",
"4.9.2015\n",
"Ministry of Construction, Transport and Infrastructure\n",
"Regulation on Amendments and Supplements to the Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Main Project\n",
"Law on Planning and Construction\n",
"Regulation on amendments and supplements to the Regulation on the content, method, and procedure for the preparation and manner of controlling technical documentation according to the class and purpose of objects\n",
"Published in Official Gazette of RS, No. 77/2015 on September 9, 2015\n",
"Law on Planning and Construction (\"Official Gazette of RS\", Nos. 72/09, 81/09 - correction, 64/10 - Constitutional Court, 24/11, 121/12, 42/13 - Constitutional Court, 50/13, 98/13 - Constitutional Court)\n",
"Minister, Prof. Dr. Zorana Mihajlovic\n",
"Ministar građevinarstva, saobraćaja i infrastrukture\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Publication in Službeni glasnik RS, No. 77/2015\n",
"Glavni projekat\n",
"Zakon o planiranju i izgradnji\n",
"građevinska dozvola\n",
"idejni projekat\n",
"projekat za izvođenje\n",
"projekat izvedenog objekta\n",
"upotrebna dozvola\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Zakon o planiranju i izgradnji\n",
"\"Službeni glasnik RS\"\n",
"prof. dr Zorana Mihajlovic\n",
"Ministry of Construction, Transport, and Infrastructure\n",
"Rulebook on Amendments and Supplements to the Rulebook on the Content, Manner, and Procedure of Preparation and the Method of Technical Documentation Control According to the Class and Purpose of Buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Construction Permit\n",
"Conceptual Project\n",
"Execution Project\n",
"Completed building project\n",
"Publication Date\n",
"Effective Date\n",
"Law Publication Dates\n",
"Prof. Dr. Zorana Mihajlovic\n",
"Ministry of Construction, Transport and Infrastructure\n",
"The amendment of the Rule on the Content, Mode, and Procedure of Preparation and Means of Control of Technical Documentation According to the Class and Purpose of Buildings\n",
"Publication in the Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Main project for construction permit, conceptual project, and execution project\n",
"Amendment and supplementation regulation to the technical documentation process and control according to the class and purpose of buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Minister of Construction, Transport, and Infrastructure\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Službeni glasnik RS, br. 77/2015\n",
"9.9.2015\n",
"prof. dr Zorana Mihajlović\n",
"4.9.2015\n",
"Ministry of Construction, Transport and Infrastructure\n",
"Regulation on Amendments and Supplements to the Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Main Project\n",
"Law on Planning and Construction\n",
"Regulation on amendments and supplements to the Regulation on the content, method, and procedure for the preparation and manner of controlling technical documentation according to the class and purpose of objects\n",
"Published in Official Gazette of RS, No. 77/2015 on September 9, 2015\n",
"Law on Planning and Construction (\"Official Gazette of RS\", Nos. 72/09, 81/09 - correction, 64/10 - Constitutional Court, 24/11, 121/12, 42/13 - Constitutional Court, 50/13, 98/13 - Constitutional Court)\n",
"Minister, Prof. Dr. Zorana Mihajlovic\n",
"Ministar građevinarstva, saobraćaja i infrastrukture\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Publication in Službeni glasnik RS, No. 77/2015\n",
"Glavni projekat\n",
"Zakon o planiranju i izgradnji\n",
"građevinska dozvola\n",
"idejni projekat\n",
"projekat za izvođenje\n",
"projekat izvedenog objekta\n",
"upotrebna dozvola\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Zakon o planiranju i izgradnji\n",
"\"Službeni glasnik RS\"\n",
"prof. dr Zorana Mihajlovic\n",
"Ministry of Construction, Transport, and Infrastructure\n",
"Rulebook on Amendments and Supplements to the Rulebook on the Content, Manner, and Procedure of Preparation and the Method of Technical Documentation Control According to the Class and Purpose of Buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Construction Permit\n",
"Conceptual Project\n",
"Execution Project\n",
"Completed building project\n",
"Publication Date\n",
"Effective Date\n",
"Law Publication Dates\n",
"Prof. Dr. Zorana Mihajlovic\n",
"Ministry of Construction, Transport and Infrastructure\n",
"The amendment of the Rule on the Content, Mode, and Procedure of Preparation and Means of Control of Technical Documentation According to the Class and Purpose of Buildings\n",
"Publication in the Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Main project for construction permit, conceptual project, and execution project\n",
"Amendment and supplementation regulation to the technical documentation process and control according to the class and purpose of buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Minister of Construction, Transport, and Infrastructure\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Službeni glasnik RS, br. 77/2015\n",
"9.9.2015\n",
"prof. dr Zorana Mihajlović\n",
"4.9.2015\n",
"Ministry of Construction, Transport and Infrastructure\n",
"Regulation on Amendments and Supplements to the Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Main Project\n",
"Law on Planning and Construction\n",
"Regulation on amendments and supplements to the Regulation on the content, method, and procedure for the preparation and manner of controlling technical documentation according to the class and purpose of objects\n",
"Published in Official Gazette of RS, No. 77/2015 on September 9, 2015\n",
"Law on Planning and Construction (\"Official Gazette of RS\", Nos. 72/09, 81/09 - correction, 64/10 - Constitutional Court, 24/11, 121/12, 42/13 - Constitutional Court, 50/13, 98/13 - Constitutional Court)\n",
"Minister, Prof. Dr. Zorana Mihajlovic\n",
"Ministar građevinarstva, saobraćaja i infrastrukture\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Publication in Službeni glasnik RS, No. 77/2015\n",
"Glavni projekat\n",
"Zakon o planiranju i izgradnji\n",
"građevinska dozvola\n",
"idejni projekat\n",
"projekat za izvođenje\n",
"projekat izvedenog objekta\n",
"upotrebna dozvola\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Zakon o planiranju i izgradnji\n",
"\"Službeni glasnik RS\"\n",
"prof. dr Zorana Mihajlovic\n",
"Ministry of Construction, Transport, and Infrastructure\n",
"Rulebook on Amendments and Supplements to the Rulebook on the Content, Manner, and Procedure of Preparation and the Method of Technical Documentation Control According to the Class and Purpose of Buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Construction Permit\n",
"Conceptual Project\n",
"Execution Project\n",
"Completed building project\n",
"Publication Date\n",
"Effective Date\n",
"Law Publication Dates\n",
"Prof. Dr. Zorana Mihajlovic\n",
"Ministry of Construction, Transport and Infrastructure\n",
"The amendment of the Rule on the Content, Mode, and Procedure of Preparation and Means of Control of Technical Documentation According to the Class and Purpose of Buildings\n",
"Publication in the Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Main project for construction permit, conceptual project, and execution project\n",
"Amendment and supplementation regulation to the technical documentation process and control according to the class and purpose of buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Minister of Construction, Transport, and Infrastructure\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Službeni glasnik RS, br. 77/2015\n",
"9.9.2015\n",
"prof. dr Zorana Mihajlović\n",
"4.9.2015\n",
"Ministry of Construction, Transport and Infrastructure\n",
"Regulation on Amendments and Supplements to the Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Main Project\n",
"Law on Planning and Construction\n",
"Regulation on amendments and supplements to the Regulation on the content, method, and procedure for the preparation and manner of controlling technical documentation according to the class and purpose of objects\n",
"Published in Official Gazette of RS, No. 77/2015 on September 9, 2015\n",
"Law on Planning and Construction (\"Official Gazette of RS\", Nos. 72/09, 81/09 - correction, 64/10 - Constitutional Court, 24/11, 121/12, 42/13 - Constitutional Court, 50/13, 98/13 - Constitutional Court)\n",
"Minister, Prof. Dr. Zorana Mihajlovic\n",
"Ministar građevinarstva, saobraćaja i infrastrukture\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Publication in Službeni glasnik RS, No. 77/2015\n",
"Glavni projekat\n",
"Zakon o planiranju i izgradnji\n",
"građevinska dozvola\n",
"idejni projekat\n",
"projekat za izvođenje\n",
"projekat izvedenog objekta\n",
"upotrebna dozvola\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Zakon o planiranju i izgradnji\n",
"\"Službeni glasnik RS\"\n",
"prof. dr Zorana Mihajlovic\n",
"Ministry of Construction, Transport, and Infrastructure\n",
"Rulebook on Amendments and Supplements to the Rulebook on the Content, Manner, and Procedure of Preparation and the Method of Technical Documentation Control According to the Class and Purpose of Buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Construction Permit\n",
"Conceptual Project\n",
"Execution Project\n",
"Completed building project\n",
"Publication Date\n",
"Effective Date\n",
"Law Publication Dates\n",
"Prof. Dr. Zorana Mihajlovic\n",
"Ministry of Construction, Transport and Infrastructure\n",
"The amendment of the Rule on the Content, Mode, and Procedure of Preparation and Means of Control of Technical Documentation According to the Class and Purpose of Buildings\n",
"Publication in the Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Main project for construction permit, conceptual project, and execution project\n",
"Amendment and supplementation regulation to the technical documentation process and control according to the class and purpose of buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Minister of Construction, Transport, and Infrastructure\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Službeni glasnik RS, br. 77/2015\n",
"9.9.2015\n",
"prof. dr Zorana Mihajlović\n",
"4.9.2015\n",
"Ministry of Construction, Transport and Infrastructure\n",
"Regulation on Amendments and Supplements to the Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Main Project\n",
"Law on Planning and Construction\n",
"Regulation on amendments and supplements to the Regulation on the content, method, and procedure for the preparation and manner of controlling technical documentation according to the class and purpose of objects\n",
"Published in Official Gazette of RS, No. 77/2015 on September 9, 2015\n",
"Law on Planning and Construction (\"Official Gazette of RS\", Nos. 72/09, 81/09 - correction, 64/10 - Constitutional Court, 24/11, 121/12, 42/13 - Constitutional Court, 50/13, 98/13 - Constitutional Court)\n",
"Minister, Prof. Dr. Zorana Mihajlovic\n",
"Ministar građevinarstva, saobraćaja i infrastrukture\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Publication in Službeni glasnik RS, No. 77/2015\n",
"Glavni projekat\n",
"Zakon o planiranju i izgradnji\n",
"građevinska dozvola\n",
"idejni projekat\n",
"projekat za izvođenje\n",
"projekat izvedenog objekta\n",
"upotrebna dozvola\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Zakon o planiranju i izgradnji\n",
"\"Službeni glasnik RS\"\n",
"prof. dr Zorana Mihajlovic\n",
"Ministry of Construction, Transport, and Infrastructure\n",
"Rulebook on Amendments and Supplements to the Rulebook on the Content, Manner, and Procedure of Preparation and the Method of Technical Documentation Control According to the Class and Purpose of Buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Construction Permit\n",
"Conceptual Project\n",
"Execution Project\n",
"Completed building project\n",
"Publication Date\n",
"Effective Date\n",
"Law Publication Dates\n",
"Prof. Dr. Zorana Mihajlovic\n",
"Ministry of Construction, Transport and Infrastructure\n",
"The amendment of the Rule on the Content, Mode, and Procedure of Preparation and Means of Control of Technical Documentation According to the Class and Purpose of Buildings\n",
"Publication in the Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Main project for construction permit, conceptual project, and execution project\n",
"Amendment and supplementation regulation to the technical documentation process and control according to the class and purpose of buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Minister of Construction, Transport, and Infrastructure\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Službeni glasnik RS, br. 77/2015\n",
"9.9.2015\n",
"prof. dr Zorana Mihajlović\n",
"4.9.2015\n",
"Ministry of Construction, Transport and Infrastructure\n",
"Regulation on Amendments and Supplements to the Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Main Project\n",
"Law on Planning and Construction\n",
"Regulation on amendments and supplements to the Regulation on the content, method, and procedure for the preparation and manner of controlling technical documentation according to the class and purpose of objects\n",
"Published in Official Gazette of RS, No. 77/2015 on September 9, 2015\n",
"Law on Planning and Construction (\"Official Gazette of RS\", Nos. 72/09, 81/09 - correction, 64/10 - Constitutional Court, 24/11, 121/12, 42/13 - Constitutional Court, 50/13, 98/13 - Constitutional Court)\n",
"Minister, Prof. Dr. Zorana Mihajlovic\n",
"Ministar građevinarstva, saobraćaja i infrastrukture\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Publication in Službeni glasnik RS, No. 77/2015\n",
"Glavni projekat\n",
"Zakon o planiranju i izgradnji\n",
"građevinska dozvola\n",
"idejni projekat\n",
"projekat za izvođenje\n",
"projekat izvedenog objekta\n",
"upotrebna dozvola\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Zakon o planiranju i izgradnji\n",
"\"Službeni glasnik RS\"\n",
"prof. dr Zorana Mihajlovic\n",
"Ministry of Construction, Transport, and Infrastructure\n",
"Rulebook on Amendments and Supplements to the Rulebook on the Content, Manner, and Procedure of Preparation and the Method of Technical Documentation Control According to the Class and Purpose of Buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Construction Permit\n",
"Conceptual Project\n",
"Execution Project\n",
"Completed building project\n",
"Publication Date\n",
"Effective Date\n",
"Law Publication Dates\n",
"Prof. Dr. Zorana Mihajlovic\n",
"Ministry of Construction, Transport and Infrastructure\n",
"The amendment of the Rule on the Content, Mode, and Procedure of Preparation and Means of Control of Technical Documentation According to the Class and Purpose of Buildings\n",
"Publication in the Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Main project for construction permit, conceptual project, and execution project\n",
"Amendment and supplementation regulation to the technical documentation process and control according to the class and purpose of buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Minister of Construction, Transport, and Infrastructure\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Službeni glasnik RS, br. 77/2015\n",
"9.9.2015\n",
"prof. dr Zorana Mihajlović\n",
"4.9.2015\n",
"Ministry of Construction, Transport and Infrastructure\n",
"Regulation on Amendments and Supplements to the Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Main Project\n",
"Law on Planning and Construction\n",
"Regulation on amendments and supplements to the Regulation on the content, method, and procedure for the preparation and manner of controlling technical documentation according to the class and purpose of objects\n",
"Published in Official Gazette of RS, No. 77/2015 on September 9, 2015\n",
"Law on Planning and Construction (\"Official Gazette of RS\", Nos. 72/09, 81/09 - correction, 64/10 - Constitutional Court, 24/11, 121/12, 42/13 - Constitutional Court, 50/13, 98/13 - Constitutional Court)\n",
"Minister, Prof. Dr. Zorana Mihajlovic\n",
"Ministar građevinarstva, saobraćaja i infrastrukture\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Publication in Službeni glasnik RS, No. 77/2015\n",
"Glavni projekat\n",
"Zakon o planiranju i izgradnji\n",
"građevinska dozvola\n",
"idejni projekat\n",
"projekat za izvođenje\n",
"projekat izvedenog objekta\n",
"upotrebna dozvola\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Zakon o planiranju i izgradnji\n",
"\"Službeni glasnik RS\"\n",
"prof. dr Zorana Mihajlovic\n",
"Ministry of Construction, Transport, and Infrastructure\n",
"Rulebook on Amendments and Supplements to the Rulebook on the Content, Manner, and Procedure of Preparation and the Method of Technical Documentation Control According to the Class and Purpose of Buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Construction Permit\n",
"Conceptual Project\n",
"Execution Project\n",
"Completed building project\n",
"Publication Date\n",
"Effective Date\n",
"Law Publication Dates\n",
"Prof. Dr. Zorana Mihajlovic\n",
"Ministry of Construction, Transport and Infrastructure\n",
"The amendment of the Rule on the Content, Mode, and Procedure of Preparation and Means of Control of Technical Documentation According to the Class and Purpose of Buildings\n",
"Publication in the Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Main project for construction permit, conceptual project, and execution project\n",
"Amendment and supplementation regulation to the technical documentation process and control according to the class and purpose of buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Minister of Construction, Transport, and Infrastructure\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Službeni glasnik RS, br. 77/2015\n",
"9.9.2015\n",
"prof. dr Zorana Mihajlović\n",
"4.9.2015\n",
"Ministry of Construction, Transport and Infrastructure\n",
"Regulation on Amendments and Supplements to the Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Main Project\n",
"Law on Planning and Construction\n",
"Regulation on amendments and supplements to the Regulation on the content, method, and procedure for the preparation and manner of controlling technical documentation according to the class and purpose of objects\n",
"Published in Official Gazette of RS, No. 77/2015 on September 9, 2015\n",
"Law on Planning and Construction (\"Official Gazette of RS\", Nos. 72/09, 81/09 - correction, 64/10 - Constitutional Court, 24/11, 121/12, 42/13 - Constitutional Court, 50/13, 98/13 - Constitutional Court)\n",
"Minister, Prof. Dr. Zorana Mihajlovic\n",
"Ministar građevinarstva, saobraćaja i infrastrukture\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Publication in Službeni glasnik RS, No. 77/2015\n",
"Glavni projekat\n",
"Zakon o planiranju i izgradnji\n",
"građevinska dozvola\n",
"idejni projekat\n",
"projekat za izvođenje\n",
"projekat izvedenog objekta\n",
"upotrebna dozvola\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Zakon o planiranju i izgradnji\n",
"\"Službeni glasnik RS\"\n",
"prof. dr Zorana Mihajlovic\n",
"Ministry of Construction, Transport, and Infrastructure\n",
"Rulebook on Amendments and Supplements to the Rulebook on the Content, Manner, and Procedure of Preparation and the Method of Technical Documentation Control According to the Class and Purpose of Buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Construction Permit\n",
"Conceptual Project\n",
"Execution Project\n",
"Completed building project\n",
"Publication Date\n",
"Effective Date\n",
"Law Publication Dates\n",
"Prof. Dr. Zorana Mihajlovic\n",
"Ministry of Construction, Transport and Infrastructure\n",
"The amendment of the Rule on the Content, Mode, and Procedure of Preparation and Means of Control of Technical Documentation According to the Class and Purpose of Buildings\n",
"Publication in the Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Main project for construction permit, conceptual project, and execution project\n",
"Amendment and supplementation regulation to the technical documentation process and control according to the class and purpose of buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Minister of Construction, Transport, and Infrastructure\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Službeni glasnik RS, br. 77/2015\n",
"9.9.2015\n",
"prof. dr Zorana Mihajlović\n",
"4.9.2015\n",
"Ministry of Construction, Transport and Infrastructure\n",
"Regulation on Amendments and Supplements to the Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Main Project\n",
"Law on Planning and Construction\n",
"Regulation on amendments and supplements to the Regulation on the content, method, and procedure for the preparation and manner of controlling technical documentation according to the class and purpose of objects\n",
"Published in Official Gazette of RS, No. 77/2015 on September 9, 2015\n",
"Law on Planning and Construction (\"Official Gazette of RS\", Nos. 72/09, 81/09 - correction, 64/10 - Constitutional Court, 24/11, 121/12, 42/13 - Constitutional Court, 50/13, 98/13 - Constitutional Court)\n",
"Minister, Prof. Dr. Zorana Mihajlovic\n",
"Ministar građevinarstva, saobraćaja i infrastrukture\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Publication in Službeni glasnik RS, No. 77/2015\n",
"Glavni projekat\n",
"Zakon o planiranju i izgradnji\n",
"građevinska dozvola\n",
"idejni projekat\n",
"projekat za izvođenje\n",
"projekat izvedenog objekta\n",
"upotrebna dozvola\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Zakon o planiranju i izgradnji\n",
"\"Službeni glasnik RS\"\n",
"prof. dr Zorana Mihajlovic\n",
"Ministry of Construction, Transport, and Infrastructure\n",
"Rulebook on Amendments and Supplements to the Rulebook on the Content, Manner, and Procedure of Preparation and the Method of Technical Documentation Control According to the Class and Purpose of Buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Construction Permit\n",
"Conceptual Project\n",
"Execution Project\n",
"Completed building project\n",
"Publication Date\n",
"Effective Date\n",
"Law Publication Dates\n",
"Prof. Dr. Zorana Mihajlovic\n",
"Ministry of Construction, Transport and Infrastructure\n",
"The amendment of the Rule on the Content, Mode, and Procedure of Preparation and Means of Control of Technical Documentation According to the Class and Purpose of Buildings\n",
"Publication in the Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Main project for construction permit, conceptual project, and execution project\n",
"Amendment and supplementation regulation to the technical documentation process and control according to the class and purpose of buildings\n",
"Official Gazette of the Republic of Serbia\n",
"Law on Planning and Construction\n",
"Minister of Construction, Transport, and Infrastructure\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Službeni glasnik RS, br. 77/2015\n",
"9.9.2015\n",
"prof. dr Zorana Mihajlović\n",
"4.9.2015\n",
"Ministry of Construction, Transport and Infrastructure\n",
"Regulation on Amendments and Supplements to the Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Regulation on the content, method, and procedure for preparing and controlling technical documentation according to class and purpose of buildings\n",
"Main Project\n",
"Law on Planning and Construction\n",
"Regulation on amendments and supplements to the Regulation on the content, method, and procedure for the preparation and manner of controlling technical documentation according to the class and purpose of objects\n",
"Published in Official Gazette of RS, No. 77/2015 on September 9, 2015\n",
"Law on Planning and Construction (\"Official Gazette of RS\", Nos. 72/09, 81/09 - correction, 64/10 - Constitutional Court, 24/11, 121/12, 42/13 - Constitutional Court, 50/13, 98/13 - Constitutional Court)\n",
"Minister, Prof. Dr. Zorana Mihajlovic\n",
"Ministar građevinarstva, saobraćaja i infrastrukture\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Publication in Službeni glasnik RS, No. 77/2015\n",
"Glavni projekat\n",
"Zakon o planiranju i izgradnji\n",
"građevinska dozvola\n",
"idejni projekat\n",
"projekat za izvođenje\n",
"projekat izvedenog objekta\n",
"upotrebna dozvola\n",
"Pravilnik o izmenama i dopunama Pravilnika o sadržini, načinu i postupku izrade i način vršenja kontrole tehničke dokumentacije prema klasi i nameni objekata\n",
"Zakon o planiranju i izgradnji\n",
"\"Službeni glasnik RS\"\n",
"prof. dr Zorana Mihajlovic\n"
]
}
],
"outputs": [],
"source": [
"from cognee import search\n",
"from cognee.api.v1.search.search import SearchType\n",
"query_params = {\n",
" SearchType.SIMILARITY: {'query': 'your search query here'}\n",
"}\n",

View file

@@ -1,4 +1,4 @@
# Welcome to the cognee Blog
# Welcome to the cognee blog
The goal of this blog is to discuss broader topics around the cognee project, including the motivation behind it, its technical details, and its future.

Binary files not shown: 14 new images added (104-750 KiB each).

View file

@@ -12,7 +12,338 @@ authors:
- tricalt
---
# First post
# Going beyond Langchain + Weaviate and towards a production-ready modern data platform
### Table of Contents
## **1. Introduction: The Current Generative AI Landscape**
### 1.1. A brief overview
Browsing the [largest AI platform directory](https://theresanaiforthat.com/) available at the moment, we can observe around 7,000 new, mostly semi-finished AI projects — projects whose development is fueled by recent improvements in foundation models and open-source community contributions.
Decades of technological advancements have led to small teams being able to do in 2023 what in 2015 required a team of dozens.
Yet, the AI apps currently being pushed out still mostly feel and perform like demos.
It seems it has never been easier to create a startup, build an AI app, go to market… and fail.
The consensus is, nevertheless, that the AI space is *the* place to be in 2023.
> “The AI Engineer [...] will likely be the **highest-demand engineering job of the [coming] decade**.”
>
> — **[Swyx](https://www.latent.space/p/ai-engineer)**
The stellar rise of AI engineering as a profession is, perhaps, signaling the need for a unified solution that is not yet there — a platform that is, in its essence, a Large Language Model (LLM), which could be employed as [a powerful general problem solver](https://lilianweng.github.io/posts/2023-06-23-agent/?fbclid=IwAR1p0W-Mg_4WtjOCeE8E6s7pJZlTDCDLmcXqHYVIrEVisz_D_S8LfN6Vv20).
To address this issue, dlthub and [prometh.ai](http://prometh.ai/) will collaborate on productionizing a common use-case, PDF processing, progressing step by step. We will use LLMs, AI frameworks, and services, refining the code until we attain a clearer understanding of what a modern LLM architecture stack might entail.
You can find the code in the [PromethAI-Memory repository](https://github.com/topoteretes/PromethAI-Memory).
### 1.2. The problem of putting code to production
![infographic (2).png](Going%20beyond%20Langchain%20+%20Weaviate%20and%20towards%20a%20pr%207351d77a1eba40aab4394c24bef3a278/infographic_(2).png)
Despite all the AI startup hype, there's a glaring issue lurking right under the surface: **foundation models do not have production-ready data infrastructure by default**.
Everyone seems to be building simple tools, like “Your Sales Agent” or “Your HR helper,” on top of OpenAI — a so-called “Thin Wrapper” — and selling them as services.
Our intention, however, is not to merely capitalize on this nascent industry — it's to use a new technology to catalyze a true digital paradigm shift — to [paraphrase investor Marc Andreessen](https://www.youtube.com/watch?v=-hxeDjAxvJ8&t=328s&ab_channel=LexFridman), the content of the new medium is the content of the previous medium.
What Andreessen meant by this is that each new medium for sharing information must encapsulate the content of the prior medium. For example, the internet encapsulates all books, movies, pictures, and stories from previous mediums.
Only after a unified AI solution is created will AI agents be able to proactively and competently operate the browsers, apps, and devices we operate by ourselves today.
Intelligent agents in AI are programs capable of [perceiving](https://en.wikipedia.org/wiki/Machine_perception) their environment, acting [autonomously](https://en.wikipedia.org/wiki/Autonomous) to achieve goals, and improving their performance by [learning](https://en.wikipedia.org/wiki/Machine_learning) or acquiring [knowledge](https://en.wikipedia.org/wiki/Knowledge_representation).
The reality is that we now have a set of data platforms and AI agents that are becoming available to the general public, whose content and methods were previously inaccessible to anyone not privy to the tech-heavy languages of data scientists and engineers.
As engineering tools move toward the mainstream, they need to become more intuitive and user-friendly, hiding their complexity behind a set of background solutions.
> *Fundamentally, the issue of “Thin wrappers” is not one of bad products, but one of insufficiently robust data engineering methods, coupled with the general difficulty of writing production-ready code on top of robust data platforms in a new space.*
>
The current lack of production-ready data systems for LLMs and AI Agents opens up a gap we want to fill by introducing robust data engineering practices.
In this series of texts, our aim will thus be to explore what would constitute:
1. Proper data engineering methods for LLMs
2. A production-ready generative AI data platform that unlocks AI assistants/Agent Networks
Each of the coming blog posts will be followed by Python code, to demonstrate the progress made toward building a modern AI data platform, raise important questions, and facilitate an open-source collaboration.
Let's start by setting an attainable goal. As an example, let's conceptualize a production-ready process that can analyze and process hundreds of PDFs for hundreds of users.
<aside>
💡 As a user, I want an AI Data Platform to enable me to extract, organize, and summarize data from PDF invoices so that it's seamlessly updated in the database and available for further processing.
</aside>
Imagine you're a developer, and you've got a stack of digital invoices in PDF format from various vendors. These PDFs are not just simple text files; they contain logos, varying fonts, perhaps some tables, and even handwritten notes or signatures.
Your goal? To extract relevant information, such as vendor names, invoice dates, total amounts, and line-item details, among others.
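To make the goal concrete, here is a minimal sketch of that extraction step in Python, assuming the PDF text has already been pulled out with a parser and that the OpenAI Python client (v1+) is available; the field names and prompt are illustrative, not a fixed schema:
```
# Minimal sketch: ask an LLM to turn raw invoice text into structured fields.
# Assumes the `openai` package (v1+ client) and OPENAI_API_KEY in the environment;
# the field names below are illustrative only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_invoice_fields(invoice_text: str) -> dict:
    prompt = (
        "Extract vendor_name, invoice_date, total_amount and line_items "
        "from the invoice below. Reply with JSON only.\n\n" + invoice_text
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    # The model may still return malformed JSON; production code must validate.
    return json.loads(response.choices[0].message.content)
```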
This task of analyzing PDFs may help us understand and define what a production-ready AI data platform entails. To perform the task, we'll be drawing a parallel between Data Engineering concepts and those from Cognitive Science that tap into our understanding of how human memory works — this should provide the baseline for evaluating the POCs in this post.
We assume that Agent Networks of the future would resemble groups of humans with their own memory and unique contexts, all working and contributing toward a set of common objectives.
In our example of data extraction from PDFs — a modern enterprise may have hundreds of thousands, if not millions, of such documents stored in different places, with many people hired to make sense of them.
This data is considered unstructured — you cannot handle it easily with current data engineering practices and database technology. The task of structuring it is difficult and, to this day, has had to be performed manually.
With the advent of Agent Networks, which mimic human cognitive abilities, we could start realistically structuring this kind of information at scale. As this is still data processing — an engineering task — we need to combine those two approaches.
From an engineering standpoint, the next-generation Data Platform needs to be built with the following in mind:
- We need to give Agents access to the data at scale.
- We need our Agents to operate like human minds, so we need to provide them with tools to execute tasks and various types of memory for reasoning.
- We need to keep the systems under control, meaning that we apply good engineering practices to the whole system.
- We need to be able to test, sandbox, and roll back what Agents do, and we need to observe them and log every action.
In order to conceptualize a model of data structure and relationships that transcends the traditional Data Warehousing approach, we can start perceiving the procedural steps in Agent execution flows as thoughts, interpreting them through the prism of human cognitive processes such as the functioning of our memory system and its components.
Human memory can be divided into several distinct categories:
- **Sensory Memory (SM)** → A very short-term (15-30 s) storage unit receiving information from our senses.
- **Short-Term Memory (STM)** → Short-term memory that processes information and coordinates work based on it.
- **Long-Term Memory (LTM)** → Stores information long-term and retrieves what it needs for daily life.
The general structure of human memory. Note that [Weng](https://lilianweng.github.io/posts/2023-06-23-agent/) doesn't expand on the STM here in the way we did above:
![Untitled](Going%20beyond%20Langchain%20+%20Weaviate%20and%20towards%20a%20pr%207351d77a1eba40aab4394c24bef3a278/Untitled.png)
A broader representation of memory, more relevant for our context and the corresponding data processing, based on the [Atkinson-Shiffrin memory model](https://en.wikipedia.org/wiki/Atkinson%E2%80%93Shiffrin_memory_model), would be:
![Untitled](Going%20beyond%20Langchain%20+%20Weaviate%20and%20towards%20a%20pr%207351d77a1eba40aab4394c24bef3a278/Untitled%201.png)
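To ground the parallel before moving on, here is a toy sketch of the three stores as plain data structures; the names and the 30-second TTL are assumptions for illustration, not cognee internals:
```
# Toy model of the three memory stores described above; illustrative only.
from dataclasses import dataclass, field

@dataclass
class SensoryMemory:
    ttl_seconds: int = 30                                 # input held only briefly
    buffer: list[str] = field(default_factory=list)       # raw sensory input

@dataclass
class ShortTermMemory:
    context: list[str] = field(default_factory=list)      # active session context

@dataclass
class LongTermMemory:
    store: dict[str, str] = field(default_factory=dict)   # durable storage

    def remember(self, key: str, value: str) -> None:
        self.store[key] = value

    def recall(self, key: str) -> str | None:
        return self.store.get(key)
```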
## **2. Level 0: The Current State of Affairs**
To understand the current LLM production systems, how they handle data input and processing, and their evolution, we start at Level 0 — the LLMs and their APIs as they are currently — and progress toward Level 7 — AI Agents and complex AI Data Platforms and Agent Networks of the future.
### 2.1. Developer Intent at Level 0
![infographic (2).png](Going%20beyond%20Langchain%20+%20Weaviate%20and%20towards%20a%20pr%207351d77a1eba40aab4394c24bef3a278/infographic_(2)%201.png)
In order to extract relevant data from PDF documents, as an engineer you would turn to a powerful AI model from OpenAI, Anthropic, or Cohere (Layer 0 in our XYZ stack). Not all of them support this functionality, so you'd use [Bing](https://www.notion.so/Go-to-market-under-construction-04a750a15c264df4be5c6769289b99a2?pvs=21) or a ChatGPT plugin like [AskPDF](https://plugin.askyourpdf.com/), which do.
In order to "extract nuances," you might provide the model with specific examples or more directive prompts. For instance, "Identify the vendor name positioned near the top of the invoice, usually above the billing details."
Next, you'd "prompt it" with various PDFs to see how it reacts. Based on the outputs, you might notice that it misses handwritten dates or gets confused with certain fonts.
This is where "[prompt engineering](https://www.promptingguide.ai/)" comes in. You might adjust your initial prompt to be more specific or provide additional context. Maybe you now say, "Identify the vendor name and, if you find any handwritten text, treat it as the invoice date."
### 2.2 **Toward production code from the chatbot UX** - POC at Level 0
![Untitled](Going%20beyond%20Langchain%20+%20Weaviate%20and%20towards%20a%20pr%207351d77a1eba40aab4394c24bef3a278/Untitled%202.png)
Our POC at this stage consists of simply uploading a PDF and asking questions, refining our prompts until the answers improve. This exercise shows what is possible with current production systems and helps us set a baseline for the solutions to come.
- If your goal is to understand the content of a PDF, Bing and OpenAI will enable you to upload documents and get explanations of their contents
- Uses basic natural language processing (NLP) prompts without any schema on output data
- Typically “forgets” the data after a query — no notion of storage (LTM)
- In a production environment, data loss can have significant consequences. It can lead to operational disruptions, inaccurate analytics, and loss of valuable insights
- There is no possibility to test the behavior of the system
Let's break down the Data Platform components at this stage:
| Memory type | State | Description |
| --- | --- | --- |
| Sensory Memory | Chatbot interface | Can be interpreted in this context as the interface used for the human input |
| STM | The context window of the chatbot/search. In essence stateless | The processing layer and a storage of the session/user context |
| LTM | Not present at this stage | The information storage |
Lacks:
- Decoupling: Separating components to reduce interdependency.
- Portability: Ability to run in different environments.
- Modularity: Breaking down into smaller, independent parts.
- Extendability: Capability to add features or functionality.
**Next Steps**:
1. Implement an LTM component for information retention.
2. Develop an abstraction layer for Sensory Memory input and processing multiple file types.
Addressing these points will enhance flexibility, reusability, and adaptability.
### 2.3 Summary - Ask PDF questions
| Description | Use-Case | Summary | Memory | Maturity | Production readiness |
| --- | --- | --- | --- | --- | --- |
| The Foundational Model | Extract info from your documents | ChatGPT prompt engineering as the only way to optimise outputs | SM and STM are system defined, LTM is not present | Works 15% of the time | Lacks Decoupling, Portability, Modularity, and Extendability |
### 2.4. Addendum - companies in the space: OpenAI, Anthropic, and Cohere
- A brief on each provider, relevant model and its role in the modern data space.
- The list of models and providers in the [space](https://mindsdb.com/blog/navigating-the-llm-landscape-a-comparative-analysis-of-leading-large-language-models)
| Model | Provider | Structured data | Speed | Params | Fine Tunability |
| --- | --- | --- | --- | --- | --- |
| gpt-4 | OpenAI | Yes | ★☆☆ | - | No |
| gpt-3.5-turbo | OpenAI | Yes | ★★☆ | 175B | No |
| gpt-3 | OpenAI | No | ★☆☆ | 175B | No |
| ada, babbage, curie | OpenAI | No | ★★★ | 350M - 7B | Yes |
| claude | Anthropic | No | ★★☆ | 52B | No |
| claude-instant | Anthropic | No | ★★★ | 52B | No |
| command-xlarge | Cohere | No | ★★☆ | 50B | Yes |
| command-medium | Cohere | No | ★★★ | 6B | Yes |
| BERT | Google | No | ★★★ | 345M | Yes |
| T5 | Google | No | ★★☆ | 11B | Yes |
| PaLM | Google | No | ★☆☆ | 540B | Yes |
| LLaMA | Meta AI | Yes | ★★☆ | 65B | Yes |
| CTRL | Salesforce | No | ★★★ | 1.6B | Yes |
| Dolly 2.0 | Databricks | No | ★★☆ | 12B | Yes |
## **3. Level 1: Langchain & Weaviate**
### **3.1. Developer Intent at Level 1: Langchain & Weaviate LLM Wrapper**
![infographic (2).png](Going%20beyond%20Langchain%20+%20Weaviate%20and%20towards%20a%20pr%207351d77a1eba40aab4394c24bef3a278/infographic_(2)%202.png)
This step is basically an upgrade to the current state-of-the-art LLM UX/UI, where we add:
- Permanent LTM memory (data store)
As a developer, I need to answer questions on large PDFs that I can't simply pass to the LLM due to technical limitations. The primary issue being addressed is the constraint on prompt length: the context window covers the prompt and the response combined. With a 4k-token window, if the prompt comprises 3.5k tokens, the response can only be 0.5k tokens long.
- LLM Framework like Langchain to adapt any document type to vector store
Using Langchain provides a neat abstraction for me to get started quickly, connect to VectorDB, and get fast results.
- Some higher level structured storage (dlthub)
![Untitled](Going%20beyond%20Langchain%20+%20Weaviate%20and%20towards%20a%20pr%207351d77a1eba40aab4394c24bef3a278/Untitled%203.png)
### **3.2. Translating Theory into Practice: POC at Level 1**
- LLMs can't process all the data that a large PDF could contain. So, we need a place to store the PDF and a way to retrieve relevant information from it, so it can be passed on to the LLM.
- When trying to build and process documents or user inputs, it's important to store them in a Vector Database to be able to retrieve the information when needed, along with the past context.
- A vector database is the optimal solution because it enables efficient storage, retrieval, and processing of high-dimensional data, making it ideal for applications like document search and user input analysis where context and similarity are important.
- For the past several months, there has been a surge of projects that personalize LLMs by storing user settings and information in a VectorDB so they can be easily retrieved and used as input for the LLM.
This can be done by storing data in the Weaviate Vector Database; then, we can process our PDF.
- We start by converting the PDF and translating it
![carbon (5).png](Going%20beyond%20Langchain%20+%20Weaviate%20and%20towards%20a%20pr%207351d77a1eba40aab4394c24bef3a278/carbon_(5).png)
- In the next step, we store the PDF in Weaviate
![carbon (6).png](Going%20beyond%20Langchain%20+%20Weaviate%20and%20towards%20a%20pr%207351d77a1eba40aab4394c24bef3a278/carbon_(6).png)
- We load the data into a structured database using dlthub (a consolidated sketch of these steps follows below)
![carbon (9).png](Going%20beyond%20Langchain%20+%20Weaviate%20and%20towards%20a%20pr%207351d77a1eba40aab4394c24bef3a278/carbon_(9).png)
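Since the steps above are shown as screenshots, here is a minimal, consolidated sketch of what they might look like in code, assuming the `langchain`, `weaviate-client`, and `dlt` packages of that era; names and parameters are illustrative of the pattern, not the exact code in the images.

```
import dlt
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Weaviate

# Convert the PDF into chunks small enough for the LLM context window
pages = PyPDFLoader("document.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(pages)

# Store the chunks in Weaviate so they can be retrieved by semantic search
vector_store = Weaviate.from_documents(
    chunks,
    OpenAIEmbeddings(),
    weaviate_url="http://localhost:8080",
)

# Load structured metadata about the run into a database via dlt
pipeline = dlt.pipeline(destination="duckdb", dataset_name="documents")
pipeline.run(
    [{"file": "document.pdf", "chunks": len(chunks)}],
    table_name="uploads",
)
```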
The parallel with our memory components becomes clearer at this stage. We have some way to define inputs, which corresponds to SM, while STM and LTM are starting to become two separate, clearly distinguishable entities. It becomes evident that we need to separate LTM data according to the domains it belongs to, but, at this point, a clear structure for how that would work has not yet emerged.
In addition, we can treat GPT as limited working memory and its context size as how much our model can remember during one operation.
It's evident that, if we don't manage the working memory well, we will overload it and fail to retrieve outputs. So, we will need to take a closer look at how humans do the same and how our working memory manages the millions of facts, emotions, and senses swirling around our minds.
Let's break down the Data Platform components at this stage:
| Memory type | State | Description |
| --- | --- | --- |
| Sensory Memory | Command line interface + arguments | Can be interpreted in this context as the arguments provided to the script |
| STM | Partially Vector store, partially working memory | The processing layer and a storage of the session/user context |
| LTM | Vector store | The raw document storage |
**Sensory Memory**
Sensory memory can be seen as an input buffer where information from the environment is stored temporarily. In our case, it's the arguments we give to the command line script.
**STM**
STM is often associated with the concept of "working memory," which holds and manipulates information for short periods.
In our case, it is the time during which the process runs.
**LTM**
LTM can be conceptualized as a database in software systems. Databases store, organize, and retrieve data over extended periods. The information in LTM is organized and indexed, similar to how databases use tables, keys, and indexes to categorize and retrieve data efficiently.
**VectorDB: The LTM Storage of Our AI Data Platform**
Unlike traditional relational databases, which store data in tables, and newer NoSQL databases like MongoDB, which use JSON documents, vector databases specifically store and fetch vector embeddings.
Vector databases are crucial for Large Language Models and other modern, resource-hungry applications. They're designed for handling vector data, commonly used in fields like computer graphics, Machine Learning, and Geographic Information Systems.
Vector databases hinge on vector embeddings. These embeddings, packed with semantic details, help AI systems to understand data and retain long-term memory. They're condensed snapshots of training data and act as filters when processing new data in the inference stage of machine learning.
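To make the idea concrete, here is a minimal sketch of creating and comparing embeddings, assuming the `openai` Python client; the cosine-similarity helper is our own illustration.

```
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text):
    # Turn a piece of text into a high-dimensional vector
    response = client.embeddings.create(
        model="text-embedding-ada-002",
        input=text,
    )
    return np.array(response.data[0].embedding)

a = embed("sled dogs in the Yukon")
b = embed("huskies pulling a sled through the snow")

# Cosine similarity close to 1.0 means the texts are semantically similar
print(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```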
**Problems**:
- Interoperability
- Maintainability
- Fault Tolerance
**Next steps:**
1. Create a standardized data model
2. Dockerize the component
3. Create a FastAPI endpoint
### **3.3. Summary - The thing startup bros pitch to VCs**
| Description | Use-Case | Summary | Knowledge | Maturity | Production readiness |
| --- | --- | --- | --- | --- | --- |
| Interface Endpoint for the Foundational Model | Store data and query it for a particular use-case | Langchain + Weaviate to improve users' conversations + prompt engineering to get better outputs | SM is somewhat modifiable, STM is not clearly defined, LTM is a VectorDB | Works 25% of the time | Lacks Interoperability, Maintainability, Fault Tolerance. Has some: Reusability, Portability, Extendability |
### 3.4. Addendum - Frameworks and Vector DBs in the space: Langchain, Weaviate, and others
- A brief on each provider, relevant model and its role in the modern data space.
- The list of models and providers in the space
| Tool/Service | Tool type | Ease of use | Maturity | Docs | Production readiness |
| --- | --- | --- | --- | --- | --- |
| Langchain | Orchestration framework | ★★☆ | ★☆☆ | ★★☆ | ★☆☆ |
| Weaviate | VectorDB | ★★☆ | ★★☆ | ★★☆ | ★★☆ |
| Pinecone | VectorDB | ★★☆ | ★★☆ | ★★☆ | ★★☆ |
| ChromaDB | VectorDB | ★★☆ | ★☆☆ | ★☆☆ | ★☆☆ |
| Haystack | Orchestration framework | ★★☆ | ★☆☆ | ★★☆ | ★☆☆ |
| Huggingface's New Agent System | Orchestration framework | ★★☆ | ★☆☆ | ★★☆ | ★☆☆ |
| Milvus | VectorDB | ★★☆ | ★☆☆ | ★★☆ | ★☆☆ |
| LlamaIndex (https://gpt-index.readthedocs.io/) | Orchestration framework | ★★☆ | ★☆☆ | ★★☆ | ★☆☆ |
## **Resources**
### **Blog Posts:**
1. **[Large Action Models](https://blog.salesforceairesearch.com/large-action-models/)**
2. **[Making Data Ingestion Production-Ready: A LangChain-Powered Airbyte Destination](https://blog.langchain.dev/making-data-ingestion-production-ready-a-langchain-powered-airbyte-destination/)**
3. **[The Problem with LangChain](https://minimaxir.com/2023/07/langchain-problem/)**
### **Research Papers (ArXiv):**
1. **[HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](https://arxiv.org/pdf/2303.17580.pdf)**
2. **[ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629)**
3. **[Describe, Explain, Plan and Select: Interactive Planning with LLMs Enables Open-World Multi-Task Agents](https://arxiv.org/abs/2302.01560)**
### **Web Comics:**
1. **[xkcd comic](https://xkcd.com/927/)**
### **Reddit Discussions:**
1. **[Reddit Discussion: The Problem with LangChain](https://www.reddit.com/r/MachineLearning/comments/14zlaz6/d_the_problem_with_langchain/)**
### **Developer Blog Posts:**
1. **[Unlocking the Power of Enterprise-Ready LLMS with NeMo](https://developer.nvidia.com/blog/unlocking-the-power-of-enterprise-ready-llms-with-nemo/)**
### **Industry Analysis:**
1. **[Emerging Architectures for LLM Applications](https://a16z.com/2023/06/20/emerging-architectures-for-llm-applications/)**
### **Prompt Engineering:**
1. **[Prompting Guide](https://www.promptingguide.ai/)**
2. **[Tree of Thought Prompting: Walking the Path of Unique Approach to Problem Solving](https://www.promptengineering.org/tree-of-thought-prompting-walking-the-path-of-unique-approach-to-problem-solving/)**
## Conclusion
View file

@@ -1,5 +1,173 @@
---
draft: False
date: 2023-10-05
tags:
- pydantic
- langchain
- llm
- openai
- functions
- pdfs
authors:
- tricalt
---
# Going beyond Langchain + Weaviate: Level 2 towards Production
### 1.1. The problem of putting code to production
*This post is a part of a series of texts aiming to discover and understand patterns and practices that would enable building a production-ready AI data infrastructure. The main focus is on how to evolve data modeling and retrieval in order to enable Large Language Model (LLM) apps and Agents to serve millions of users concurrently.*
*For a broad overview of the problem and our understanding of the current state of the LLM landscape, check out [our previous post](https://www.prometh.ai/promethai-memory-blog-post-one)*
![infographic (2).png](Going%20beyond%20Langchain%20+%20Weaviate%20Level%202%20towards%20%2098ad7b915139478992c4c4386b5e5886/infographic_(2).png)
In this text, we continue our inquiry into what would constitute:
1. Proper data engineering methods for LLMs
2. A production-ready generative AI data platform that unlocks AI assistants/Agent Networks
To explore these points, we here at [prometh.ai](http://prometh.ai/) have partnered with dlthub in order to productionize a common use case — complex PDF processing — progressing level by level.
In the previous text, we wrote a simple script that relies on the Weaviate Vector database to turn unstructured data into structured data and help us make sense of it.
In this post, some of the shortcomings from the previous level will be addressed, including:
1. Containerization
2. Data model
3. Data contract
4. Vector Database retrieval strategies
5. LLM context and task generation
6. Dynamic Agent behavior and Agent tooling
## 3. Level 2: Memory Layer + FastAPI + Langchain + Weaviate
### 3.1. Developer Intent at Level 2
This phase enhances the basic script by incorporating:
- Memory Manager
The memory manager facilitates the execution and processing of VectorDB data by:
1. Uniformly applying CRUD (Create, Read, Update, Delete) operations across various classes
2. Representing different business domains or concepts, and
3. Ensuring they adhere to a common data model, which regulates all data points across the system.
- Context Manager
This central component processes and analyzes data from Vector DB, evaluates its significance, and compares the results with user-defined benchmarks.
The primary objective is to establish a mechanism that encourages in-context learning and empowers the Agent's adaptive understanding.
As an example, let's assume we uploaded the book *The Call of the Wild* by Jack London to our Vector DB semantic layer, to give our LLM a better understanding of the life of sled dogs in the early 1900s.
Asking a question about the contents of the book will yield a straightforward answer, provided that the book contains an explicit answer to our question.
To enable better question answering and access to additional information such as historical context, summaries, and other documents, we need to introduce different memory stores and a set of **attention modulators**, which are meant to manage the prioritization of data retrieved for the answers.
- Task Manager
Utilizing the tools at hand and guided by the user's prompt, the task manager determines a sequence of actions and their execution order.
For example, let's assume that the user asks, "When was Buck (one of the dogs from *The Call of the Wild*) kidnapped?" and wants the answer translated to German.
This query would be broken down by the task manager into a set of atomic tasks that can then be handed over to the Agent.
The ordered task list could be:
1. Retrieve information about the PDF from the database.
2. Translate the information to German.
- The Agent
AI agents can use computers independently. They can browse the web, use apps, read and write files, make credit card payments, and even autonomously execute processes on your personal computer.
In our case, the Agent has only a few tools at its disposal, such as tools to translate text or structure data. Using these tools, it processes and executes tasks in the sequence they are provided by the Task Manager and the Context Manager.
### 3.2 **Toward the memory layer** - POC at level 2
![Untitled](Going%20beyond%20Langchain%20+%20Weaviate%20Level%202%20towards%20%2098ad7b915139478992c4c4386b5e5886/Untitled.png)
At this stage, our proof of concept (POC) allows uploading a PDF document and requesting specific actions on it such as "load to database", "translate to German", or "convert to JSON." Prior task resolutions and potential operations are assessed by the Context Manager and Task Manager services.
The following set of steps explains the workflow of the POC at level 2:
- Initially, we specify the parameters for the document we wish to upload and define our objective in the prompt:
![Untitled](Going%20beyond%20Langchain%20+%20Weaviate%20Level%202%20towards%20%2098ad7b915139478992c4c4386b5e5886/Untitled%201.png)
- The memory manager retrieves the parameters and the attention modulators and creates context based on Episodic and Semantic memory stores (previous runs of the job + raw data):
![carbon (23).png](Going%20beyond%20Langchain%20+%20Weaviate%20Level%202%20towards%20%2098ad7b915139478992c4c4386b5e5886/carbon_(23).png)
- To do this, it starts by filtering user input, in the same way our brains filter important from redundant information. For example, if there are children playing and talking loudly in the background during our Zoom meeting, we can still focus our attention on what the person on the other side is saying.
The same principle is applied here:
![carbon (19).png](Going%20beyond%20Langchain%20+%20Weaviate%20Level%202%20towards%20%2098ad7b915139478992c4c4386b5e5886/carbon_(19).png)
- In the next step, we apply a set of attention modulators to process the data obtained from the Vector Store.
*NOTE: In cognitive science, attention modulators can be thought of as factors or mechanisms that influence the direction and intensity of attention.*
*As we have many memory stores, we need to prioritize the data points that we retrieve via semantic search.*
*Since semantic search is not enough by itself, scoring data points happens via a set of functions that replicate how attention modulators work in cognitive science.*
Initially, we've implemented a few attention modulators that we thought could improve the document retrieval process:
**Frequency**: This refers to how often a specific stimulus or event is encountered. Stimuli that are encountered more frequently are more likely to be attended to or remembered.
**Recency**: This refers to how recently a stimulus or event was encountered. Items or events that occurred more recently are typically easier to recall than those that occurred a long time ago.
We have implemented many more, and you can find them in our
[repository](https://github.com/topoteretes/PromethAI-Memory). More are still needed and contributions are more than welcome.
Let's see the modulators in action:
![carbon (20).png](Going%20beyond%20Langchain%20+%20Weaviate%20Level%202%20towards%20%2098ad7b915139478992c4c4386b5e5886/carbon_(20).png)
In the code above, we fetch the memories from the Semantic Memory bank, where our knowledge of the world is stored (the PDFs). We select the relevant documents using the `handle_modulator` function.
- The `handle_modulator` function, defined below, shows how the scoring of memories happens.
![carbon (21).png](Going%20beyond%20Langchain%20+%20Weaviate%20Level%202%20towards%20%2098ad7b915139478992c4c4386b5e5886/carbon_(21).png)
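Since the implementation is shown as an image, here is a simplified, illustrative sketch of how such scoring might look; the function bodies are our own approximation of the idea, not the repository's exact code.

```
import time

def recency_score(memory, half_life=86400):
    # Memories stored more recently score closer to 1.0
    age = time.time() - memory["created_at"]
    return 0.5 ** (age / half_life)

def frequency_score(memory):
    # Memories retrieved more often score higher, capped at 1.0
    return min(memory["access_count"] / 10.0, 1.0)

def handle_modulator(memory, modulators):
    # Combine the scores of all active attention modulators
    return sum(m(memory) for m in modulators) / len(modulators)

memories = [
    {"text": "Buck is kidnapped from the ranch", "created_at": time.time() - 3600, "access_count": 4},
    {"text": "Chapter two summary", "created_at": time.time() - 864000, "access_count": 1},
]

ranked = sorted(
    memories,
    key=lambda m: handle_modulator(m, [recency_score, frequency_score]),
    reverse=True,
)
```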
We process the retrieved data with OpenAI functions and store the results so that the Task Manager can determine what actions the Agent should take.
The Task Manager then sorts and converts user input into a set of actionable steps based on the tools available.
![carbon (22).png](Going%20beyond%20Langchain%20+%20Weaviate%20Level%202%20towards%20%2098ad7b915139478992c4c4386b5e5886/carbon_(22).png)
Finally, the Agent interprets the context and performs the steps using the tools it has available. We see this as the step where the Agents take over the task, executing it in their own way.
Now, let's look back at what constitutes the Data Platform:
| Memory type | State | Description |
| --- | --- | --- |
| Sensory Memory | API | Can be interpreted in this context as the interface used for the human input |
| STM | Weaviate Class with hardcoded contract | The processing layer and a storage of the session/user context |
| LTM | Weaviate Class with hardcoded contract | The information storage |
Lacks:
- Extendability: Capability to add features or functionality.
- Loading flexibility: Ability to apply different chunking strategies
- Testability: How to test the code and make sure it runs
**Next Steps**:
1. Implement different strategies for vector search
2. Add more tools to process PDFs
3. Add more attention modulators
4. Add a solid test framework
View file

@@ -1,6 +1,184 @@
---
draft: False
date: 2023-10-05
tags:
- pydantic
- langchain
- llm
- openai
- functions
- pdfs
authors:
- tricalt
---
# Going beyond Langchain + Weaviate: Level 3 towards production
### **Preface**
This post is part of a series of texts aiming to explore and understand patterns and practices that enable the construction of a production-ready AI data infrastructure. The main focus of the series is on the modeling and retrieval of evolving data, which would empower Large Language Model (LLM) apps and Agents to serve millions of users concurrently.
For a broad overview of the problem and our understanding of the current state of the LLM landscape, check out our initial post [here](https://www.prometh.ai/promethai-memory-blog-post-one).
In this post, we delve into context enrichment and testing in Retrieval Augmented Generation (RAG) applications.
RAG applications can retrieve relevant information from a knowledge base and generate detailed, context-aware answers to user queries.
As we are trying to improve on the base information LLMs are giving us, we need to be able to retrieve and understand more complex data, which can be stored in various data stores, in many formats, and using different techniques.
All of this leads to a lot of opportunities, but also creates a lot of confusion in generating and using RAG applications and extending the existing context of LLMs with new knowledge.
### **1. Context Enrichment and Testing in RAG Applications**
In navigating the complexities of RAG applications, the first challenge we face is the need for robust testing. Determining whether augmenting an LLM's context with additional information will yield better results is far from straightforward and often relies on subjective assessments.
Imagine, for instance, adding the digital version of the book *The Adventures of Tom Sawyer* to the LLM's database in order to enrich its context and obtain more detailed answers about the book's content for a paper we're writing. To evaluate this enhancement, we need a way to measure the accuracy of the responses before and after adding the book while considering the variations of every adjustable parameter.
### **2. Adjustable Parameters in RAG Applications**
The end-to-end process of enhancing RAG applications involves various adjustable parameters, which offer multiple paths toward achieving similar goals with varying outcomes. These parameters include the following (a sketch of a simple parameter sweep follows the list):
1. Number of documents loaded into memory.
2. Size of each sub-document chunk uploaded.
3. Overlap between documents uploaded.
4. Relationship between documents (Parent-Child, etc.)
5. Type of embedding used for data-to-vector conversion (OpenAI, Cohere, or any other embedding method).
6. Metadata structure for data navigation.
7. Indexes and data structures.
8. Search methods (text, semantic, or fusion search).
9. Output retrieval and scoring methods.
10. Integration of outputs with other data for in-context learning.
11. Structure of the final output.
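To make the parameters above concrete, here is a minimal sketch of such a sweep; the parameter values and the `evaluate` helper are hypothetical placeholders, not the RAG test tool's actual interface.

```
from itertools import product

param_grid = {
    "chunk_size": [256, 512, 1024],
    "chunk_overlap": [0, 64],
    "search_type": ["text", "semantic", "fusion"],
}

def evaluate(chunk_size, chunk_overlap, search_type):
    # Placeholder: chunk and index the documents with these settings,
    # run the test questions, and score answers against a known test set
    return 0.0

for values in product(*param_grid.values()):
    config = dict(zip(param_grid.keys(), values))
    score = evaluate(**config)
    print(config, score)
```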
### **3. The Role of Memory Manager at Level 3**
**Memory Layer + FastAPI + Langchain + Weaviate**
**3.1. Developer Intent at Level 3**
The goal we set for our system in our [initial post](https://www.prometh.ai/promethai-memory-blog-post-one) — processing and creating structured data from PDFs — presented an interesting set of problems to solve. OpenAI functions and [dlthub](https://dlthub.com/) allowed us to accomplish this task relatively quickly.
The real issue arises when we try to scale this task — this is what our [second post](https://www.notion.so/Going-beyond-Langchain-Weaviate-Level-2-towards-Production-98ad7b915139478992c4c4386b5e5886?pvs=21) tried to address. In addition, retrieving meaningful data from the Vector Databases turned out to be much more challenging than initially imagined.
In this post, we'll discuss how we can establish a testing method, improve our ability to retrieve the information we've processed, and make the codebase more robust and production-ready.
We'll primarily focus on the following:
1. Memory Manager
The Memory Manager is a set of functions and tools for creating dynamic memory objects. In our previous blog posts, we explored the application of concepts from cognitive science — Short-Term Memory, Long-Term Memory, and Cognitive Buffer — to Agent Network development.
We might need to add more memory domains to the process, as sticking to just these three can pose limitations. Changes in the codebase now enable real-time creation of dynamic memory objects, which have hierarchical relationships and can relate to each other.
2. RAG test tool
The RAG test tool allows us to control critical parameters for optimizing and testing RAG applications, including chunk size, chunk overlap, search type, metadata structure, and more.
The Memory Manager is a crucial component of any cognitive architecture platform. In our previous posts, we've discussed how to turn unstructured data into structured data, how to relate concepts to each other in the vector store, and which problems can arise when productionizing these systems.
While we've addressed many open questions, many more remain. Based on our surveys and interviews with field experts, applications utilizing Memory components face the following challenges:
1. Inability to reliably link between Memories
Relying solely on semantic search or its derivatives to recognize the similarities between terms like "pair" and "combine" is a step forward. However, actually defining, capturing, and quantifying the relationships between any two objects would aid future memory access.
Solution: Graphs/Traditional DB
2. Failure to structure and organize Memories
We used OpenAI functions to structure and organize different Memory elements and convert them into understandable JSONs. Nevertheless, our surveys indicate that many people struggle with metadata management and the structure of retrievals. Ideally, these aspects should all be managed and organized in one place.
Solution: OpenAI functions/Data contracting/Metadata management
3. Hierarchy, size, and relationships of individual Memory elements
Although semantic search helps us understand the same concepts, we need to add more abstract concepts and ideas and link them. The ultimate goal is to emulate human understanding of the world, which comprises basic concepts that, when combined, create higher complexity objects.
Solution: Graphs/Custom solutions
4. Evaluation possibilities of memory components (can they be distilled to True/False)
Based on the [psycholinguistic theories proposed by Walter Kintsch](https://www.colorado.edu/ics/sites/default/files/attached-files/90-15.pdf), any cognitive system should be able to provide True/False evaluations. Kintsch defines a basic memory component, a proposition, which can be evaluated as True or False and can interlink with other Memory components.
A proposition could be, for example, "The sky is blue," and its evaluation to True/False could lead to actions such as "Do not bring an umbrella" or "Wear a t-shirt."
Potential solution: Particular memory structure
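A minimal sketch of what such a memory structure could look like (our illustration of Kintsch's proposition, not an existing cognee API):

```
from dataclasses import dataclass, field

@dataclass
class Proposition:
    statement: str                   # e.g. "The sky is blue"
    truth_value: bool | None = None  # None until evaluated
    links: list["Proposition"] = field(default_factory=list)

sky = Proposition("The sky is blue", truth_value=True)
# Evaluating the proposition can trigger downstream actions
advice = Proposition("Do not bring an umbrella", links=[sky])
```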
### Testability of Memory components
We should have a reliable method to test Memory components, at scale, for any number of use-cases. We need benchmarks across every level of testing to capture and define predicted behavior.
Suppose we need to test if Memory data from six months ago can be retrieved by our system and measure how much it contributes to a response that spans memories that are years old.
Solution: RAG testing framework
![Dashboard_example.png](Going%20beyond%20Langchain%20+%20Weaviate%20Level%203%20towards%20%20e62946c272bf412584b12fbbf92d35b0/Dashboard_example.png)
Let's look at the RAG testing framework:
It allows you to test and combine all variations of:
1. Number of documents loaded into memory. ✅
2. Size of each sub-document chunk uploaded. ✅
3. Overlap between documents uploaded. ✅
4. Relationship between documents (Parent-Child, etc.) 👷🏻‍♂️
5. Type of embedding used for data-to-vector conversion (OpenAI, Cohere, or any other embedding method). ✅
6. Metadata structure for data navigation. ✅
7. Indexes and data structures. ✅
8. Search methods (text, semantic, or fusion search). ✅
9. Output retrieval and scoring methods. 👷🏻‍♂️
10. Integration of outputs with other data for in-context learning. 👷🏻‍♂️
11. Structure of the final output. ✅
The parameters and results of the tests are stored in a Postgres database and can be visualized using Superset.
To try it, navigate to https://github.com/topoteretes/PromethAI-Memory and:

1. Copy `.env.template` to `.env` and fill in the variables.
2. Set the Environment variable in the `.env` file to "local".
3. Enter the poetry environment: `poetry shell`
4. Launch the Postgres DB: `docker compose up postgres`
5. Launch Superset: `docker compose up superset`
6. Open Superset in your browser at `http://localhost:8088` and add the Postgres datasource to Superset with the following connection string: `postgres://bla:bla@postgres:5432/bubu`
7. Initialize the DB tables: `python scripts/create_database.py`
After that, you can run the RAG test manager from your command line.
```
python rag_test_manager.py \
--file ".data" \
--test_set "example_data/test_set.json" \
--user_id "97980cfea0067" \
--params "chunk_size" "search_type" \
--metadata "example_data/metadata.json" \
--retriever_type "single_document_context"
```
Examples of the metadata structure and the test set are in the `example_data` folder.
View file

@@ -1,7 +1,109 @@
---
draft: False
date: 2023-12-05
tags:
- pydantic
- langchain
- llm
- openai
- functions
- pdfs
authors:
- tricalt
---
# Going beyond Langchain + Weaviate: Level 4 towards production
### **Preface**
This post is part of a series of texts aiming to explore and understand patterns and practices that enable the construction of a production-ready AI data infrastructure. The series mainly focuses on the modeling and retrieval of evolving data, which would empower Large Language Model (LLM) apps and Agents to serve millions of users concurrently.
For a broad overview of the problem and our understanding of the current state of the LLM landscape, check out our initial post [here](https://www.prometh.ai/promethai-memory-blog-post-one).
![infographic (2).png](Topoteretes%20-%20General%20d6a605ab1d8243e489146b82eca935a1/PromethAI%20-%20long-term%20vision%20cf4f1d9b21d04239905d02322f0609c5/Berlin%20meetup%20-%20product%20demo%201283443e7b204c71a3ba8d291cf11f68/Blog%20post%20b6bd59a859fe4b4cb954760c94548ff2/Going%20beyond%20Langchain%20+%20Weaviate%20Level%202%20towards%20%2098ad7b915139478992c4c4386b5e5886/infographic_(2).png)
In this post, we delve into creating an initial data platform that can represent the core component of the future MlOps stack. Building a data platform is a big challenge in itself, and many solutions are available to help automate data tracking, ingestion, data contracting, monitoring, and warehousing.
In the last decade, data analytics and engineering fields have undergone significant transformations, shifting from storing data in centralized, siloed Oracle and SQL Server warehouses to a more agile, modular approach involving real-time data and cloud solutions like BigQuery and Snowflake.
Data processing evolved from an inessential activity, whose value would be inflated to please investors during the startup valuation phase, to a fundamental component of product development.
As we enter a new paradigm of interacting with systems through natural language, it's important to recognize that, while this method promises efficiency, it also comes with the challenges inherent in the imperfections of human language.
Suppose we want to use natural language as a new programming tool. In that case, we will need to either impose more constraints on it or make our systems more flexible so that they can adapt to the equivocal nature of language and information.
Our main goal should be to offer consistency and reproducibility, ideally using language as a basic building block for things to come.
In order to come up with a set of solutions that could enable us to move forward, in this series of posts we call on theoretical models from cognitive science and try to incorporate them into data engineering practices.
## **Level 4: Memory architecture and a first integration with keepi.ai**
In our [initial post](https://www.notion.so/Going-beyond-Langchain-Weaviate-and-towards-a-production-ready-modern-data-platform-7351d77a1eba40aab4394c24bef3a278?pvs=21), we started out by conceptualizing a simple retrieval-augmented generation (RAG) model whose aim was to process and understand PDF documents.
We faced many bottlenecks in scaling these tasks, so in our [second post](https://www.notion.so/Going-beyond-Langchain-Weaviate-Level-2-towards-Production-98ad7b915139478992c4c4386b5e5886?pvs=21), we needed to introduce the concept of memory domains.
In the [next step](https://www.notion.so/Going-beyond-Langchain-Weaviate-Level-3-towards-production-e62946c272bf412584b12fbbf92d35b0?pvs=21), the focus was mainly on understanding what makes a good RAG considering all possible variables.
In this post, we address the fundamental question of the feasibility of extending LLMs beyond the data on which they were trained.
As a Microsoft research team recently [stated](https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/):
- Baseline RAG struggles to connect the dots when answering a question requires providing synthesized insights by traversing disparate pieces of information through their shared attributes.
- Baseline RAG performs poorly when asked to understand summarized semantic concepts holistically over large data collections or even singular large documents.
To fill these gaps in RAG performance, we built a new framework—[cognee](https://www.notion.so/Change-button-Submit-appearance-when-clicked-on-www-prometh-ai-13e59427636940598a0fd3938a2d2253?pvs=21).
Cognee *combines human-inspired cognitive processes with efficient data management practices, infusing data points with more meaningful relationships to represent the (often messy) natural world in code more accurately.*
Our observations indicate that systems, agents, and interactions often falter due to overextension and haste.
However, given the extensive demands and expectations surrounding Large Language Models (LLMs), addressing every aspect — agents, actions, integrations, and schedulers — is beyond the scope of the framework's mission.
We've chosen to prioritize data, recognizing that the crux of many issues has already been addressed within the realm of data engineering.
We aim to establish a framework that includes file storage, tracing, and the development of robust AI memory data pipelines to help us manage and structure data more efficiently through its transformation processes.
Subsequently, our goal will be to devise methods for navigating diverse information segments and determine the most effective application of graph databases to store this data.
Our initial hypothesis—enhancing data management in vector stores through manipulative techniques and attention modulators for input and retrieval—proved less effective than anticipated.
Deconstructing and reorganizing data via graph databases emerged as a superior strategy, allowing us to adapt and repurpose existing tools for our needs more effectively.
| AI Memory type | State in Level 2 | State in Level 4 | Description |
| --- | --- | --- | --- |
| Sensory Memory | API | API | Can be interpreted in this context as the interface used for the human input |
| STM | Weaviate Class with hardcoded contract | Neo4j with a connection to a Weaviate class | The processing layer and a storage of the session/user context |
| LTM | Weaviate Class with hardcoded contract | Neo4j with a connection to a Weaviate class | The information storage |
On Level 4, we describe the integration of keepi.ai, a ChatGPT-powered WhatsApp bot that collects and summarizes information, via API endpoints.
Then, once weve ensured that we have a robust, scalable infrastructure, we deploy cognee to the cloud.
### **Workflow Overview**
![How_cognee_works.png](Going%20beyond%20Langchain%20+%20Weaviate%20Level%204%20towards%20%20fe90ff40e56e44c4a49f1492d360173c/How_cognee_works.png)
Steps:
1. Users submit queries or documents for storage via the [keepi.ai](http://keepi.ai/) WhatsApp bot. This step integrates with the [keepi.ai](http://keepi.ai/) platform, utilizing Cognee endpoints for processing.
2. The Cognee manager handles the incoming request and collaborates with several components:
1. Relational database: Manages state and metadata related to operations.
2. Classifier: Identifies, organizes, and enhances the content.
3. Loader: Archives data in vector databases.
3. The Graph Manager and Vector Store Manager collaboratively process and organize the input into structured nodes. A key function of the system involves breaking down user input into propositions—basic statements retaining factual content. These propositions are interconnected through relationships and cataloged in the Neo4j database by the Graph Manager, associated with specific user nodes. Users are represented by memory nodes that capture various memory levels, some of which link back to the raw data in vector databases.
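A minimal sketch of step 3, assuming the official `neo4j` Python driver; the labels, properties, and credentials are illustrative of the pattern rather than cognee's exact schema.

```
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def store_proposition(tx, user_id, text, vector_id):
    # Attach a proposition to the user's memory node and keep a pointer
    # back to the raw chunk stored in the vector database
    tx.run(
        "MERGE (u:User {id: $user_id}) "
        "CREATE (p:Proposition {text: $text, vector_id: $vector_id}) "
        "MERGE (u)-[:REMEMBERS]->(p)",
        user_id=user_id, text=text, vector_id=vector_id,
    )

with driver.session() as session:
    session.execute_write(store_proposition, "user-1", "Buck was kidnapped", "chunk-42")

driver.close()
```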
### **Whats next**
We're diligently developing our upcoming features, with key objectives including:
1. Numerically defining and organizing the strengths of relationships within graphs.
2. Creating an opinionated, structured data model to facilitate document structuring and data extraction.
3. Converting Cognee into a Python library for easier integration.
4. Extending our database compatibility to support a broader range of systems.
Make sure to explore our [implementation](https://github.com/topoteretes/cognee) on GitHub, and, if you find it valuable, consider starring it to show your support.
View file

@@ -1,11 +1,19 @@
# cognee, Make data processing for LLMs easy
# cognee
## Make data processing for LLMs easy
_Open-source framework for creating knowledge graphs and data models for LLMs._
---
[![Twitter Follow](https://img.shields.io/twitter/follow/tricalt?style=social)](https://twitter.com/tricalt)
[![Downloads](https://img.shields.io/pypi/dm/cognee.svg)](https://pypi.python.org/pypi/instructor)
[![Downloads](https://img.shields.io/pypi/dm/cognee.svg)](https://pypi.python.org/pypi/cognee)
@@ -13,29 +21,97 @@ _Open-source framework for creating knowledge graphs and data models for LLMs._
cognee makes it easy to reliably enrich data for Large Language Models (LLMs) like GPT-3.5, GPT-4, and GPT-4-Vision, and, in the future, open-source models like Mistral/Mixtral via Together, Anyscale, Ollama, and llama-cpp-python.
By leveraging various tools like graph databases, function calling, tool calling, and Pydantic, cognee stands out for its aim to emulate human memory for LLM apps and frameworks.
We leverage Neo4j to do the heavy lifting and dlt to load the data, and we've built a simple, easy-to-use API on top of them to help you manage your context.
## Getting Started
```
pip install -U cognee
```
Set OpenAI API Key as an environment variable
```
import os
# Setting an environment variable
os.environ['OPENAI_API_KEY'] = ''
```
Import cognee and start using it
```
import cognee
from os import path
from cognee import add

data_path = path.abspath(".data")

results = await add(data_path, "izmene")

for result in results:
    print(result)
```
Run the following command to see the graph.
Make sure to add your Graphistry credentials to `.env` beforehand.
```
import cognee
from cognee.utils import render_graph

graph = await cognee.cognify("izmene")
graph_url = await render_graph(graph, graph_type = "networkx")
print(graph_url)
```
Search the graph for a piece of information
```
from cognee import search
from cognee.api.v1.search.search import SearchType
query_params = {
    SearchType.SIMILARITY: {'query': 'your search query here'}
}
out = await search(graph, query_params)
```
[//]: # (You can also check out our [cookbook](./examples/index.md) to learn more about how to use cognee.)
## Why use cognee?
The question of whether to use cognee is fundamentally a question of why you should structure data inputs and outputs for your LLM workflows.
1. **Cost effective** — cognee extends the capabilities of your LLMs without the need for expensive data processing tools.
2. **Self contained** — cognee runs as a library and is simple to use.
3. **Interpretable** — Navigate graphs instead of embeddings to understand your data.
4. **User Guided** — cognee lets you control your input and provide your own Pydantic data models.
## License
This project is licensed under the terms of the MIT License.
View file

@@ -16,6 +16,4 @@ us on
{% include ".icons/fontawesome/brands/github.svg" %}
</span>
<strong>GitHub</strong> </a
>. If you don't like python, too bad. JS, Elixir, and Rust are coming soon. {% endblock %} {% block content %} <h1>Cognee</h1>
>
{% endblock %}
>. If you don't like python, too bad. JS, Elixir, and Rust are coming soon. {% endblock %}
docs/why.md Normal file
View file

@@ -0,0 +1,28 @@
# Why use cognee?
LLMs don't have a semantic layer, and they don't have a way to understand the data they are processing. This is where cognee comes in.
We let you define logical structures for your data and then use these structures to guide the LLMs to process the data in a way that makes sense to you.
??? note "Why use cognee?"
It's hard to answer the question of why to use cognee without answering why you need thin LLM frameworks in the first place:
- **Cost effective** — cognee extends the capabilities of your LLMs without the need for expensive data processing tools.
- **Self contained** — cognee runs as a library and is simple to use
- **Easy to use** — cognee is simple to use and can be used by anyone with a basic understanding of Python
- **Flexible** — cognee can be used to structure data in any way you want. We rely on the work done by Pydantic and are inspired by the Instructor library, which offers a simple way to structure data for LLMs.
## Bring your own data model
If you are building an AI vertical, most of the time you will have a specific data model that you want to use. Cognee lets you bring your own data model and use it to structure your data in a way that makes sense to you.
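For instance, a hypothetical invoice-processing vertical might declare its own model as a plain Pydantic class; the fields below are illustrative.

```
from pydantic import BaseModel

class Invoice(BaseModel):
    vendor_name: str
    invoice_date: str
    total_amount: float
    line_items: list[str] = []
```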
## Data processing
With dlt, you can avoid the boilerplate code that usually comes with data processing. We let you define logical structures for your data and then load them in a deduplicated, incremental, and replayable way, as sketched below.
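A minimal sketch of that loading step, assuming the `dlt` package; the pipeline and table names are illustrative.

```
import dlt

pipeline = dlt.pipeline(
    pipeline_name="memory_pipeline",
    destination="duckdb",
    dataset_name="memory",
)

# Merge semantics keep repeated runs incremental and replayable
info = pipeline.run(
    [{"id": 1, "text": "Buck was kidnapped"}],
    table_name="propositions",
    write_disposition="merge",
    primary_key="id",
)
print(info)
```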
View file

@@ -2,8 +2,8 @@ site_name: cognee
site_author: Vasilije Markovic
site_description: desc
repo_name: cognee
repo_url: https://topoteretes.github.io
site_url: http://topoteretes.github.io
repo_url: https://topoteretes.github.io/cognee
site_url: https://www.cognee.ai
edit_uri: edit/main/docs/
copyright: Copyright &copy; 2024 cognee
theme:
@@ -99,8 +99,8 @@ markdown_extensions:
- pymdownx.magiclink:
normalize_issue_symbols: true
repo_url_shorthand: true
user: jxnl
repo: instructor
user: tricalt
repo: cognee
- pymdownx.mark
- pymdownx.smartsymbols
- pymdownx.snippets:
@@ -119,3 +119,7 @@ markdown_extensions:
nav:
- Introduction:
- Welcome to cognee: 'index.md'
- Blog:
- "blog/index.md"
- Why cognee:
- "why.md"