
PromethAI-Memory

Memory management and testing for AI applications and RAGs

promethAI logo

Open-source framework that manages memory for AI Agents and LLM apps



Infographic Image

Production-ready modern data platform

Browsing the database of theresanaiforthat.com, we can observe around 7000 new, mostly semi-finished projects in the field of applied AI. It seems it has never been easier to create a startup, build an app, and go to market… and fail.

Decades of technological advancement have made it possible for small teams in 2023 to do what in 2015 required a team of dozens. Yet the AI apps currently being pushed out still mostly feel and perform like demos. The rise of this new profession is perhaps signaling the need for a solution that is not yet there: a Large Language Model (LLM), a powerful general problem solver, available in the palm of your hand 24/7/365.

To address this issue, dlthub and prometh.ai will collaborate on productionizing a common use case, progressing step by step. We will utilize LLMs, frameworks, and services, refining the code until we attain a clearer understanding of what a modern LLM architecture stack might entail.

Read more in our blog post at prometh.ai

Project Structure

Level 1 - OpenAI functions + Pydantic + DLTHub

Scope: Give PDFs to the model and get the output in a structured format. We introduce the following concepts (a minimal sketch follows the list):

  • Structured output with Pydantic
  • CMD script to process custom PDFs
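
As a rough illustration of the Level 1 flow, the sketch below defines a Pydantic schema and asks the model to fill it via function calling. It assumes the openai (>=1.0) Python SDK and Pydantic v2; the model name, schema, and function name are illustrative stand-ins, not the project's actual code.

    # Minimal sketch: structured extraction from PDF text via a Pydantic schema
    # and OpenAI function calling. Schema and model name are illustrative.
    from openai import OpenAI
    from pydantic import BaseModel

    class DocumentSummary(BaseModel):
        title: str
        author: str
        key_points: list[str]

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract(pdf_text: str) -> DocumentSummary:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": f"Summarize this document:\n{pdf_text}"}],
            tools=[{
                "type": "function",
                "function": {
                    "name": "record_summary",
                    "description": "Record a structured summary of the document.",
                    "parameters": DocumentSummary.model_json_schema(),
                },
            }],
            tool_choice={"type": "function", "function": {"name": "record_summary"}},
        )
        # The model is forced to call the function, so the arguments are a JSON
        # string matching the schema; Pydantic validates it into a typed object.
        arguments = response.choices[0].message.tool_calls[0].function.arguments
        return DocumentSummary.model_validate_json(arguments)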

Level 2 - Memory Manager + Metadata management

Scope: Give PDFs to the model and consolidate them with previous user activity. We introduce the following concepts (a minimal sketch follows the list):

  • Long Term Memory -> store and format the data
  • Episodic Buffer -> isolate the working memory
  • Attention Modulators -> improve semantic search
  • Docker
  • API
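
The sketch below is a hypothetical outline of how the Level 2 components could fit together: long-term memory as a store, an episodic buffer as isolated working memory, and attention modulators as scoring functions over retrieval candidates. Class and function names are illustrative assumptions, not the project's actual API.

    # Hypothetical sketch of the Level 2 memory components; names are
    # illustrative, not the project's actual classes.
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class LongTermMemory:
        """Stores and formats documents for later retrieval."""
        documents: list[dict] = field(default_factory=list)

        def store(self, text: str, metadata: dict) -> None:
            self.documents.append({"text": text, "metadata": metadata})

    @dataclass
    class EpisodicBuffer:
        """Isolates the working memory for the current task."""
        items: list[str] = field(default_factory=list)

        def add(self, item: str) -> None:
            self.items.append(item)

    def recency_modulator(candidate: dict) -> float:
        """Example attention modulator: favor more recently stored items."""
        return float(candidate["metadata"].get("timestamp", 0))

    class MemoryManager:
        """Consolidates new documents with previous user activity."""
        def __init__(self, modulators: list[Callable[[dict], float]]):
            self.long_term = LongTermMemory()
            self.buffer = EpisodicBuffer()
            self.modulators = modulators

        def retrieve(self, query: str, k: int = 3) -> list[dict]:
            # Naive stand-in for semantic search: rank stored documents by the
            # combined score of the attention modulators.
            scored = sorted(
                self.long_term.documents,
                key=lambda d: sum(m(d) for m in self.modulators),
                reverse=True,
            )
            return scored[:k]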

Level 3 - Dynamic Memory Manager + DB + RAG Test Manager

Scope: Store the data in N stores and test the retrieval with the RAG Test Manager. We introduce the following concepts (a minimal sketch follows the list):

  • Dynamic Memory Manager -> store the data in N stores
  • Auto-generation of tests
  • Multiple file formats supported
  • Postgres DB to manage state
  • Docker
  • API
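
As a hedged sketch of the Level 3 idea, the snippet below fans writes and queries out to N stores behind one interface, which is the kind of output a RAG test run would compare against a test set. The store protocol and names are assumptions for illustration, not the repository's actual classes.

    # Hypothetical sketch of a dynamic memory manager over N stores.
    from typing import Protocol

    class VectorStore(Protocol):
        def add(self, text: str, metadata: dict) -> None: ...
        def search(self, query: str, k: int) -> list[dict]: ...

    class DynamicMemoryManager:
        def __init__(self, stores: dict[str, VectorStore]):
            self.stores = stores  # e.g. {"semantic": ..., "episodic": ...}

        def add(self, store_name: str, text: str, metadata: dict) -> None:
            self.stores[store_name].add(text, metadata)

        def search_all(self, query: str, k: int = 3) -> dict[str, list[dict]]:
            # Query every registered store and return results keyed by store name.
            return {name: s.search(query, k) for name, s in self.stores.items()}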

Run Level 3

Start the service with Docker Compose:

    docker compose up promethai_mem

Enter the Poetry environment:

    poetry shell

Make sure to create the database first:

    python scripts/create_database.py

After that, you can run:

    python rag_test_manager.py \
    --url "https://www.ibiblio.org/ebooks/London/Call%20of%20Wild.pdf" \
    --test_set "example_data/test_set.json" \
    --user_id "666" \
    --metadata "example_data/metadata.json"
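
The actual schemas of example_data/test_set.json and example_data/metadata.json are defined in the repository; purely as a hypothetical illustration, a test set of question/answer pairs could be prepared like this:

    # Purely hypothetical illustration of preparing a RAG test set; the real
    # schema is defined by example_data/test_set.json in the repository.
    import json

    test_set = [
        {"question": "Who owns Buck at the start of The Call of the Wild?",
         "answer": "Judge Miller"},
    ]

    with open("my_test_set.json", "w") as f:
        json.dump(test_set, f, indent=2)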