diff --git a/README.md b/README.md
index 9d454faff..b7b1caaa4 100644
--- a/README.md
+++ b/README.md
@@ -1,20 +1,20 @@
# PromethAI-Memory
-Memory management and testing for the AI Applications and RAGs
-Dynamic Graph Memory Manager + DB + Rag Test Manager
+AI Applications and RAGs - Cognitive Architecture, Testability, Production Ready Apps
-
+
+
-
+
-Open-source framework that manages memory for AI Agents and LLM apps
+Open-source framework for building and testing RAGs and Cognitive Architectures, designed for accuracy, transparency, and control.
-
+
@@ -52,9 +52,9 @@ Dynamic Graph Memory Manager + DB + Rag Test Manager
[//]: # (
)
-Share promethAI Repository
+Share promethAI Repository
-
+
@@ -71,33 +71,40 @@ Dynamic Graph Memory Manager + DB + Rag Test Manager
-
+
+This repo is built in Python to test and evolve RAG architectures inspired by human cognitive processes. It aims to be production-ready and testable, while giving great visibility into how RAG applications are built.
+
+This project is a part of the [PromethAI](https://prometh.ai/) ecosystem.
+
+It runs in iterations, with each iteration building on the previous one.
+
+_Keep Ithaka always in your mind.
+Arriving there is what you’re destined for.
+But don’t hurry the journey at all.
+Better if it lasts for years_
-## Production-ready modern data platform
+### Installation
+To get started with PromethAI Memory, begin with the latest iteration and follow the instructions in the README.md file
-Browsing the database of theresanaiforthat.com, we can observe around [7000 new, mostly semi-finished projects](https://theresanaiforthat.com/) in the field of applied AI.
-It seems it has never been easier to create a startup, build an app, and go to market… and fail.
+### Current Focus
-Decades of technological advancements have led to small teams being able to do in 2023 what in 2015 required a team of dozens.
-Yet, the AI apps currently being pushed out still mostly feel and perform like demos.
-The rise of this new profession is perhaps signaling the need for a solution that is not yet there — a solution that in its essence represents a Large Language Model (LLM) — [a powerful general problem solver](https://lilianweng.github.io/posts/2023-06-23-agent/?fbclid=IwAR1p0W-Mg_4WtjOCeE8E6s7pJZlTDCDLmcXqHYVIrEVisz_D_S8LfN6Vv20) — available in the palm of your hand 24/7/365.
+The RAG test manager can be used via the API or the CLI; examples of both are shown under "Run Level 3" below
-To address this issue, [dlthub](https://dlthub.com/) and [prometh.ai](http://prometh.ai/) will collaborate on a productionizing a common use-case, progressing step by step. We will utilize the LLMs, frameworks, and services, refining the code until we attain a clearer understanding of what a modern LLM architecture stack might entail.
+
-## Read more on our blog post [prometh.ai](http://prometh.ai/promethai-memory-blog-post-on)
+### Project Structure
-
-## Project Structure
-
-### Level 1 - OpenAI functions + Pydantic + DLTHub
+#### Level 1 - OpenAI functions + Pydantic + DLTHub
Scope: Give PDFs to the model and get the output in a structured format
+Blog post: https://prometh.ai/promethai-memory-blog-post-one
We introduce the following concepts:
- Structured output with Pydantic
- CMD script to process custom PDFs
-### Level 2 - Memory Manager + Metadata management
+#### Level 2 - Memory Manager + Metadata management
Scope: Give PDFs to the model and consolidate with the previous user activity and more
+Blog post: https://prometh.ai/promethai-memory-blog-post-two
We introduce the following concepts:
- Long Term Memory -> store and format the data
@@ -106,8 +113,9 @@ We introduce the following concepts:
- Docker
- API
-### Level 3 - Dynamic Graph Memory Manager + DB + Rag Test Manager
+#### Level 3 - Dynamic Graph Memory Manager + DB + Rag Test Manager
Scope: Store the data in N-related stores and test the retrieval with the Rag Test Manager
+Blog post: https://prometh.ai/promethai-memory-blog-post-three
- Dynamic Memory Manager -> store the data in N hierarchical stores
- Auto-generation of tests
- Multiple file formats supported
@@ -116,35 +124,103 @@ Scope: Store the data in N-related stores and test the retrieval with the Rag Te
- API
-## Run the level 3
+### Run Level 3
Make sure you have Docker, Poetry, Python 3.11, and Postgres installed.
-Copy the .env.example to .env and fill the variables
+Copy the .env.template to .env and fill in the variables
-Start the docker:
+There are two ways to run Level 3:
-```docker compose up promethai_mem ```
+#### Docker
+
+Copy the .env.template to .env and fill in the variables
+Set the ENVIRONMENT variable in the .env file to "docker"
+
+
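+A minimal .env for the Docker setup might look like this (the values mirror level_3/.env.template in this repo; replace the OpenAI key with your own):
+
+```
+OPENAI_API_KEY=sk
+WEAVIATE_URL =
+WEAVIATE_API_KEY =
+ENVIRONMENT = docker
+POSTGRES_USER = bla
+POSTGRES_PASSWORD = bla
+POSTGRES_DB = bubu
+POSTGRES_HOST = localhost
+POSTGRES_HOST_DOCKER = postgres
+```
+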
+Launch the Docker image:
+
+```docker compose up promethai_mem ```
+
+Send the request to the API:
+
+```
+curl -X POST -H "Content-Type: application/json" -d '{
+ "payload": {
+ "user_id": "681",
+ "data": [".data/3ZCCCW.pdf"],
+ "test_set": "sample",
+ "params": ["chunk_size"],
+ "metadata": "sample",
+ "retriever_type": "single_document_context"
+ }
+}' http://0.0.0.0:8000/rag-test/rag_test_run
+
+```
+Params:
+
+- data -> list of URLs, or paths to files located in the .data folder (pdf, docx, txt, html)
+- test_set -> sample or manual (a list of questions and answers)
+- metadata -> sample, manual (json), or version (in progress)
+- params -> chunk_size, chunk_overlap, search_type (hybrid, bm25), embeddings
+- retriever_type -> llm_context, single_document_context, multi_document_context, cognitive_architecture (coming soon)
+
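+For example, a manual test set is a JSON list of question-answer pairs, in the same shape as example_data/test_set.json:
+
+```
+[
+    {
+        "question": "Who is the main character in 'The Call of the Wild'?",
+        "answer": "Buck"
+    },
+    {
+        "question": "Who wrote 'The Call of the Wild'?",
+        "answer": "Jack London"
+    }
+]
+```
+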
+Inspect the results in the DB:
+
+``` docker exec -it postgres psql -U bla ```
+
+``` \c bubu ```
+
+``` select * from test_outputs; ```
+
+Or set up Superset to visualize the results:
+
+
+
+#### Poetry environment
+
+
+Copy the .env.template to .env and fill in the variables
+Set the ENVIRONMENT variable in the .env file to "local"
Use the poetry environment:
``` poetry shell ```
+
+Launch the Postgres DB
+
+``` docker compose up postgres ```
+
+Launch Superset
+
+``` docker compose up superset ```
+
+Open Superset in your browser
+
+``` http://localhost:8088 ```
+Add the Postgres datasource to Superset with the following connection string:
+
+``` postgres://bla:bla@postgres:5432/bubu ```
+
Make sure to run the following to initialize the DB tables:
``` python scripts/create_database.py ```
-After that, you can run the RAG test manager.
+After that, you can run the RAG test manager from your command line.
```
python rag_test_manager.py \
- --url "https://www.ibiblio.org/ebooks/London/Call%20of%20Wild.pdf" \
+ --file ".data" \
--test_set "example_data/test_set.json" \
--user_id "666" \
- --metadata "example_data/metadata.json"
+ --metadata "example_data/metadata.json" \
+ --retriever_type "single_document_context"
```
Examples of metadata structure and test set are in the folder "example_data"
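+
+For reference, the metadata file is a JSON object along these lines (abridged from the sample metadata used in earlier iterations of this repo):
+
+```
+{
+    "version": "1.0",
+    "agreement_id": "AG123456",
+    "format": "json",
+    "owner": "John Doe",
+    "license": "MIT"
+}
+```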
+
diff --git a/level_3/.env.template b/level_3/.env.template
index bc9893356..8ca7daed0 100644
--- a/level_3/.env.template
+++ b/level_3/.env.template
@@ -1,3 +1,9 @@
OPENAI_API_KEY=sk
WEAVIATE_URL =
-WEAVIATE_API_KEY =
\ No newline at end of file
+WEAVIATE_API_KEY =
+ENVIRONMENT = docker
+POSTGRES_USER = bla
+POSTGRES_PASSWORD = bla
+POSTGRES_DB = bubu
+POSTGRES_HOST = localhost
+POSTGRES_HOST_DOCKER = postgres
\ No newline at end of file
diff --git a/level_3/Dockerfile b/level_3/Dockerfile
index f2b0a3efa..feb677fe3 100644
--- a/level_3/Dockerfile
+++ b/level_3/Dockerfile
@@ -43,6 +43,7 @@ RUN apt-get update -q && \
WORKDIR /app
COPY . /app
+COPY scripts/ /app
COPY entrypoint.sh /app/entrypoint.sh
COPY scripts/create_database.py /app/create_database.py
RUN chmod +x /app/entrypoint.sh
diff --git a/level_3/api.py b/level_3/api.py
index 9ffae285a..aa200d56a 100644
--- a/level_3/api.py
+++ b/level_3/api.py
@@ -1,9 +1,11 @@
+import json
import logging
import os
+from enum import Enum
from typing import Dict, Any
import uvicorn
-from fastapi import FastAPI
+from fastapi import FastAPI, BackgroundTasks
from fastapi.responses import JSONResponse
from pydantic import BaseModel
@@ -11,6 +13,7 @@ from database.database import AsyncSessionLocal
from database.database_crud import session_scope
from vectorstore_manager import Memory
from dotenv import load_dotenv
+from rag_test_manager import start_test
# Set up logging
logging.basicConfig(
@@ -200,7 +203,100 @@ def memory_factory(memory_type):
memory_list = ["episodic", "buffer", "semantic"]
for memory_type in memory_list:
memory_factory(memory_type)
+class TestSetType(Enum):
+ SAMPLE = "sample"
+ MANUAL = "manual"
+def get_test_set(test_set_type, folder_path="example_data", payload=None):
+ if test_set_type == TestSetType.SAMPLE:
+ file_path = os.path.join(folder_path, "test_set.json")
+ if os.path.isfile(file_path):
+ with open(file_path, "r") as file:
+ return json.load(file)
+ elif test_set_type == TestSetType.MANUAL:
+ # Check if the manual test set is provided in the payload
+ if payload and "manual_test_set" in payload:
+ return payload["manual_test_set"]
+ else:
+            # Manual test set not provided in the payload; loading from a file is not implemented yet
+            pass
+
+ return None
+
+
+class MetadataType(Enum):
+ SAMPLE = "sample"
+ MANUAL = "manual"
+
+def get_metadata(metadata_type, folder_path="example_data", payload=None):
+ if metadata_type == MetadataType.SAMPLE:
+ file_path = os.path.join(folder_path, "metadata.json")
+ if os.path.isfile(file_path):
+ with open(file_path, "r") as file:
+ return json.load(file)
+ elif metadata_type == MetadataType.MANUAL:
+ # Check if the manual metadata is provided in the payload
+ if payload and "manual_metadata" in payload:
+ return payload["manual_metadata"]
+ else:
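+            # Manual metadata not provided in the payload; fall through and return None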
+ pass
+
+ return None
+
+@app.post("/rag-test/rag_test_run", response_model=dict)
+async def rag_test_run(
+ payload: Payload,
+ background_tasks: BackgroundTasks,
+):
+ try:
+ logging.info("Starting RAG Test")
+ decoded_payload = payload.payload
+ test_set_type = TestSetType(decoded_payload['test_set'])
+
+ metadata_type = MetadataType(decoded_payload['metadata'])
+
+ metadata = get_metadata(metadata_type, payload=decoded_payload)
+ if metadata is None:
+ return JSONResponse(content={"response": "Invalid metadata value"}, status_code=400)
+
+ test_set = get_test_set(test_set_type, payload=decoded_payload)
+ if test_set is None:
+ return JSONResponse(content={"response": "Invalid test_set value"}, status_code=400)
+
+        async def run_start_test(data, test_set, user_id, params, metadata, retriever_type):
+            result = await start_test(data=data, test_set=test_set, user_id=user_id, params=params, metadata=metadata, retriever_type=retriever_type)
+
+ logging.info("Retriever DATA type", type(decoded_payload['data']))
+
+ background_tasks.add_task(
+ run_start_test,
+ decoded_payload['data'],
+ test_set,
+ decoded_payload['user_id'],
+ decoded_payload['params'],
+ metadata,
+ decoded_payload['retriever_type']
+ )
+
+ logging.info("Retriever type", decoded_payload['retriever_type'])
+ return JSONResponse(content={"response": "Task has been started"}, status_code=200)
+
+ except Exception as e:
+        return JSONResponse(
+            content={"response": {"error": str(e)}}, status_code=503
+        )
+
+
+# @app.get("/rag-test/{task_id}")
+# async def check_task_status(task_id: int):
+# task_status = task_status_db.get(task_id, "not_found")
+#
+# if task_status == "not_found":
+# return {"status": "Task not found"}
+#
+# return {"status": task_status}
# @app.get("/available-buffer-actions", response_model=dict)
# async def available_buffer_actions(
diff --git a/level_3/create_database.py b/level_3/create_database.py
new file mode 100644
index 000000000..3d8ff426d
--- /dev/null
+++ b/level_3/create_database.py
@@ -0,0 +1,80 @@
+import sys
+import os
+
+# this is needed to import classes from other modules
+script_dir = os.path.dirname(os.path.abspath(__file__))
+# Get the parent directory of your script and add it to sys.path
+parent_dir = os.path.dirname(script_dir)
+sys.path.append(parent_dir)
+
+from database.database import Base, engine
+import models.memory
+import models.metadatas
+import models.operation
+import models.sessions
+import models.testoutput
+import models.testset
+import models.user
+import models.docs
+from sqlalchemy import create_engine, text
+import psycopg2
+from dotenv import load_dotenv
+load_dotenv()
+
+
+def create_admin_engine(username, password, host, database_name):
+ admin_url = f"postgresql://{username}:{password}@{host}:5432/{database_name}"
+ return create_engine(admin_url)
+
+
+def database_exists(username, password, host, db_name):
+ engine = create_admin_engine(username, password, host, db_name)
+ connection = engine.connect()
+    query = text("SELECT 1 FROM pg_database WHERE datname = :db_name")
+    result = connection.execute(query, {"db_name": db_name}).fetchone()
+ connection.close()
+ engine.dispose()
+ return result is not None
+
+
+def create_database(username, password, host, db_name):
+ engine = create_admin_engine(username, password, host, db_name)
+ connection = engine.raw_connection()
+ connection.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
+ cursor = connection.cursor()
+ cursor.execute(f"CREATE DATABASE {db_name}")
+ cursor.close()
+ connection.close()
+ engine.dispose()
+
+
+def create_tables(engine):
+ Base.metadata.create_all(bind=engine)
+
+if __name__ == "__main__":
+ username = os.getenv('POSTGRES_USER')
+ password = os.getenv('POSTGRES_PASSWORD')
+ database_name = os.getenv('POSTGRES_DB')
+ environment = os.environ.get("ENVIRONMENT")
+
+ if environment == "local":
+ host = os.getenv('POSTGRES_HOST')
+
+ elif environment == "docker":
+ host = os.getenv('POSTGRES_HOST_DOCKER')
+ else:
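+        # Default to the Docker host when ENVIRONMENT is unset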
+ host = os.getenv('POSTGRES_HOST_DOCKER')
+
+ engine = create_admin_engine(username, password, host, database_name)
+
+ if not database_exists(username, password, host, database_name):
+ print(f"Database {database_name} does not exist. Creating...")
+ create_database(username, password, host, database_name)
+ print(f"Database {database_name} created successfully.")
+
+ create_tables(engine)
\ No newline at end of file
diff --git a/level_3/database/database.py b/level_3/database/database.py
index 122efb64e..b100f5c8c 100644
--- a/level_3/database/database.py
+++ b/level_3/database/database.py
@@ -24,7 +24,19 @@ RETRY_DELAY = 5
username = os.getenv('POSTGRES_USER')
password = os.getenv('POSTGRES_PASSWORD')
database_name = os.getenv('POSTGRES_DB')
-host = os.getenv('POSTGRES_HOST')
+
+environment = os.environ.get("ENVIRONMENT")
+
+if environment == "local":
+    host = os.getenv('POSTGRES_HOST')
+
+elif environment == "docker":
+    host = os.getenv('POSTGRES_HOST_DOCKER')
+else:
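+    # Default to the Docker host when ENVIRONMENT is unset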
+    host = os.getenv('POSTGRES_HOST_DOCKER')
+
+
# Use the asyncpg driver for async operation
SQLALCHEMY_DATABASE_URL = f"postgresql+asyncpg://{username}:{password}@{host}:5432/{database_name}"
diff --git a/level_3/docker-compose.yml b/level_3/docker-compose.yml
index 6c3d8df28..da446e0a7 100644
--- a/level_3/docker-compose.yml
+++ b/level_3/docker-compose.yml
@@ -20,12 +20,22 @@ services:
context: ./
volumes:
- "./:/app"
+ - ./.data:/app/.data
+
environment:
- HOST=0.0.0.0
profiles: ["exclude-from-up"]
ports:
- 8000:8000
- 443:443
+ depends_on:
+ - postgres
+ deploy:
+ resources:
+ limits:
+ cpus: "4.0"
+ memory: 8GB
+
postgres:
image: postgres
@@ -40,23 +50,23 @@ services:
ports:
- "5432:5432"
- superset:
- platform: linux/amd64
- build:
- context: ./superset
- dockerfile: Dockerfile
- container_name: superset
- environment:
- - ADMIN_USERNAME=admin
- - ADMIN_EMAIL=vasilije@topoteretes.com
- - ADMIN_PASSWORD=admin
- - POSTGRES_USER=bla
- - POSTGRES_PASSWORD=bla
- - POSTGRES_DB=bubu
- networks:
- - promethai_mem_backend
- ports:
- - '8088:8088'
+# superset:
+# platform: linux/amd64
+# build:
+# context: ./superset
+# dockerfile: Dockerfile
+# container_name: superset
+# environment:
+# - ADMIN_USERNAME=admin
+# - ADMIN_EMAIL=vasilije@topoteretes.com
+# - ADMIN_PASSWORD=admin
+# - POSTGRES_USER=bla
+# - POSTGRES_PASSWORD=bla
+# - POSTGRES_DB=bubu
+# networks:
+# - promethai_mem_backend
+# ports:
+# - '8088:8088'
networks:
promethai_mem_backend:
diff --git a/level_3/entrypoint.sh b/level_3/entrypoint.sh
index 5a87898d6..4cf551d01 100755
--- a/level_3/entrypoint.sh
+++ b/level_3/entrypoint.sh
@@ -1,7 +1,20 @@
#!/bin/bash
export ENVIRONMENT
+# Run Python scripts with error handling
+echo "Running fetch_secret.py"
python fetch_secret.py
+if [ $? -ne 0 ]; then
+ echo "Error: fetch_secret.py failed"
+ exit 1
+fi
+
+echo "Running create_database.py"
python create_database.py
+if [ $? -ne 0 ]; then
+ echo "Error: create_database.py failed"
+ exit 1
+fi
# Start Gunicorn
-gunicorn -w 2 -k uvicorn.workers.UvicornWorker -t 120 --bind=0.0.0.0:8000 --bind=0.0.0.0:443 --log-level debug api:app
\ No newline at end of file
+echo "Starting Gunicorn"
+gunicorn -w 3 -k uvicorn.workers.UvicornWorker -t 30000 --bind=0.0.0.0:8000 --bind=0.0.0.0:443 --log-level debug api:app
diff --git a/level_3/models/docs.py b/level_3/models/docs.py
new file mode 100644
index 000000000..38166956b
--- /dev/null
+++ b/level_3/models/docs.py
@@ -0,0 +1,19 @@
+
+from datetime import datetime
+from sqlalchemy import Column, Integer, String, DateTime, ForeignKey
+from sqlalchemy.orm import relationship
+import os
+import sys
+sys.path.append(os.path.dirname(os.path.abspath(__file__)))
+from database.database import Base
+class DocsModel(Base):
+ __tablename__ = 'docs'
+
+ id = Column(String, primary_key=True)
+ operation_id = Column(String, ForeignKey('operations.id'), index=True)
+ doc_name = Column(String, nullable=True)
+ created_at = Column(DateTime, default=datetime.utcnow)
+ updated_at = Column(DateTime, onupdate=datetime.utcnow)
+
+
+ operations = relationship("Operation", back_populates="docs")
\ No newline at end of file
diff --git a/level_3/models/operation.py b/level_3/models/operation.py
index 595745b15..1d06657d9 100644
--- a/level_3/models/operation.py
+++ b/level_3/models/operation.py
@@ -14,6 +14,7 @@ class Operation(Base):
id = Column(String, primary_key=True)
user_id = Column(String, ForeignKey('users.id'), index=True) # Link to User
operation_type = Column(String, nullable=True)
+ operation_status = Column(String, nullable=True)
operation_params = Column(String, nullable=True)
number_of_files = Column(Integer, nullable=True)
test_set_id = Column(String, ForeignKey('test_sets.id'), index=True)
@@ -24,6 +25,7 @@ class Operation(Base):
# Relationships
user = relationship("User", back_populates="operations")
test_set = relationship("TestSet", back_populates="operations")
+ docs = relationship("DocsModel", back_populates="operations")
def __repr__(self):
return f""
diff --git a/level_3/models/testoutput.py b/level_3/models/testoutput.py
index ed83decbe..4731a6e46 100644
--- a/level_3/models/testoutput.py
+++ b/level_3/models/testoutput.py
@@ -36,6 +36,7 @@ class TestOutput(Base):
test_output = Column(String, nullable=True)
test_expected_output = Column(String, nullable=True)
test_context = Column(String, nullable=True)
+ number_of_memories = Column(String, nullable=True)
test_results = Column(JSON, nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
diff --git a/level_3/rag_test_manager.py b/level_3/rag_test_manager.py
index 2d5eb38fd..df0c86bfd 100644
--- a/level_3/rag_test_manager.py
+++ b/level_3/rag_test_manager.py
@@ -30,6 +30,7 @@ from models.testset import TestSet
from models.testoutput import TestOutput
from models.metadatas import MetaDatas
from models.operation import Operation
+from models.docs import DocsModel
load_dotenv()
import ast
@@ -73,15 +74,61 @@ async def retrieve_latest_test_case(session, user_id, memory_id):
f"An error occurred while retrieving the latest test case: {str(e)}"
)
return None
+def get_document_names(doc_input):
+ """
+ Get a list of document names.
+ This function takes doc_input, which can be a folder path, a single document file path, or a document name as a string.
+ It returns a list of document names based on the doc_input.
+
+ Args:
+ doc_input (str): The doc_input can be a folder path, a single document file path, or a document name as a string.
+
+ Returns:
+ list: A list of document names.
+
+ Example usage:
+ - Folder path: get_document_names(".data")
+ - Single document file path: get_document_names(".data/example.pdf")
+ - Document name provided as a string: get_document_names("example.docx")
+
+ """
+ if isinstance(doc_input, list):
+ return doc_input
+ if os.path.isdir(doc_input):
+ # doc_input is a folder
+ folder_path = doc_input
+ document_names = []
+ for filename in os.listdir(folder_path):
+ if os.path.isfile(os.path.join(folder_path, filename)):
+ document_names.append(filename)
+ return document_names
+ elif os.path.isfile(doc_input):
+ # doc_input is a single document file
+ return [os.path.basename(doc_input)]
+ elif isinstance(doc_input, str):
+ # doc_input is a document name provided as a string
+ return [doc_input]
+ else:
+ # doc_input is not valid
+ return []
async def add_entity(session, entity):
async with session_scope(session) as s: # Use your async session_scope
s.add(entity) # No need to commit; session_scope takes care of it
- s.commit()
return "Successfully added entity"
+async def update_entity(session, model, entity_id, new_value):
+ async with session_scope(session) as s:
+ # Retrieve the entity from the database
+ entity = await s.get(model, entity_id)
+ if entity:
+            # Update the operation status; 'updated_at' will be updated automatically
+ entity.operation_status = new_value
+ return "Successfully updated entity"
+ else:
+ return "Entity not found"
async def retrieve_job_by_id(session, user_id, job_id):
try:
@@ -278,10 +325,10 @@ async def eval_test(
test_case = synthetic_test_set
else:
test_case = LLMTestCase(
- query=query,
- output=result_output,
- expected_output=expected_output,
- context=context,
+ input=query,
+ actual_output=result_output,
+ expected_output=[expected_output],
+ context=[context],
)
metric = OverallScoreMetric()
@@ -323,8 +370,22 @@ def count_files_in_data_folder(data_folder_path=".data"):
except Exception as e:
print(f"An error occurred: {str(e)}")
return -1 # Return -1 to indicate an error
+# def data_format_route(data_string: str):
+# @ai_classifier
+# class FormatRoute(Enum):
+# """Represents classifier for the data format"""
+#
+# PDF = "PDF"
+# UNSTRUCTURED_WEB = "UNSTRUCTURED_WEB"
+# GITHUB = "GITHUB"
+# TEXT = "TEXT"
+# CSV = "CSV"
+# WIKIPEDIA = "WIKIPEDIA"
+#
+# return FormatRoute(data_string).name
+
+
def data_format_route(data_string: str):
- @ai_classifier
class FormatRoute(Enum):
"""Represents classifier for the data format"""
@@ -335,20 +396,48 @@ def data_format_route(data_string: str):
CSV = "CSV"
WIKIPEDIA = "WIKIPEDIA"
- return FormatRoute(data_string).name
+ # Convert the input string to lowercase for case-insensitive matching
+ data_string = data_string.lower()
+ # Mapping of keywords to categories
+ keyword_mapping = {
+ "pdf": FormatRoute.PDF,
+ "web": FormatRoute.UNSTRUCTURED_WEB,
+ "github": FormatRoute.GITHUB,
+ "text": FormatRoute.TEXT,
+ "csv": FormatRoute.CSV,
+ "wikipedia": FormatRoute.WIKIPEDIA
+ }
+
+ # Try to match keywords in the data string
+ for keyword, category in keyword_mapping.items():
+ if keyword in data_string:
+ return category.name
+
+ # Return a default category if no match is found
+ return FormatRoute.PDF.name
def data_location_route(data_string: str):
- @ai_classifier
class LocationRoute(Enum):
- """Represents classifier for the data location, if it is device, or database connections string or URL"""
+ """Represents classifier for the data location, if it is device, or database connection string or URL"""
- DEVICE = "file_path_starting_with_.data_or_containing_it"
- # URL = "url starting with http or https"
- DATABASE = "database_name_like_postgres_or_mysql"
+ DEVICE = "DEVICE"
+ URL = "URL"
+ DATABASE = "DATABASE"
- return LocationRoute(data_string).name
+ # Convert the input string to lowercase for case-insensitive matching
+ data_string = data_string.lower()
+    # Check for specific patterns in the data string (URLs first, since a URL may also contain "data")
+    if data_string.startswith("http://") or data_string.startswith("https://"):
+        return LocationRoute.URL.name
+    elif data_string.startswith(".data") or "data" in data_string:
+        return LocationRoute.DEVICE.name
+    elif "postgres" in data_string or "mysql" in data_string:
+        return LocationRoute.DATABASE.name
+
+ # Return a default category if no match is found
+ return "Unknown"
def dynamic_test_manager(context=None):
from deepeval.dataset import create_evaluation_query_answer_pairs
@@ -373,7 +462,6 @@ async def start_test(
test_set=None,
user_id=None,
params=None,
- job_id=None,
metadata=None,
generate_test_set=False,
retriever_type: str = None,
@@ -381,6 +469,7 @@ async def start_test(
"""retriever_type = "llm_context, single_document_context, multi_document_context, "cognitive_architecture""" ""
async with session_scope(session=AsyncSessionLocal()) as session:
+ job_id = ""
job_id = await fetch_job_id(session, user_id=user_id, job_id=job_id)
test_set_id = await fetch_test_set_id(session, user_id=user_id, content=str(test_set))
memory = await Memory.create_memory(
@@ -397,26 +486,26 @@ async def start_test(
if params is None:
data_format = data_format_route(
- data
+ data[0]
) # Assume data_format_route is predefined
logging.info("Data format is %s", data_format)
- data_location = data_location_route(data)
+ data_location = data_location_route(data[0])
logging.info(
"Data location is %s", data_location
) # Assume data_location_route is predefined
test_params = generate_param_variants(included_params=["chunk_size"])
if params:
data_format = data_format_route(
- data
+ data[0]
) # Assume data_format_route is predefined
logging.info("Data format is %s", data_format)
- data_location = data_location_route(data)
+ data_location = data_location_route(data[0])
logging.info(
"Data location is %s", data_location
)
test_params = generate_param_variants(included_params=params)
- print("Here are the test params", str(test_params))
+ logging.info("Here are the test params %s", str(test_params))
loader_settings = {
"format": f"{data_format}",
@@ -433,10 +522,22 @@ async def start_test(
user_id=user_id,
operation_params=str(test_params),
number_of_files=count_files_in_data_folder(),
+ operation_status = "RUNNING",
operation_type=retriever_type,
test_set_id=test_set_id,
),
)
+    doc_names = get_document_names(data)
+    for doc in doc_names:
+        await add_entity(
+            session,
+            DocsModel(
+                id=str(uuid.uuid4()),
+                operation_id=job_id,
+                doc_name=doc
+            )
+        )
async def run_test(
test, loader_settings, metadata, test_id=None, retriever_type=False
@@ -500,11 +601,13 @@ async def start_test(
return retrieve_action["data"]["Get"][test_id][0]["text"]
async def run_eval(test_item, search_result):
+ logging.info("Initiated test set evaluation")
test_eval = await eval_test(
- query=test_item["question"],
- expected_output=test_item["answer"],
+ query=str(test_item["question"]),
+ expected_output=str(test_item["answer"]),
context=str(search_result),
)
+ logging.info("Successfully evaluated test set")
return test_eval
async def run_generate_test_set(test_id):
@@ -521,13 +624,11 @@ async def start_test(
return dynamic_test_manager(retrieve_action)
test_eval_pipeline = []
-
if retriever_type == "llm_context":
for test_qa in test_set:
context = ""
logging.info("Loading and evaluating test set for LLM context")
test_result = await run_eval(test_qa, context)
-
test_eval_pipeline.append(test_result)
elif retriever_type == "single_document_context":
if test_set:
@@ -556,7 +657,12 @@ async def start_test(
results = []
+ logging.info("Validating the retriever type")
+
+ logging.info("Retriever type: %s", retriever_type)
+
if retriever_type == "llm_context":
+ logging.info("Retriever type: llm_context")
test_id, result = await run_test(
test=None,
loader_settings=loader_settings,
@@ -566,6 +672,7 @@ async def start_test(
results.append([result, "No params"])
elif retriever_type == "single_document_context":
+ logging.info("Retriever type: single document context")
for param in test_params:
logging.info("Running for chunk size %s", param["chunk_size"])
test_id, result = await run_test(
@@ -597,94 +704,97 @@ async def start_test(
),
)
+ await update_entity(session, Operation, job_id, "COMPLETED")
+
return results
async def main():
- metadata = {
- "version": "1.0",
- "agreement_id": "AG123456",
- "privacy_policy": "https://example.com/privacy",
- "terms_of_service": "https://example.com/terms",
- "format": "json",
- "schema_version": "1.1",
- "checksum": "a1b2c3d4e5f6",
- "owner": "John Doe",
- "license": "MIT",
- "validity_start": "2023-08-01",
- "validity_end": "2024-07-31",
- }
-
- test_set = [
- {
- "question": "Who is the main character in 'The Call of the Wild'?",
- "answer": "Buck",
- },
- {"question": "Who wrote 'The Call of the Wild'?", "answer": "Jack London"},
- {
- "question": "Where does Buck live at the start of the book?",
- "answer": "In the Santa Clara Valley, at Judge Miller’s place.",
- },
- {
- "question": "Why is Buck kidnapped?",
- "answer": "He is kidnapped to be sold as a sled dog in the Yukon during the Klondike Gold Rush.",
- },
- {
- "question": "How does Buck become the leader of the sled dog team?",
- "answer": "Buck becomes the leader after defeating the original leader, Spitz, in a fight.",
- },
- ]
+ # metadata = {
+ # "version": "1.0",
+ # "agreement_id": "AG123456",
+ # "privacy_policy": "https://example.com/privacy",
+ # "terms_of_service": "https://example.com/terms",
+ # "format": "json",
+ # "schema_version": "1.1",
+ # "checksum": "a1b2c3d4e5f6",
+ # "owner": "John Doe",
+ # "license": "MIT",
+ # "validity_start": "2023-08-01",
+ # "validity_end": "2024-07-31",
+ # }
+ #
+ # test_set = [
+ # {
+ # "question": "Who is the main character in 'The Call of the Wild'?",
+ # "answer": "Buck",
+ # },
+ # {"question": "Who wrote 'The Call of the Wild'?", "answer": "Jack London"},
+ # {
+ # "question": "Where does Buck live at the start of the book?",
+ # "answer": "In the Santa Clara Valley, at Judge Miller’s place.",
+ # },
+ # {
+ # "question": "Why is Buck kidnapped?",
+ # "answer": "He is kidnapped to be sold as a sled dog in the Yukon during the Klondike Gold Rush.",
+ # },
+ # {
+ # "question": "How does Buck become the leader of the sled dog team?",
+ # "answer": "Buck becomes the leader after defeating the original leader, Spitz, in a fight.",
+ # },
+ # ]
# "https://www.ibiblio.org/ebooks/London/Call%20of%20Wild.pdf"
- # http://public-library.uk/ebooks/59/83.pdf
- result = await start_test(
- ".data/3ZCCCW.pdf",
- test_set=test_set,
- user_id="677",
- params=["chunk_size", "search_type"],
- metadata=metadata,
- retriever_type="single_document_context",
- )
- #
- # parser = argparse.ArgumentParser(description="Run tests against a document.")
- # parser.add_argument("--url", required=True, help="URL of the document to test.")
- # parser.add_argument("--test_set", required=True, help="Path to JSON file containing the test set.")
- # parser.add_argument("--user_id", required=True, help="User ID.")
- # parser.add_argument("--params", help="Additional parameters in JSON format.")
- # parser.add_argument("--metadata", required=True, help="Path to JSON file containing metadata.")
- # parser.add_argument("--generate_test_set", required=True, help="Make a test set.")
- # parser.add_argument("--only_llm_context", required=True, help="Do a test only within the existing LLM context")
- # args = parser.parse_args()
- #
- # try:
- # with open(args.test_set, "r") as file:
- # test_set = json.load(file)
- # if not isinstance(test_set, list): # Expecting a list
- # raise TypeError("Parsed test_set JSON is not a list.")
- # except Exception as e:
- # print(f"Error loading test_set: {str(e)}")
- # return
- #
- # try:
- # with open(args.metadata, "r") as file:
- # metadata = json.load(file)
- # if not isinstance(metadata, dict):
- # raise TypeError("Parsed metadata JSON is not a dictionary.")
- # except Exception as e:
- # print(f"Error loading metadata: {str(e)}")
- # return
- #
- # if args.params:
- # try:
- # params = json.loads(args.params)
- # if not isinstance(params, dict):
- # raise TypeError("Parsed params JSON is not a dictionary.")
- # except json.JSONDecodeError as e:
- # print(f"Error parsing params: {str(e)}")
- # return
- # else:
- # params = None
- # #clean up params here
- # await start_test(args.url, test_set, args.user_id, params=None, metadata=metadata)
+ # # http://public-library.uk/ebooks/59/83.pdf
+ # result = await start_test(
+ # [".data/3ZCCCW.pdf"],
+ # test_set=test_set,
+ # user_id="677",
+ # params=["chunk_size", "search_type"],
+ # metadata=metadata,
+ # retriever_type="single_document_context",
+ # )
+
+ parser = argparse.ArgumentParser(description="Run tests against a document.")
+ parser.add_argument("--file", nargs="+", required=True, help="List of file paths to test.")
+ parser.add_argument("--test_set", required=True, help="Path to JSON file containing the test set.")
+ parser.add_argument("--user_id", required=True, help="User ID.")
+ parser.add_argument("--params", help="Additional parameters in JSON format.")
+ parser.add_argument("--metadata", required=True, help="Path to JSON file containing metadata.")
+ # parser.add_argument("--generate_test_set", required=False, help="Make a test set.")
+ parser.add_argument("--retriever_type", required=False, help="Do a test only within the existing LLM context")
+ args = parser.parse_args()
+
+ try:
+ with open(args.test_set, "r") as file:
+ test_set = json.load(file)
+ if not isinstance(test_set, list): # Expecting a list
+ raise TypeError("Parsed test_set JSON is not a list.")
+ except Exception as e:
+ print(f"Error loading test_set: {str(e)}")
+ return
+
+ try:
+ with open(args.metadata, "r") as file:
+ metadata = json.load(file)
+ if not isinstance(metadata, dict):
+ raise TypeError("Parsed metadata JSON is not a dictionary.")
+ except Exception as e:
+ print(f"Error loading metadata: {str(e)}")
+ return
+
+ if args.params:
+ try:
+ params = json.loads(args.params)
+            if not isinstance(params, list):  # Expecting a list of param names, e.g. ["chunk_size"]
+                raise TypeError("Parsed params JSON is not a list.")
+ except json.JSONDecodeError as e:
+ print(f"Error parsing params: {str(e)}")
+ return
+ else:
+ params = None
+ logging.info("Args datatype is", type(args.file))
+    # TODO: clean up params here
+    await start_test(data=args.file, test_set=test_set, user_id=args.user_id, params=params, metadata=metadata, retriever_type=args.retriever_type)
if __name__ == "__main__":
diff --git a/level_3/scripts/create_database.py b/level_3/scripts/create_database.py
index b06021ddd..f6715741b 100644
--- a/level_3/scripts/create_database.py
+++ b/level_3/scripts/create_database.py
@@ -15,6 +15,7 @@ import models.sessions
import models.testoutput
import models.testset
import models.user
+import models.docs
from sqlalchemy import create_engine, text
import psycopg2
from dotenv import load_dotenv
diff --git a/level_3/vectordb/basevectordb.py b/level_3/vectordb/basevectordb.py
index 69ce66784..81f4f7618 100644
--- a/level_3/vectordb/basevectordb.py
+++ b/level_3/vectordb/basevectordb.py
@@ -289,3 +289,9 @@ class BaseMemory:
async def delete_memories(self, namespace:str, params: Optional[str] = None):
return await self.vector_db.delete_memories(namespace,params)
+
+ async def count_memories(self, namespace:str, params: Optional[str] = None):
+ return await self.vector_db.count_memories(namespace,params)
diff --git a/level_3/vectordb/loaders/loaders.py b/level_3/vectordb/loaders/loaders.py
index 8dad3a5d9..acb54147e 100644
--- a/level_3/vectordb/loaders/loaders.py
+++ b/level_3/vectordb/loaders/loaders.py
@@ -6,50 +6,72 @@ sys.path.append(os.path.dirname(os.path.abspath(__file__)))
from vectordb.chunkers.chunkers import chunk_data
from llama_hub.file.base import SimpleDirectoryReader
-
+from langchain.document_loaders import UnstructuredURLLoader
from langchain.document_loaders import DirectoryLoader
-
+import logging
+import os
+from langchain.document_loaders import TextLoader
import requests
async def _document_loader( observation: str, loader_settings: dict):
- # Check the format of the document
+
document_format = loader_settings.get("format", "text")
loader_strategy = loader_settings.get("strategy", "VANILLA")
chunk_size = loader_settings.get("chunk_size", 500)
chunk_overlap = loader_settings.get("chunk_overlap", 20)
- print("LOADER SETTINGS", loader_settings)
+ logging.info("LOADER SETTINGS %s", loader_settings)
- if document_format == "PDF":
- if loader_settings.get("source") == "URL":
- pdf_response = requests.get(loader_settings["path"])
- pdf_stream = BytesIO(pdf_response.content)
- with fitz.open(stream=pdf_stream, filetype='pdf') as doc:
- file_content = ""
- for page in doc:
- file_content += page.get_text()
- pages = chunk_data(chunk_strategy= loader_strategy, source_data=file_content, chunk_size=chunk_size, chunk_overlap=chunk_overlap)
+ list_of_docs = loader_settings["path"]
+ chunked_doc = []
- return pages
- elif loader_settings.get("source") == "DEVICE":
- import os
+ if loader_settings.get("source") == "URL":
+ for file in list_of_docs:
+ if document_format == "PDF":
+ pdf_response = requests.get(file)
+ pdf_stream = BytesIO(pdf_response.content)
+ with fitz.open(stream=pdf_stream, filetype='pdf') as doc:
+ file_content = ""
+ for page in doc:
+ file_content += page.get_text()
+ pages = chunk_data(chunk_strategy=loader_strategy, source_data=file_content, chunk_size=chunk_size,
+ chunk_overlap=chunk_overlap)
- current_directory = os.getcwd()
- import logging
- logging.info("Current Directory: %s", current_directory)
+ chunked_doc.append(pages)
- loader = DirectoryLoader(".data", recursive=True)
+ elif document_format == "TEXT":
+                loader = UnstructuredURLLoader(urls=[file])  # expects a list of URLs
+ file_content = loader.load()
+ pages = chunk_data(chunk_strategy=loader_strategy, source_data=file_content, chunk_size=chunk_size,
+ chunk_overlap=chunk_overlap)
+ chunked_doc.append(pages)
+ elif loader_settings.get("source") == "DEVICE":
+
+ current_directory = os.getcwd()
+ logging.info("Current Directory: %s", current_directory)
+
+ loader = DirectoryLoader(".data", recursive=True)
+ if document_format == "PDF":
# loader = SimpleDirectoryReader(".data", recursive=True, exclude_hidden=True)
documents = loader.load()
logging.info("Documents: %s", documents)
# pages = documents.load_and_split()
- return documents
+ chunked_doc.append(documents)
- elif document_format == "text":
- pages = chunk_data(chunk_strategy= loader_strategy, source_data=observation, chunk_size=chunk_size, chunk_overlap=chunk_overlap)
- return pages
+
+ elif document_format == "TEXT":
+ documents = loader.load()
+ logging.info("Documents: %s", documents)
+ # pages = documents.load_and_split()
+ chunked_doc.append(documents)
else:
- raise ValueError(f"Unsupported document format: {document_format}")
+ raise ValueError(f"Error: ")
+ return chunked_doc
diff --git a/level_3/vectordb/vectordb.py b/level_3/vectordb/vectordb.py
index 55894a405..0cc66066b 100644
--- a/level_3/vectordb/vectordb.py
+++ b/level_3/vectordb/vectordb.py
@@ -153,7 +153,7 @@ class WeaviateVectorDB(VectorDB):
# Assuming _document_loader returns a list of documents
documents = await _document_loader(observation, loader_settings)
logging.info("here are the docs %s", str(documents))
- for doc in documents:
+ for doc in documents[0]:
document_to_load = self._stuct(doc.page_content, params, metadata_schema_class)
logging.info("Loading document with provided loader settings %s", str(document_to_load))
@@ -290,6 +290,30 @@ class WeaviateVectorDB(VectorDB):
},
)
+
+ async def count_memories(self, namespace: str = None, params: dict = None) -> int:
+ """
+ Count memories in a Weaviate database.
+
+ Args:
+ namespace (str, optional): The Weaviate namespace to count memories in. If not provided, uses the default namespace.
+
+ Returns:
+ int: The number of memories in the specified namespace.
+ """
+        if namespace is None:
+            namespace = self.namespace
+
+        client = self.init_weaviate(namespace=namespace)
+
+        try:
+            # The aggregate response nests the count under data -> Aggregate -> <class> -> meta -> count
+            response = client.query.aggregate(namespace).with_meta_count().do()
+            return response["data"]["Aggregate"][namespace][0]["meta"]["count"]
+        except Exception as e:
+            logging.error(f"Error counting memories: {str(e)}")
+            return 0
+
def update_memories(self, observation, namespace: str, params: dict = None):
client = self.init_weaviate(namespace = self.namespace)
diff --git a/level_3/wait-for-it.sh b/level_3/wait-for-it.sh
new file mode 100644
index 000000000..3974640b0
--- /dev/null
+++ b/level_3/wait-for-it.sh
@@ -0,0 +1,182 @@
+#!/usr/bin/env bash
+# Use this script to test if a given TCP host/port are available
+
+WAITFORIT_cmdname=${0##*/}
+
+echoerr() { if [[ $WAITFORIT_QUIET -ne 1 ]]; then echo "$@" 1>&2; fi }
+
+usage()
+{
+ cat << USAGE >&2
+Usage:
+ $WAITFORIT_cmdname host:port [-s] [-t timeout] [-- command args]
+ -h HOST | --host=HOST Host or IP under test
+ -p PORT | --port=PORT TCP port under test
+ Alternatively, you specify the host and port as host:port
+ -s | --strict Only execute subcommand if the test succeeds
+ -q | --quiet Don't output any status messages
+ -t TIMEOUT | --timeout=TIMEOUT
+ Timeout in seconds, zero for no timeout
+ -- COMMAND ARGS Execute command with args after the test finishes
+USAGE
+ exit 1
+}
+
+wait_for()
+{
+ if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then
+ echoerr "$WAITFORIT_cmdname: waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
+ else
+ echoerr "$WAITFORIT_cmdname: waiting for $WAITFORIT_HOST:$WAITFORIT_PORT without a timeout"
+ fi
+ WAITFORIT_start_ts=$(date +%s)
+ while :
+ do
+ if [[ $WAITFORIT_ISBUSY -eq 1 ]]; then
+ nc -z $WAITFORIT_HOST $WAITFORIT_PORT
+ WAITFORIT_result=$?
+ else
+ (echo -n > /dev/tcp/$WAITFORIT_HOST/$WAITFORIT_PORT) >/dev/null 2>&1
+ WAITFORIT_result=$?
+ fi
+ if [[ $WAITFORIT_result -eq 0 ]]; then
+ WAITFORIT_end_ts=$(date +%s)
+ echoerr "$WAITFORIT_cmdname: $WAITFORIT_HOST:$WAITFORIT_PORT is available after $((WAITFORIT_end_ts - WAITFORIT_start_ts)) seconds"
+ break
+ fi
+ sleep 1
+ done
+ return $WAITFORIT_result
+}
+
+wait_for_wrapper()
+{
+ # In order to support SIGINT during timeout: http://unix.stackexchange.com/a/57692
+ if [[ $WAITFORIT_QUIET -eq 1 ]]; then
+ timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --quiet --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &
+ else
+ timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &
+ fi
+ WAITFORIT_PID=$!
+ trap "kill -INT -$WAITFORIT_PID" INT
+ wait $WAITFORIT_PID
+ WAITFORIT_RESULT=$?
+ if [[ $WAITFORIT_RESULT -ne 0 ]]; then
+ echoerr "$WAITFORIT_cmdname: timeout occurred after waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
+ fi
+ return $WAITFORIT_RESULT
+}
+
+# process arguments
+while [[ $# -gt 0 ]]
+do
+ case "$1" in
+ *:* )
+ WAITFORIT_hostport=(${1//:/ })
+ WAITFORIT_HOST=${WAITFORIT_hostport[0]}
+ WAITFORIT_PORT=${WAITFORIT_hostport[1]}
+ shift 1
+ ;;
+ --child)
+ WAITFORIT_CHILD=1
+ shift 1
+ ;;
+ -q | --quiet)
+ WAITFORIT_QUIET=1
+ shift 1
+ ;;
+ -s | --strict)
+ WAITFORIT_STRICT=1
+ shift 1
+ ;;
+ -h)
+ WAITFORIT_HOST="$2"
+ if [[ $WAITFORIT_HOST == "" ]]; then break; fi
+ shift 2
+ ;;
+ --host=*)
+ WAITFORIT_HOST="${1#*=}"
+ shift 1
+ ;;
+ -p)
+ WAITFORIT_PORT="$2"
+ if [[ $WAITFORIT_PORT == "" ]]; then break; fi
+ shift 2
+ ;;
+ --port=*)
+ WAITFORIT_PORT="${1#*=}"
+ shift 1
+ ;;
+ -t)
+ WAITFORIT_TIMEOUT="$2"
+ if [[ $WAITFORIT_TIMEOUT == "" ]]; then break; fi
+ shift 2
+ ;;
+ --timeout=*)
+ WAITFORIT_TIMEOUT="${1#*=}"
+ shift 1
+ ;;
+ --)
+ shift
+ WAITFORIT_CLI=("$@")
+ break
+ ;;
+ --help)
+ usage
+ ;;
+ *)
+ echoerr "Unknown argument: $1"
+ usage
+ ;;
+ esac
+done
+
+if [[ "$WAITFORIT_HOST" == "" || "$WAITFORIT_PORT" == "" ]]; then
+ echoerr "Error: you need to provide a host and port to test."
+ usage
+fi
+
+WAITFORIT_TIMEOUT=${WAITFORIT_TIMEOUT:-15}
+WAITFORIT_STRICT=${WAITFORIT_STRICT:-0}
+WAITFORIT_CHILD=${WAITFORIT_CHILD:-0}
+WAITFORIT_QUIET=${WAITFORIT_QUIET:-0}
+
+# Check to see if timeout is from busybox?
+WAITFORIT_TIMEOUT_PATH=$(type -p timeout)
+WAITFORIT_TIMEOUT_PATH=$(realpath $WAITFORIT_TIMEOUT_PATH 2>/dev/null || readlink -f $WAITFORIT_TIMEOUT_PATH)
+
+WAITFORIT_BUSYTIMEFLAG=""
+if [[ $WAITFORIT_TIMEOUT_PATH =~ "busybox" ]]; then
+ WAITFORIT_ISBUSY=1
+ # Check if busybox timeout uses -t flag
+ # (recent Alpine versions don't support -t anymore)
+ if timeout &>/dev/stdout | grep -q -e '-t '; then
+ WAITFORIT_BUSYTIMEFLAG="-t"
+ fi
+else
+ WAITFORIT_ISBUSY=0
+fi
+
+if [[ $WAITFORIT_CHILD -gt 0 ]]; then
+ wait_for
+ WAITFORIT_RESULT=$?
+ exit $WAITFORIT_RESULT
+else
+ if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then
+ wait_for_wrapper
+ WAITFORIT_RESULT=$?
+ else
+ wait_for
+ WAITFORIT_RESULT=$?
+ fi
+fi
+
+if [[ $WAITFORIT_CLI != "" ]]; then
+ if [[ $WAITFORIT_RESULT -ne 0 && $WAITFORIT_STRICT -eq 1 ]]; then
+ echoerr "$WAITFORIT_cmdname: strict mode, refusing to execute subprocess"
+ exit $WAITFORIT_RESULT
+ fi
+ exec "${WAITFORIT_CLI[@]}"
+else
+ exit $WAITFORIT_RESULT
+fi
\ No newline at end of file