Install with Docker
There are two different Docker Compose files. They deploy the same applications and containers locally, but to different environments.
docker-compose.yml is an OpenRAG deployment with GPU support for accelerated AI processing.
- Create an OpenAI API key. This key is required to start OpenRAG, but you can choose a different model provider during Application Onboarding.
- Optional: GPU support requires an NVIDIA GPU with CUDA support and compatible NVIDIA drivers installed on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
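To decide between the GPU and CPU-only Compose files from a script, you can probe for a working NVIDIA driver. The following is a minimal sketch; the `detect_accel` helper is illustrative and not part of OpenRAG:

```shell
# Illustrative helper (not part of OpenRAG): report whether this host can
# run the GPU-enabled Compose file.
detect_accel() {
  if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
    echo "gpu"   # NVIDIA driver installed and responding
  else
    echo "cpu"   # no usable NVIDIA GPU; use the CPU-only deployment
  fi
}

echo "Detected deployment type: $(detect_accel)"
```

The double check (`command -v` plus an actual `nvidia-smi` run) guards against hosts where the binary exists but the driver is not loaded.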
Install OpenRAG with Docker Compose
To install OpenRAG with Docker Compose, do the following:
Both Docker deployments depend on docling serve running on port 5001. When it is running, the status output looks like the following:
Status: running
Endpoint: http://127.0.0.1:5001
Docs: http://127.0.0.1:5001/docs
PID: 27746
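Scripts that automate startup may want to poll the docling endpoint before continuing, rather than racing the server. A hedged sketch, assuming `curl` is available; `wait_for_url` is an illustrative helper, not an OpenRAG command:

```shell
# Illustrative helper: retry a URL until it responds or attempts run out.
wait_for_url() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0   # endpoint answered
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1       # endpoint never came up
}

# Example: block until the docling server from the status output responds.
# wait_for_url "http://127.0.0.1:5001/docs" && echo "docling is up"
```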
Deploy OpenRAG locally with Docker Compose based on your deployment type.
For GPU-enabled systems, run the following commands:
docker compose build
docker compose up -d
For environments without GPU support, run the equivalent commands with the CPU-only Compose file.
To stop the docling server, run:
uv run python scripts/docling_ctl.py stop
Application onboarding
The first time you start OpenRAG, whether using the TUI or a .env file, you must complete application onboarding. Values from onboarding can be changed later in the OpenRAG Settings page.
- OpenAI
- IBM watsonx.ai
- Ollama
- Enable Get API key from environment variable to automatically enter your key from the TUI-generated .env file. Alternatively, paste an OpenAI API key into the field.
- Under Advanced settings, select your Embedding Model and Language Model.
- To load 2 sample PDFs, enable Sample dataset. This is recommended, but not required.
Documents stored in the ./documents directory will persist, since the directory is mounted from your local machine.
Remove all containers and data (destructive)
Completely remove your OpenRAG installation and delete all data. This deletes all of your data, including OpenSearch data, uploaded documents, and authentication.
docker compose down --volumes --remove-orphans --rmi local
docker system prune -f
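Because this cleanup is irreversible, wrapping the two commands in a script with a dry-run guard can prevent accidents. A sketch under the assumption that a `DRY_RUN` environment variable (not an OpenRAG feature) gates execution:

```shell
# Illustrative wrapper around the destructive cleanup commands above.
# Set DRY_RUN=1 to print the commands instead of running them.
teardown() {
  for cmd in \
      "docker compose down --volumes --remove-orphans --rmi local" \
      "docker system prune -f"; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "would run: $cmd"
    else
      $cmd
    fi
  done
}

DRY_RUN=1 teardown
```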
Terminal User Interface (TUI) commands
The OpenRAG Terminal User Interface (TUI) allows you to set up, configure, and monitor your OpenRAG deployment directly from the terminal, on any operating system.
Instead of starting OpenRAG using Docker commands and manually editing values in the .env file, the TUI walks you through the setup. It prompts for variables where required, creates a .env file for you, and then starts OpenRAG.
Once OpenRAG is running, use the TUI to monitor your application, control your containers, and retrieve logs.
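The TUI-generated .env file is a plain key-value file. The fragment below is illustrative only: LANGFLOW_PUBLIC_URL and WEBHOOK_BASE_URL appear elsewhere in this guide, but the other variable name and all of the values are placeholder assumptions; the TUI prompts for the real ones.

```shell
# Illustrative .env fragment -- names like OPENAI_API_KEY are assumptions,
# and every value is a placeholder supplied here only to show the format.
OPENAI_API_KEY=sk-...your-key-here...
LANGFLOW_PUBLIC_URL=http://localhost:7860
WEBHOOK_BASE_URL=http://localhost:8000
```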
What is OpenRAG?
OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers.
OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:
Add your client and secret values for Google or Microsoft OAuth. These values can be found with your OAuth provider. For more information, see the Google OAuth client or Microsoft Graph OAuth client documentation.
The OpenRAG TUI presents redirect URIs for your OAuth app.
When startup is complete, the TUI displays the following:
To open the OpenRAG application, click Open App, press 6, or navigate to http://localhost:3000. You are presented with your provider's OAuth sign-in screen. After sign-in, you are redirected to the redirect URI.
Two additional variables are available for Advanced Setup:
The LANGFLOW_PUBLIC_URL controls where the Langflow web interface can be accessed. This is where users interact with their flows in a browser.
The WEBHOOK_BASE_URL controls where the endpoint for /connectors/CONNECTOR_TYPE/webhook will be available. This connection enables real-time document synchronization with external services.
Supported webhook endpoints:
- Google Drive: /connectors/google_drive/webhook
- OneDrive: /connectors/onedrive/webhook
- SharePoint: /connectors/sharepoint/webhook
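Each full webhook URL is just WEBHOOK_BASE_URL plus the connector path. A minimal sketch of deriving them; the default base URL is a placeholder and `webhook_url` is an illustrative helper:

```shell
# Illustrative: build the webhook URL for each supported connector.
WEBHOOK_BASE_URL="${WEBHOOK_BASE_URL:-http://localhost:8000}"  # placeholder default

webhook_url() {
  echo "${WEBHOOK_BASE_URL}/connectors/$1/webhook"
}

for connector in google_drive onedrive sharepoint; do
  webhook_url "$connector"
done
```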
Continue with Application Onboarding.
- OpenAI
- IBM watsonx.ai
- Ollama
- Enable Get API key from environment variable to automatically enter your key from the TUI-generated .env file. Alternatively, paste an OpenAI API key into the field.
- Under Advanced settings, select your Embedding Model and Language Model.
- To load 2 sample PDFs, enable Sample dataset. This is recommended, but not required.
- Click Complete.
- Continue with the Quickstart.
OpenRAG retrieves the available models from your Ollama server.
Install OpenRAG
Install the OpenRAG Python wheel, and then run the OpenRAG Terminal User Interface (TUI) to start your OpenRAG deployment with a guided setup process.
If you prefer running Docker commands and manually editing .env files, see Install with Docker.
Prerequisites
When startup is complete, the TUI displays the following:
Paste your OpenAI API key in the OpenAI API key field.
Application onboarding
The first time you start OpenRAG, whether using the TUI or a .env file, you must complete application onboarding.
Values from onboarding can be changed later in the OpenRAG Settings page.
To load and process a directory from the mapped location, click Add Knowledge, and then click Process Folder. The files are loaded into your OpenSearch database, and appear in the Knowledge page.
Ingest files through OAuth connectors
OpenRAG supports Google Drive, OneDrive, and SharePoint as OAuth connectors for seamless document synchronization.
OAuth integration allows individual users to connect their personal cloud storage accounts to OpenRAG. Each user must separately authorize OpenRAG to access their own cloud storage files. When a user connects a cloud service, they are redirected to authenticate with that service provider and grant OpenRAG permission to sync documents from their personal cloud storage.
Before users can connect their cloud storage accounts, you must configure OAuth credentials in OpenRAG. This requires registering OpenRAG as an OAuth application with a cloud provider and obtaining client ID and secret keys for each service you want to support.
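Once you have a client ID and secret from a provider, they are supplied to OpenRAG as configuration. The fragment below is purely illustrative: these variable names are assumptions, not documented OpenRAG settings, so use the names the TUI prompts you for.

```shell
# Hypothetical variable names -- shown only to illustrate the shape of the
# credentials each provider issues; OpenRAG's actual names may differ.
GOOGLE_OAUTH_CLIENT_ID=your-google-client-id
GOOGLE_OAUTH_CLIENT_SECRET=your-google-client-secret
MICROSOFT_OAUTH_CLIENT_ID=your-microsoft-client-id
MICROSOFT_OAUTH_CLIENT_SECRET=your-microsoft-client-secret
```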
To add an OAuth connector to OpenRAG, do the following:
Quickstart
Get started with OpenRAG by loading your knowledge, swapping out your language model, and then chatting with the OpenRAG API.
Prerequisites
- Install and start OpenRAG
If you aren't getting the results you need, you can further tune the knowledge ingestion settings.
To edit the Agent's behavior, click Edit in Langflow.
The following is an example of a response from running the Simple Agent flow:
- The Langflow Quickstart extends this example by extracting fields from the response.
- Get started with the Langflow API
Swap out the language model to modify agent behavior
To modify the knowledge ingestion or Agent behavior, click Settings.
In this example, you'll try a different LLM to demonstrate how the Agent's response changes.