Install OpenRAG with TUI
Install OpenRAG, and then run the OpenRAG Terminal User Interface (TUI) to start your OpenRAG deployment with a guided setup process.
The OpenRAG Terminal User Interface (TUI) allows you to set up, configure, and monitor your OpenRAG deployment directly from the terminal.
Instead of starting OpenRAG using Docker commands and manually editing values in the .env file, the TUI walks you through the setup. It prompts for variables where required, creates a .env file for you, and then starts OpenRAG.
Once OpenRAG is running, use the TUI to monitor your application, control your containers, and retrieve logs.
If you prefer running Podman or Docker containers and manually editing .env files, see Install OpenRAG Containers.
Prerequisites
- Install Python version 3.10 to 3.13.
- Install uv.
- Install Podman (recommended) or Docker.
- Install Docker Compose. If you use Podman, install podman-compose or alias Docker Compose commands to Podman commands.
- Create an OpenAI API key. This key is required to start OpenRAG, but you can choose a different model provider during Application Onboarding.
- Optional: For GPU support, the OpenRAG host machine needs an NVIDIA GPU, CUDA support, and compatible NVIDIA drivers. If your machine doesn't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
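If you use Podman, one way to satisfy the Docker Compose prerequisite is to alias the Docker commands in your shell profile. These aliases are an example, not a requirement, and the second alias assumes podman-compose is installed:

```shell
# Example shell-profile aliases (~/.bashrc or ~/.zshrc) so Docker commands run Podman.
# Assumes podman and podman-compose are installed on the host.
alias docker=podman
alias docker-compose=podman-compose
```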
Install OpenRAG
To use OpenRAG on Windows, use WSL (Windows Subsystem for Linux).
To set up a project and install OpenRAG as a dependency, do the following:

1. Create a new project with a virtual environment using `uv init`:

   ```shell
   uv init YOUR_PROJECT_NAME
   cd YOUR_PROJECT_NAME
   ```

   The `(venv)` prompt doesn't change, but `uv` commands automatically use the project's virtual environment. For more information on virtual environments, see the uv documentation.

2. Add OpenRAG to your project:

   ```shell
   uv add openrag
   ```

   To add a specific version of OpenRAG:

   ```shell
   uv add openrag==0.1.25
   ```

3. Start the OpenRAG TUI:

   ```shell
   uv run openrag
   ```

Install a local wheel
If you downloaded the OpenRAG wheel to your local machine, follow these steps:

1. Add the wheel to your project's virtual environment:

   ```shell
   uv add PATH/TO/openrag-VERSION-py3-none-any.whl
   ```

   Replace `PATH/TO/` and `VERSION` with the path and version of your downloaded OpenRAG `.whl` file. For example, if your `.whl` file is in the `~/Downloads` directory:

   ```shell
   uv add ~/Downloads/openrag-0.1.8-py3-none-any.whl
   ```

2. Run OpenRAG:

   ```shell
   uv run openrag
   ```

3. Continue with Set up OpenRAG with the TUI.
Set up OpenRAG with the TUI
The TUI creates a .env file in your OpenRAG root directory and starts OpenRAG.
If the TUI detects an existing .env file in the OpenRAG root directory, it sources any variables from that file.
If the TUI detects OAuth credentials, it enforces the Advanced Setup path.
- Basic setup
- Advanced setup
Basic Setup generates all of the required values for OpenRAG except the OpenAI API key. Basic Setup does not set up OAuth connections for ingestion from cloud providers. For OAuth setup, use Advanced Setup. For information about the difference between basic (no auth) and OAuth in OpenRAG, see Authentication and document access.
1. To install OpenRAG with Basic Setup, click Basic Setup or press 1.

2. Click Generate Passwords to generate passwords for OpenSearch and Langflow.

   The OpenSearch password is required. The Langflow admin password is optional. If no Langflow admin password is generated, Langflow runs in autologin mode with no password required.

3. Paste your OpenAI API key in the OpenAI API key field.

4. Click Save Configuration. Your passwords are saved in the `.env` file used to start OpenRAG.

5. To start OpenRAG, click Start All Services. Startup pulls container images and runs them, so it can take some time. When startup is complete, the TUI displays the following:

   ```
   Services started successfully
   Command completed successfully
   ```

6. To open the OpenRAG application, click Open App.

7. Continue with Application Onboarding.
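The Generate Passwords button creates these secrets for you. If you ever write a `.env` file by hand instead, you need comparable random values; this Python snippet is one way to produce them, not the TUI's actual method:

```python
# Generate a URL-safe random secret, e.g. for a hand-written .env file.
# This is an illustration; the TUI's Generate Passwords button does this for you.
import secrets

password = secrets.token_urlsafe(24)  # 24 random bytes -> 32 URL-safe characters
print(password)
```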
1. To install OpenRAG with Advanced Setup, click Advanced Setup or press 2.

2. Click Generate Passwords to generate passwords for OpenSearch and Langflow.

   The OpenSearch password is required. The Langflow admin password is optional. If no Langflow admin password is generated, Langflow runs in autologin mode with no password required.

3. Paste your OpenAI API key in the OpenAI API key field.

4. Add your client ID and client secret values for Google or Microsoft OAuth. You can find these values with your OAuth provider. For more information, see the Google OAuth client or Microsoft Graph OAuth client documentation.

5. The OpenRAG TUI presents redirect URIs for your OAuth app. These are the URLs your OAuth provider redirects back to after user sign-in. Register these redirect URIs with your OAuth provider exactly as they are presented in the TUI.

6. Click Save Configuration.

7. To start OpenRAG, click Start All Services. Startup pulls container images and runs them, so it can take some time. When startup is complete, the TUI displays the following:

   ```
   Services started successfully
   Command completed successfully
   ```

8. To open the OpenRAG application, click Open App. You are presented with your provider's OAuth sign-in screen. After sign-in, you are redirected to the redirect URI.
Two additional variables are available for Advanced Setup:

- `LANGFLOW_PUBLIC_URL` controls where the Langflow web interface can be accessed. This is where users interact with their flows in a browser.
- `WEBHOOK_BASE_URL` controls where the endpoint for `/connectors/CONNECTOR_TYPE/webhook` is available. This connection enables real-time document synchronization with external services. Supported webhook endpoints:
  - Google Drive: `/connectors/google_drive/webhook`
  - OneDrive: `/connectors/onedrive/webhook`
  - SharePoint: `/connectors/sharepoint/webhook`
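As an illustration, the full webhook URL for each connector is the base URL joined with the paths listed above. The base URL below is a hypothetical `WEBHOOK_BASE_URL` value, not a real endpoint:

```python
# Compose full webhook URLs from WEBHOOK_BASE_URL and the connector paths.
# "https://openrag.example.com" is a hypothetical WEBHOOK_BASE_URL value.
base_url = "https://openrag.example.com"

connector_paths = {
    "google_drive": "/connectors/google_drive/webhook",
    "onedrive": "/connectors/onedrive/webhook",
    "sharepoint": "/connectors/sharepoint/webhook",
}

webhook_urls = {
    name: base_url.rstrip("/") + path
    for name, path in connector_paths.items()
}

for name, url in webhook_urls.items():
    print(f"{name}: {url}")
```

These are the URLs you would register with each external service so it can notify OpenRAG of document changes.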
Continue with Application Onboarding.
Application onboarding
The first time you start OpenRAG, whether using the TUI or a .env file, it's recommended that you complete application onboarding.
To skip onboarding, click Skip onboarding.
Values from onboarding can be changed later in the OpenRAG Settings page.
Choose one LLM provider and complete only those steps:
- OpenAI
- IBM watsonx.ai
- Ollama
- Enable Get API key from environment variable to automatically enter your key from the TUI-generated `.env` file. Alternatively, paste an OpenAI API key into the field.
- Under Advanced settings, select your Embedding Model and Language Model.
- To load 2 sample PDFs, enable Sample dataset. This is recommended, but not required.
- Click Complete.
- To complete the onboarding tasks, click What is OpenRAG, and then click Add a Document.
- Continue with the Quickstart.
- Complete the fields for watsonx.ai API Endpoint, IBM Project ID, and IBM API key. These values are found in your IBM watsonx deployment.
- Under Advanced settings, select your Embedding Model and Language Model.
- To load 2 sample PDFs, enable Sample dataset. This is recommended, but not required.
- Click Complete.
- To complete the onboarding tasks, click What is OpenRAG, and then click Add a Document.
- Continue with the Quickstart.
Ollama is not included with OpenRAG. To install Ollama, see the Ollama documentation.
- Enter your Ollama server's base URL.

  The default Ollama server address is `http://localhost:11434`. OpenRAG automatically transforms `localhost` so the containers can reach services running on the host machine, and it sends a test connection to your Ollama server to confirm connectivity.
- Select the Embedding Model and Language Model your Ollama server is running. OpenRAG retrieves the available models from your Ollama server.
- To load 2 sample PDFs, enable Sample dataset. This is recommended, but not required.
- Click Complete.
- To complete the onboarding tasks, click What is OpenRAG, and then click Add a Document.
- Continue with the Quickstart.
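The `localhost` transformation for the Ollama URL can be sketched as follows. This is an illustration, not OpenRAG's actual implementation; `host.docker.internal` is the conventional hostname containers use to reach the host machine:

```python
# Sketch of rewriting a host-local URL so a container can reach it.
# Illustrative only; OpenRAG's actual transformation may differ.
from urllib.parse import urlparse, urlunparse

def rewrite_for_container(url: str, host_alias: str = "host.docker.internal") -> str:
    """Replace a localhost hostname with an alias reachable from inside a container."""
    parts = urlparse(url)
    if parts.hostname in ("localhost", "127.0.0.1"):
        netloc = host_alias if parts.port is None else f"{host_alias}:{parts.port}"
        parts = parts._replace(netloc=netloc)
    return urlunparse(parts)

print(rewrite_for_container("http://localhost:11434"))
# → http://host.docker.internal:11434
```

URLs that already point at a non-local host pass through unchanged.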
Close the OpenRAG TUI
To close the OpenRAG TUI, press q. The OpenRAG containers continue to run until you stop them. For more information, see Manage OpenRAG containers with the TUI.
To start the TUI again, run uv run openrag.
Manage OpenRAG containers with the TUI
After installation, the TUI can deploy, manage, and upgrade your OpenRAG containers.
Start all services
Click Start All Services to start the OpenRAG containers.
The TUI automatically detects your container runtime, and then checks whether your machine has compatible GPU support by checking for CUDA, nvidia-smi, and Docker or Podman GPU runtime support. This check determines which Docker Compose file OpenRAG uses.
The TUI then pulls the images and deploys the containers with the following command.
docker compose up -d
If images are missing, the TUI runs docker compose pull, then runs docker compose up -d.
Status
The Status menu displays information on your container deployment. Here you can check container health, find your service ports, view logs, and upgrade your containers.
To view streaming logs, select the container you want to view, and press l. To copy your logs, click Copy to Clipboard.
To upgrade your containers, click Upgrade. Upgrade runs docker compose pull and then docker compose up -d --force-recreate. The first command pulls the latest OpenRAG images, and the second recreates the containers with your data persisted.
To reset your containers, click Reset. Reset gives you a completely fresh start: it deletes all of your data, including OpenSearch data, uploaded documents, and authentication settings. Reset runs two commands. The first stops and removes all containers, volumes, and local images.
docker compose down --volumes --remove-orphans --rmi local
When the first command completes, OpenRAG removes any remaining unused Docker objects with prune.
docker system prune -f
Native services status
A native service in OpenRAG is a service that runs locally on your machine, not within a container.
The docling serve process is a native service because it's a document processing service that runs directly on your machine and is controlled separately from the containers.
To stop or restart docling serve or any other native service, in the TUI Status menu, click Stop or Restart.
To view the status, port, or PID of a native service, in the TUI main menu, click Status.
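If you want to confirm outside the TUI that a native service is listening, a quick TCP check works. The port below is a placeholder; use the port shown in the TUI Status menu:

```python
# Minimal TCP reachability check for a native service.
# The port is a placeholder; use the value shown in the TUI Status menu.
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("127.0.0.1", 5001))  # placeholder port
```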
Diagnostics
The Diagnostics menu provides health monitoring for your container runtimes and monitoring of your OpenSearch security.