Install OpenRAG with TUI
Install the OpenRAG Python wheel, and then run the OpenRAG Terminal User Interface (TUI) to start your OpenRAG deployment with a guided setup process.
The OpenRAG Terminal User Interface (TUI) allows you to set up, configure, and monitor your OpenRAG deployment directly from the terminal, on any operating system.
Instead of starting OpenRAG using Docker commands and manually editing values in the .env file, the TUI walks you through the setup. It prompts for variables where required, creates a .env file for you, and then starts OpenRAG.
Once OpenRAG is running, use the TUI to monitor your application, control your containers, and retrieve logs.
If you prefer running Docker commands and manually editing .env files, see Install with Docker.
Prerequisites
- Install Python version 3.10 to 3.13
- Install uv
- Install Podman (recommended) or Docker
- Install Docker Compose. If you use Podman, install podman-compose or alias Docker Compose commands to Podman commands, as shown in the example after this list.
- Create an OpenAI API key. This key is required to start OpenRAG, but you can choose a different model provider during Application Onboarding.
- Optional: For GPU support, the OpenRAG host machine needs an NVIDIA GPU, CUDA support, and compatible NVIDIA drivers. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
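If you use Podman, one common approach is to point Docker-style commands at Podman. The exact setup depends on your shell and Podman version, so treat the following as an illustrative sketch rather than the only option:
# Option 1: route docker commands to Podman (requires a compose provider such as podman-compose)
alias docker=podman
docker compose version
# Option 2: call podman-compose directly
podman-compose --version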
Install the OpenRAG Python wheel
The .whl file is currently available as an internal download during public preview, and will be published to PyPI in a future release.
The OpenRAG wheel installs the Terminal User Interface (TUI) for configuring and running OpenRAG.
- Create a new project with a virtual environment using uv init.
uv init YOUR_PROJECT_NAME
cd YOUR_PROJECT_NAME
The (venv) prompt doesn't change, but uv commands will automatically use the project's virtual environment. For more information on virtual environments, see the uv documentation.
- Add the local OpenRAG wheel to your project's virtual environment.
uv add PATH/TO/openrag-VERSION-py3-none-any.whl
Replace PATH/TO/ and VERSION with the path and version of your downloaded OpenRAG .whl file. For example, if your .whl file is in the ~/Downloads directory, the command is uv add ~/Downloads/openrag-0.1.8-py3-none-any.whl.
- Ensure all dependencies are installed and updated in your virtual environment.
uv sync
- Start the OpenRAG TUI.
uv run openrag
- Continue with Set up OpenRAG with the TUI.
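For reference, the full sequence from the steps above looks like this when run end to end; the project name is arbitrary and the wheel path matches the example above:
uv init YOUR_PROJECT_NAME
cd YOUR_PROJECT_NAME
uv add ~/Downloads/openrag-0.1.8-py3-none-any.whl
uv sync
uv run openrag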
Set up OpenRAG with the TUI
The TUI creates a .env file in your OpenRAG directory root and starts OpenRAG.
If the TUI detects a .env file in the OpenRAG root directory, it sources any variables from the .env file.
If the TUI detects OAuth credentials, it enforces the Advanced Setup path.
- Basic setup
- Advanced setup
Basic Setup generates all of the required values for OpenRAG except the OpenAI API key. Basic Setup does not set up OAuth connections for ingestion from cloud providers. For OAuth setup, use Advanced Setup. For information about the difference between basic (no auth) and OAuth in OpenRAG, see Authentication and document access.
- To install OpenRAG with Basic Setup, click Basic Setup or press 1.
- Click Generate Passwords to generate passwords for OpenSearch and Langflow.
- Paste your OpenAI API key in the OpenAI API key field.
- Click Save Configuration.
Your passwords are saved in the .env file used to start OpenRAG. A sketch of what this file can contain follows these steps.
- To start OpenRAG, click Start Container Services.
Startup pulls container images and runs them, so it can take some time.
When startup is complete, the TUI displays the following:
Services started successfully
Command completed successfully
- To open the OpenRAG application, click Open App.
- Continue with Application Onboarding.
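The generated .env file is plain text that you can inspect or edit later. The snippet below is only an illustrative sketch; the actual variable names and layout are written by the TUI, so treat every name here as a hypothetical example and rely on the file the TUI produces:
# Hypothetical example of a TUI-generated .env file; real variable names may differ
OPENAI_API_KEY=sk-...
OPENSEARCH_PASSWORD=generated-by-the-tui
LANGFLOW_PASSWORD=generated-by-the-tui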
- To install OpenRAG with Advanced Setup, click Advanced Setup or press 2.
- Click Generate Passwords to generate passwords for OpenSearch and Langflow.
- Paste your OpenAI API key in the OpenAI API key field.
- Add your client and secret values for Google or Microsoft OAuth. These values can be found with your OAuth provider. For more information, see the Google OAuth client or Microsoft Graph OAuth client documentation.
- The OpenRAG TUI presents redirect URIs for your OAuth app. These are the URLs your OAuth provider will redirect back to after user sign-in. Register these redirect values with your OAuth provider as they are presented in the TUI.
- Click Save Configuration.
- To start OpenRAG, click Start Container Services.
Startup pulls container images and runs them, so it can take some time.
When startup is complete, the TUI displays the following:
Services started successfully
Command completed successfully
- To open the OpenRAG application, click Open App, press 6, or navigate to http://localhost:3000. You are presented with your provider's OAuth sign-in screen. After sign-in, you are redirected to the redirect URI.
Two additional variables are available for Advanced Setup (see the sketch after these steps):
- LANGFLOW_PUBLIC_URL controls where the Langflow web interface can be accessed. This is where users interact with their flows in a browser.
- WEBHOOK_BASE_URL controls where the endpoint for /connectors/CONNECTOR_TYPE/webhook will be available. This connection enables real-time document synchronization with external services. Supported webhook endpoints:
  - Google Drive: /connectors/google_drive/webhook
  - OneDrive: /connectors/onedrive/webhook
  - SharePoint: /connectors/sharepoint/webhook
- Continue with Application Onboarding.
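For illustration, here is how these two variables might appear in the .env file; the domains are placeholders, not defaults:
LANGFLOW_PUBLIC_URL=https://langflow.example.com
WEBHOOK_BASE_URL=https://openrag.example.com
# With this WEBHOOK_BASE_URL, the Google Drive webhook would be served at
# https://openrag.example.com/connectors/google_drive/webhook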
Application onboarding
The first time you start OpenRAG, whether using the TUI or a .env file, you must complete application onboarding.
Values from onboarding can be changed later in the OpenRAG Settings page.
Choose one LLM provider and complete only those steps:
- OpenAI
- IBM watsonx.ai
- Ollama
- Enable Get API key from environment variable to automatically enter your key from the TUI-generated .env file. Alternatively, paste an OpenAI API key into the field.
- Under Advanced settings, select your Embedding Model and Language Model.
- To load 2 sample PDFs, enable Sample dataset. This is recommended, but not required.
- Click Complete.
- Continue with the Quickstart.
- Complete the fields for watsonx.ai API Endpoint, IBM API key, and IBM Project ID. These values are found in your IBM watsonx deployment.
- Under Advanced settings, select your Embedding Model and Language Model.
- To load 2 sample PDFs, enable Sample dataset. This is recommended, but not required.
- Click Complete.
- Continue with the Quickstart.
Ollama is not included with OpenRAG. To install Ollama, see the Ollama documentation.
- Enter your Ollama server's base URL address.
The default Ollama server address is http://localhost:11434. OpenRAG automatically transforms localhost to access services outside of the container, and sends a test connection to your Ollama server to confirm connectivity. You can also check the server yourself, as shown in the example after these steps.
- Select the Embedding Model and Language Model your Ollama server is running. OpenRAG retrieves the available models from your Ollama server.
- To load 2 sample PDFs, enable Sample dataset. This is recommended, but not required.
- Click Complete.
- Continue with the Quickstart.
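If the test connection fails, you can confirm that the Ollama server is reachable and see which models it serves by calling the standard Ollama API from the host machine; adjust the address if your server isn't local:
curl http://localhost:11434/api/tags
Keep in mind that inside a container, localhost refers to the container itself, which is why OpenRAG rewrites the address as described above.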
Manage OpenRAG containers with the TUI
After installation, the TUI can deploy, manage, and upgrade your OpenRAG containers.
Start container services
Click Start Container Services to start the OpenRAG containers.
The TUI automatically detects your container runtime, and then checks whether your machine has compatible GPU support by checking for CUDA, nvidia-smi, and Docker/Podman runtime support. This check determines which Docker Compose file OpenRAG uses.
The TUI then pulls the images and deploys the containers with the following command.
docker compose up -d
If images are missing, the TUI runs docker compose pull, then runs docker compose up -d.
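To see what the GPU check is likely to find on your host, you can run the underlying tools yourself. The CUDA image tag below is only an example for testing Docker's GPU runtime, not something OpenRAG requires:
nvidia-smi
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi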
Start native services
A "native" service in OpenRAG refers to a service run natively on your machine, and not within a container.
The docling serve process is a native service in OpenRAG: it's a document processing service that runs on your local machine and is controlled separately from the containers.
To start or stop docling serve or any other native services, in the TUI main menu, click Start Native Services or Stop Native Services.
To view the status, port, or PID of a native service, in the TUI main menu, click Status.
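Outside the TUI, a quick shell check can confirm whether the docling serve process is running. The process name pattern is an assumption; the Status menu remains the authoritative source for its port and PID:
pgrep -fl docling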
Status
The Status menu displays information on your container deployment. Here you can check container health, find your service ports, view logs, and upgrade your containers.
To view streaming logs, select the container you want to view, and press l. To copy your logs, click Copy to Clipboard.
To upgrade your containers, click Upgrade.
Upgrade runs docker compose pull and then docker compose up -d --force-recreate.
The first command pulls the latest OpenRAG images, and the second recreates the containers while keeping your data persisted.
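For reference, the equivalent manual upgrade from the OpenRAG directory root is:
docker compose pull
docker compose up -d --force-recreate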
To reset your containers, click Reset. Reset gives you a completely fresh start: it deletes all of your data, including OpenSearch data, uploaded documents, and authentication. Reset runs two commands. First, it stops and removes all containers, volumes, and local images.
docker compose down --volumes --remove-orphans --rmi local
When the first command is complete, OpenRAG removes any additional Docker objects with prune.
docker system prune -f
Diagnostics
The Diagnostics menu provides health monitoring for your container runtimes and for your OpenSearch security.
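If you want to check the OpenSearch service directly rather than through the Diagnostics menu, a standard cluster health request works. The port, protocol, and credentials below are assumptions; use the values from your .env file and the ports shown in the Status menu:
curl -k -u admin:YOUR_OPENSEARCH_PASSWORD https://localhost:9200/_cluster/health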