Install OpenRAG with the TUI
Install OpenRAG and then run the OpenRAG Terminal User Interface (TUI) to start your OpenRAG deployment with a guided setup process.
The OpenRAG Terminal User Interface (TUI) allows you to set up, configure, and monitor your OpenRAG deployment directly from the terminal.
Instead of starting OpenRAG using Docker commands and manually editing values in the .env file, the TUI walks you through the setup. It prompts for variables where required, creates a .env file for you, and then starts OpenRAG.
Once OpenRAG is running, use the TUI to monitor your application, control your containers, and retrieve logs.
If you prefer running Podman or Docker containers and manually editing .env files, see Install OpenRAG Containers.
Prerequisites
- All OpenRAG installations require Python version 3.10 to 3.13.
- If you aren't using the automatic installer script, install the following:
  - uv.
  - Podman (recommended) or Docker.
  - podman-compose or Docker Compose. To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands, as shown in the example at the end of these prerequisites.
- Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL).
Install WSL for OpenRAG
- Install WSL with the Ubuntu distribution using WSL 2:
wsl --install -d Ubuntu
For new installations, the wsl --install command uses WSL 2 and Ubuntu by default. For existing WSL installations, you can change the distribution and check the WSL version.
Known limitation: OpenRAG isn't compatible with nested virtualization, which can cause networking issues. Don't install OpenRAG on a WSL distribution that is installed inside a Windows VM. Instead, install OpenRAG on your base OS or in a non-nested Linux VM.
- Start your WSL Ubuntu distribution if it doesn't start automatically.
- Install Docker Desktop for Windows with WSL 2. When you reach the Docker Desktop WSL integration settings, make sure your Ubuntu distribution is enabled, and then click Apply & Restart to enable Docker support in WSL.
- Install and run OpenRAG from within your WSL Ubuntu distribution.
If you encounter issues with port forwarding or the Windows Firewall, you might need to adjust the Hyper-V firewall settings to allow communication between your WSL distribution and the Windows host. For more troubleshooting advice for networking issues, see Troubleshooting WSL common issues.
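For example, the following PowerShell commands, run on the Windows host, check your WSL versions and relax the Hyper-V firewall's default inbound policy for WSL. The firewall command is a hedged sketch based on Microsoft's WSL networking documentation; verify it against that documentation before applying it.
# Check installed distributions and their WSL versions.
wsl --list --verbose
# Hedged example: allow inbound connections from the Windows host to WSL
# through the Hyper-V firewall. The GUID is the WSL VM creator ID documented
# by Microsoft; verify it for your system before applying.
Set-NetFirewallHyperVVMSetting -Name '{40E0AC32-46A5-438A-A0B2-2B479E8F2E90}' -DefaultInboundAction Allow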
- Prepare model providers and credentials.
During Application Onboarding, you must select language model and embedding model providers. If your chosen provider offers both types, you can use the same provider for both selections. If your provider offers only one type, such as Anthropic, you must select two providers.
Gather the credentials and connection details for your chosen model providers before starting onboarding:
- OpenAI: Create an OpenAI API key.
- Anthropic language models: Create an Anthropic API key.
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
- Ollama: Use the Ollama documentation to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
- Optional: Install GPU support with an NVIDIA GPU, CUDA support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
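As referenced in the prerequisites, one way to alias Docker Compose commands to Podman is a shell alias. This is a minimal sketch, assuming Podman's compose subcommand is available and delegates to a compose provider such as podman-compose:
# Route `docker` invocations to Podman; `docker compose up -d` then runs as
# `podman compose up -d`, which Podman forwards to its compose provider.
alias docker=podman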
Install OpenRAG
Choose an installation method based on your needs:
- For new users, the automatic installer script detects and installs prerequisites and then runs OpenRAG.
- For a quick test, use uvx to run OpenRAG without creating a project or modifying files.
- Use uv add to install OpenRAG as a managed dependency in a new or existing Python project.
- Use uv pip install to install OpenRAG into an existing virtual environment.
- Automatic installer
- Quick test with uvx
- Python project with uv add
- Existing virtual environment with uv pip install
The script detects and installs uv, Docker/Podman, and Docker Compose prerequisites, then runs OpenRAG with uvx.
- Create a directory to store the OpenRAG configuration files:
mkdir openrag-workspace
cd openrag-workspace
- Run the installer:
curl -fsSL https://docs.openr.ag/files/run_openrag_with_prereqs.sh | bash
The TUI creates a .env file and docker-compose files in the current working directory.
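If you prefer to review the installer script before it runs, download it first, inspect it, and then run it:
curl -fsSL https://docs.openr.ag/files/run_openrag_with_prereqs.sh -o run_openrag_with_prereqs.sh
# Inspect the script, then run it.
less run_openrag_with_prereqs.sh
bash run_openrag_with_prereqs.sh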
Use uvx to quickly run OpenRAG without creating a project or modifying any files.
- Create a directory to store the OpenRAG configuration files:
mkdir openrag-workspace
cd openrag-workspace
- Run OpenRAG:
uvx openrag
To run a specific version:
uvx --from openrag==0.1.30 openrag
The TUI creates a .env file and docker-compose files in the current working directory.
Use uv add to install OpenRAG as a dependency in your Python project. This adds OpenRAG to your pyproject.toml and lockfile, making your installation reproducible and version-controlled.
- Create a new project with a virtual environment:
uv init YOUR_PROJECT_NAME
cd YOUR_PROJECT_NAME
Your shell prompt doesn't change to show (venv), but uv commands automatically use the project's virtual environment.
- Add OpenRAG to your project:
uv add openrag
To add a specific version:
uv add openrag==0.1.30
- Start the OpenRAG TUI:
uv run openrag
Install a local wheel
If you downloaded the OpenRAG wheel to your local machine, install it by specifying its path:
- Add the wheel to your project:
uv add PATH/TO/openrag-VERSION-py3-none-any.whl
Replace PATH/TO/ and VERSION with the path and version of your downloaded OpenRAG .whl file.
- Run OpenRAG:
uv run openrag
Use uv pip install to install OpenRAG into an existing virtual environment that isn't managed by uv.
For new projects, uv add is recommended as it manages dependencies in your project's lockfile.
- Activate your virtual environment. If you need to create one first, see the sketch after these steps.
- Install OpenRAG:
uv pip install openrag
- Run OpenRAG:
uv run openrag
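If you don't already have a virtual environment, this is a minimal sketch for creating and activating one with uv:
# Create a virtual environment in .venv and activate it.
uv venv
source .venv/bin/activate   # On Windows: .venv\Scripts\activate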
Continue with Set up OpenRAG with the TUI.
If you encounter errors during installation, see Troubleshoot OpenRAG.
Set up OpenRAG with the TUI
The TUI creates a .env file in your OpenRAG directory root and starts OpenRAG.
If the TUI detects a .env file in the OpenRAG root directory, it sources any variables from the .env file.
If the TUI detects OAuth credentials, it enforces the Advanced Setup path.
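For reference, a generated .env file might look like the following sketch. OPENRAG_VERSION appears elsewhere in this guide; the other variable names are illustrative placeholders, not necessarily OpenRAG's actual keys, so check your generated .env file for the real names.
# Hypothetical sketch of a TUI-generated .env file. Variable names other than
# OPENRAG_VERSION are illustrative placeholders.
OPENRAG_VERSION=latest
OPENSEARCH_PASSWORD=generated-password        # illustrative name; required
LANGFLOW_ADMIN_PASSWORD=generated-password    # illustrative name; optional (autologin if unset)
OPENAI_API_KEY=your-openai-api-key            # optional; can be provided during onboarding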
- Basic setup
- Advanced setup
Basic Setup can generate all of the required values for OpenRAG. The OpenAI API key is optional and can be provided during onboarding. Basic Setup does not set up OAuth connections for ingestion from cloud providers. For OAuth setup, use Advanced Setup. For information about the difference between basic (no auth) and OAuth in OpenRAG, see Authentication and document access.
- To install OpenRAG with Basic Setup, click Basic Setup or press 1.
- Click Generate Passwords to generate passwords for OpenSearch and Langflow.
The OpenSearch password is required. The Langflow admin password is optional. If no Langflow admin password is generated, Langflow runs in autologin mode with no password required.
- Optional: Paste your OpenAI API key in the OpenAI API key field. You can also provide this during onboarding or choose a different model provider.
- Click Save Configuration. Your passwords are saved in the .env file used to start OpenRAG.
- To start OpenRAG, click Start All Services. Startup pulls container images and runs them, so it can take some time. When startup is complete, the TUI displays the following:
Services started successfully
Command completed successfully
- To start the Docling service, under Native Services, click Start.
- To open the OpenRAG application, navigate to the TUI main menu, and then click Open App. Alternatively, in your browser, navigate to localhost:3000.
- Continue with Application Onboarding.
- To install OpenRAG with Advanced Setup, click Advanced Setup or press 2.
- Click Generate Passwords to generate passwords for OpenSearch and Langflow.
The OpenSearch password is required. The Langflow admin password is optional. If no Langflow admin password is generated, Langflow runs in autologin mode with no password required.
- Paste your OpenAI API key in the OpenAI API key field.
- Add your client and secret values for Google or Microsoft OAuth. These values can be found with your OAuth provider. For more information, see the Google OAuth client or Microsoft Graph OAuth client documentation.
- The OpenRAG TUI presents redirect URIs for your OAuth app. These are the URLs your OAuth provider redirects back to after user sign-in. Register these redirect values with your OAuth provider exactly as they are presented in the TUI.
- Click Save Configuration.
- To start OpenRAG, click Start All Services. Startup pulls container images and runs them, so it can take some time. When startup is complete, the TUI displays the following:
Services started successfully
Command completed successfully
- To start the Docling service, under Native Services, click Start.
- To open the OpenRAG application, navigate to the TUI main menu, and then click Open App. Alternatively, in your browser, navigate to localhost:3000. You are presented with your provider's OAuth sign-in screen. After sign-in, you are redirected to the redirect URI.
Two additional variables are available for Advanced Setup:
  - LANGFLOW_PUBLIC_URL controls where the Langflow web interface can be accessed. This is where users interact with their flows in a browser.
  - WEBHOOK_BASE_URL controls where the endpoint for /connectors/CONNECTOR_TYPE/webhook is available. This connection enables real-time document synchronization with external services. Supported webhook endpoints (see the sketch after this procedure):
    - Google Drive: /connectors/google_drive/webhook
    - OneDrive: /connectors/onedrive/webhook
    - SharePoint: /connectors/sharepoint/webhook
- Continue with Application Onboarding.
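The following sketch shows how these two variables and the webhook paths fit together; the hostnames are placeholders for your own deployment:
# In the .env file: public URLs for Langflow and the webhook endpoints.
LANGFLOW_PUBLIC_URL=https://langflow.example.com
WEBHOOK_BASE_URL=https://openrag.example.com
# With this WEBHOOK_BASE_URL, the connector webhooks resolve to:
#   https://openrag.example.com/connectors/google_drive/webhook
#   https://openrag.example.com/connectors/onedrive/webhook
#   https://openrag.example.com/connectors/sharepoint/webhook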
Application onboarding
The first time you start OpenRAG, whether using the TUI or a .env file, you must complete application onboarding.
Most values from onboarding can be changed later in the OpenRAG Settings page, but there are important restrictions.
The language model provider and embeddings model provider can only be selected at onboarding. To change your provider selection later, you must reinstall OpenRAG.
You can use different providers for your language model and embedding model, such as Anthropic for the language model and OpenAI for the embeddings model.
Choose one LLM provider and complete these steps:
- Anthropic
- OpenAI
- IBM watsonx.ai
- Ollama
Anthropic does not provide embedding models. If you select Anthropic for your language model, you must then select a different provider for embeddings.
- Enable Use environment Anthropic API key to automatically use your key from the .env file. Alternatively, paste an Anthropic API key into the field.
- Under Advanced settings, select your Language Model.
- Click Complete.
- In the second onboarding panel, select a provider for embeddings and select your Embedding Model.
- To complete the onboarding tasks, click What is OpenRAG, and then click Add a Document. Alternatively, click Skip overview.
- Continue with the Quickstart.
- Enable Get API key from environment variable to automatically enter your key from the TUI-generated .env file. Alternatively, paste an OpenAI API key into the field.
- Under Advanced settings, select your Language Model.
- Click Complete.
- In the second onboarding panel, select a provider for embeddings and select your Embedding Model.
- To complete the onboarding tasks, click What is OpenRAG, and then click Add a Document. Alternatively, click Skip overview.
- Continue with the Quickstart.
- Complete the fields for watsonx.ai API Endpoint, IBM Project ID, and IBM API key. These values are found in your IBM watsonx deployment.
- Under Advanced settings, select your Language Model.
- Click Complete.
- In the second onboarding panel, select a provider for embeddings and select your Embedding Model.
- To complete the onboarding tasks, click What is OpenRAG, and then click Add a Document. Alternatively, click Skip overview.
- Continue with the Quickstart.
Ollama is not included with OpenRAG. To install Ollama, see the Ollama documentation.
- To connect to an Ollama server running on your local machine, enter your Ollama server's base URL address. The default Ollama server address is http://localhost:11434. OpenRAG connects to the Ollama server and populates the model lists with the server's available models.
- Select the Embedding Model and Language Model your Ollama server is running.
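To confirm that your Ollama server is reachable before onboarding, you can query its model list endpoint. This uses Ollama's standard API; substitute your remote server's base URL if the server isn't local:
# Lists the models available on the Ollama server; OpenRAG populates its
# model menus from the same list.
curl http://localhost:11434/api/tags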
Ollama model selection and external server configuration
Using Ollama as your OpenRAG language model provider offers greater flexibility and more configuration options, but can be overwhelming at first. These recommendations are a reasonable starting point for users with at least one GPU and experience running LLMs locally.
For best performance, OpenRAG recommends OpenAI's gpt-oss:20b language model. However, this model uses 16 GB of RAM, so consider using Ollama Cloud or running Ollama on a remote machine.
For generating embeddings, OpenRAG recommends the nomic-embed-text embedding model, which provides high-quality embeddings optimized for retrieval tasks.
To run models in Ollama Cloud, follow these steps:
- Sign in to Ollama Cloud. In a terminal, enter ollama signin to connect your local environment with Ollama Cloud.
- To run the model, in Ollama, select the gpt-oss:20b-cloud model, or run ollama run gpt-oss:20b-cloud in a terminal. Ollama Cloud models run at the same URL as your local Ollama server, http://localhost:11434, and are automatically offloaded to Ollama's cloud service.
- Connect OpenRAG to the same local Ollama server as you would for local models in onboarding, using the default address of http://localhost:11434.
- In the Language model field, select the gpt-oss:20b-cloud model.
To run models on a remote Ollama server, follow these steps:
- Ensure your remote Ollama server is accessible from your OpenRAG instance.
- In the Ollama Base URL field, enter your remote Ollama server's base URL, such as http://your-remote-server:11434. OpenRAG connects to the remote Ollama server and populates the model lists with the server's available models.
- Select your Embedding model and Language model from the available options.
- Click Complete.
- To complete the onboarding tasks, click What is OpenRAG, and then click Add a Document.
- Continue with the Quickstart.
Exit the OpenRAG TUI
To exit the OpenRAG TUI, navigate to the main menu, and then press q. The OpenRAG containers continue to run until they are stopped. For more information, see Manage OpenRAG containers with the TUI.
To relaunch the TUI, run uv run openrag.
If you installed OpenRAG with uvx, run uvx openrag.
Manage OpenRAG containers with the TUI
After installation, the TUI can deploy, manage, and upgrade your OpenRAG containers.
Start all services
Click Start All Services to start the OpenRAG containers.
The TUI automatically detects your container runtime, and then checks whether your machine has compatible GPU support by checking for CUDA, nvidia-smi, and Docker/Podman GPU runtime support. This check determines which Docker Compose file OpenRAG uses.
The TUI then pulls the images and deploys the containers with the following command.
docker compose up -d
If images are missing, the TUI runs docker compose pull, then runs docker compose up -d.
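Conceptually, the detection resembles the following shell sketch. The compose file names are illustrative placeholders, not necessarily the files OpenRAG ships:
# Hypothetical sketch of the GPU check; file names are illustrative.
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
  COMPOSE_FILE=docker-compose-gpu.yml   # CUDA-capable host detected
else
  COMPOSE_FILE=docker-compose-cpu.yml   # CPU-only fallback
fi
docker compose -f "$COMPOSE_FILE" up -d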
Status
The Status menu displays information on your container deployment. Here you can check container health, find your service ports, view logs, and upgrade your containers.
To view streaming logs, select the container you want to view, and press l. To copy your logs, click Copy to Clipboard.
To upgrade your containers, click Upgrade.
Upgrade runs docker compose pull and then docker compose up -d --force-recreate.
For more information, see Upgrade OpenRAG containers with the TUI.
To reset your containers, click Reset. Reset gives you a completely fresh start: it deletes all of your data, including OpenSearch data, uploaded documents, and authentication. Reset runs two commands. First, it stops and removes all containers, volumes, and local images:
docker compose down --volumes --remove-orphans --rmi local
When the first command is complete, OpenRAG removes any additional Docker objects with prune:
docker system prune -f
Native services status
A native service in OpenRAG is a service that runs locally on your machine rather than in a container.
The docling serve process is a native service because it's a document processing service that runs on your local machine and is controlled separately from the containers.
To stop or restart docling serve or any other native service, in the TUI Status menu, click Stop or Restart.
To view the status, port, or PID of a native service, in the TUI main menu, click Status.
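To check the Docling service outside the TUI, you can probe it directly. This sketch assumes docling-serve's default port of 5001 and its health endpoint; confirm the actual port in the TUI Status menu:
# Assumes the docling-serve default port; check the real port in the TUI Status menu.
curl http://localhost:5001/health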
Upgrade OpenRAG
To upgrade OpenRAG, upgrade the OpenRAG Python package, and then upgrade the OpenRAG containers using the OpenRAG TUI.
Upgrading the OpenRAG Python package updates the TUI and Python code, but container versions are controlled separately by environment variables in your .env file.
Upgrade the OpenRAG Python package
Use the following steps to upgrade the OpenRAG Python package to the latest version from PyPI. After upgrading the Python package, you should also upgrade your OpenRAG containers.
- Automatic installer / uvx
- Python project with uv add
- Existing virtual environment with uv pip install
If you installed OpenRAG using the automatic installer or uvx, follow these steps to upgrade:
- Navigate to your OpenRAG workspace directory:
cd openrag-workspace
- Upgrade the OpenRAG package:
uvx --from openrag openrag
To upgrade to a specific version:
uvx --from openrag==0.1.33 openrag
- After upgrading the Python package, upgrade your containers.
- Navigate to your project directory:
cd YOUR_PROJECT_NAME
- Update OpenRAG to the latest version:
uv add --upgrade openrag
To upgrade to a specific version:
uv add --upgrade openrag==0.1.33
- Start the OpenRAG TUI:
uv run openrag
- After upgrading the Python package, upgrade your containers.
- Activate your virtual environment.
- Upgrade OpenRAG:
uv pip install --upgrade openrag
To upgrade to a specific version:
uv pip install --upgrade openrag==0.1.33
- Start the OpenRAG TUI:
uv run openrag
- After upgrading the Python package, upgrade your containers.
Upgrade OpenRAG containers with the TUI
After upgrading the OpenRAG Python package, upgrade your containers to ensure they match the latest version.
Upgrade runs docker compose pull, which pulls container images based on versions specified in your .env file.
OPENRAG_VERSION is set to latest by default, so it pulls the latest available container images.
- In the OpenRAG TUI, click Status, and then click Upgrade.
- When the upgrade completes, close the Status window and continue using OpenRAG.
If you encounter a langflow container already exists error during upgrade, see Langflow container already exists during upgrade in the troubleshooting guide.
To pin container versions to a specific release other than latest, set the OPENRAG_VERSION in your .env file:
OPENRAG_VERSION=0.1.33
For more information, see System settings environment variables.
Diagnostics
The Diagnostics menu provides health monitoring for your container runtimes and your OpenSearch security configuration.
Reinstall OpenRAG
To reinstall OpenRAG with a completely fresh setup:
- Reset your containers using the Reset button in the TUI Status menu. This removes all containers, volumes, and data.
- Optional: Delete your project's .env file. The Reset operation does not remove your project's .env file, so your passwords, API keys, and OAuth settings are preserved. If you delete the .env file, run the Set up OpenRAG with the TUI process again to create a new configuration.
- In the TUI Setup menu, follow these steps from Basic Setup:
- Click Start All Services to pull container images and start them.
- Under Native Services, click Start to start the Docling service.
- Click Open App to open the OpenRAG application.
- Continue with Application Onboarding.