Install OpenRAG with TUI
Install OpenRAG and then run the OpenRAG Terminal User Interface (TUI) to start your OpenRAG deployment with a guided setup process.
The OpenRAG Terminal User Interface (TUI) allows you to set up, configure, and monitor your OpenRAG deployment directly from the terminal.
Instead of starting OpenRAG using Docker commands and manually editing values in the .env file, the TUI walks you through the setup. It prompts for variables where required, creates a .env file for you, and then starts OpenRAG.
Once OpenRAG is running, use the TUI to monitor your application, control your containers, and retrieve logs.
If you prefer running Podman or Docker containers and manually editing .env files, see Install OpenRAG Containers.
Prerequisites
- All OpenRAG installations require Python version 3.13 or later.
- If you aren't using the automatic installer script, install the following:
- uv.
- Podman (recommended) or Docker.
- `podman-compose` or Docker Compose. To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
- Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL).
Install WSL for OpenRAG
- Install WSL with the Ubuntu distribution using WSL 2:

  ```shell
  wsl --install -d Ubuntu
  ```

  For new installations, the `wsl --install` command uses WSL 2 and Ubuntu by default. For existing WSL installations, you can change the distribution and check the WSL version.

  Known limitation: OpenRAG isn't compatible with nested virtualization, which can cause networking issues. Don't install OpenRAG on a WSL distribution that is installed inside a Windows VM. Instead, install OpenRAG on your base OS or a non-nested Linux VM.
- Start your WSL Ubuntu distribution if it doesn't start automatically.
- Install Docker Desktop for Windows with WSL 2. When you reach the Docker Desktop WSL integration settings, make sure your Ubuntu distribution is enabled, and then click Apply & Restart to enable Docker support in WSL.
- Install and run OpenRAG from within your WSL Ubuntu distribution.

  If you encounter issues with port forwarding or the Windows Firewall, you might need to adjust the Hyper-V firewall settings to allow communication between your WSL distribution and the Windows host. For more troubleshooting advice for networking issues, see Troubleshooting WSL common issues.
- Prepare model providers and credentials.

  During application onboarding, you must select language model and embedding model providers. If your chosen provider offers both types, you can use the same provider for both selections. If your provider offers only one type, such as Anthropic, you must select two providers.

  Gather the credentials and connection details for your chosen model providers before starting onboarding:
- OpenAI: Create an OpenAI API key.
- Anthropic language models: Create an Anthropic API key.
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
- Ollama: Use the Ollama documentation to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
- Optional: Install GPU support with an NVIDIA GPU, CUDA support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
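If you use Podman, the aliasing mentioned in the prerequisites can be sketched as shell configuration. This is one possible approach, not an official setup; adjust it for your shell and compose tooling:

```shell
# Add to ~/.bashrc or ~/.zshrc so Docker commands resolve to Podman.
# Assumes podman and podman-compose are installed and on PATH.
alias docker=podman
alias docker-compose=podman-compose
```

With these aliases active in your shell, Docker-style commands such as `docker-compose up -d` are handled by Podman instead.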
Install OpenRAG
Choose an installation method based on your needs:
- For new users, the automatic installer script detects and installs prerequisites and then runs OpenRAG.
- For a quick test, use `uvx` to run OpenRAG without creating a project or modifying files.
- Use `uv add` to install OpenRAG as a managed dependency in a new or existing Python project.
- Use `uv pip install` to install OpenRAG into an existing virtual environment.
- Automatic installer
- Quick test with uvx
- Python project with uv add
- Existing virtual environment with uv pip install
The script detects and installs the `uv`, Docker/Podman, and Docker Compose prerequisites, and then runs OpenRAG with `uvx`.
- Create a directory to store the OpenRAG configuration files:

  ```shell
  mkdir openrag-workspace
  cd openrag-workspace
  ```

- Run the installer:

  ```shell
  curl -fsSL https://docs.openr.ag/files/run_openrag_with_prereqs.sh | bash
  ```
The TUI creates a .env file and docker-compose files in the current working directory.
Use uvx to quickly run OpenRAG without creating a project or modifying any files.
- Create a directory to store the OpenRAG configuration files:

  ```shell
  mkdir openrag-workspace
  cd openrag-workspace
  ```

- Run OpenRAG:

  ```shell
  uvx openrag
  ```

  To run a specific version:

  ```shell
  uvx --from openrag==0.1.30 openrag
  ```
The TUI creates a .env file and docker-compose files in the current working directory.
Use uv add to install OpenRAG as a dependency in your Python project. This adds OpenRAG to your pyproject.toml and lockfile, making your installation reproducible and version-controlled.
- Create a new project with a virtual environment:

  ```shell
  uv init YOUR_PROJECT_NAME
  cd YOUR_PROJECT_NAME
  ```

  The `(venv)` prompt doesn't change, but `uv` commands automatically use the project's virtual environment.

- Add OpenRAG to your project:

  ```shell
  uv add openrag
  ```

  To add a specific version:

  ```shell
  uv add openrag==0.1.30
  ```

- Start the OpenRAG TUI:

  ```shell
  uv run openrag
  ```
Install a local wheel
If you downloaded the OpenRAG wheel to your local machine, install it by specifying its path:
- Add the wheel to your project:

  ```shell
  uv add PATH/TO/openrag-VERSION-py3-none-any.whl
  ```

  Replace `PATH/TO/` and `VERSION` with the path and version of your downloaded OpenRAG `.whl` file.

- Run OpenRAG:

  ```shell
  uv run openrag
  ```
Use uv pip install to install OpenRAG into an existing virtual environment that isn't managed by uv.
For new projects, uv add is recommended as it manages dependencies in your project's lockfile.
- Activate your virtual environment.

- Install OpenRAG:

  ```shell
  uv pip install openrag
  ```

- Run OpenRAG:

  ```shell
  uv run openrag
  ```
Continue with Set up OpenRAG with the TUI.
If you encounter errors during installation, see Troubleshoot OpenRAG.
Set up OpenRAG with the TUI
The OpenRAG setup process creates a `.env` file at the root of your OpenRAG directory, and then starts OpenRAG.
If the TUI detects an existing `.env` file in the OpenRAG root directory, it sources any variables from that file.
The TUI offers two setup methods to populate the required values. Basic Setup can generate all minimum required values for OpenRAG. However, Basic Setup doesn't enable OAuth connectors for cloud storage. If you want to use OAuth connectors to upload documents from cloud storage, select Advanced Setup. If OpenRAG detects OAuth credentials, it recommends Advanced Setup.
- Basic setup
- Advanced setup
- To install OpenRAG with Basic Setup, click Basic Setup or press 1.

- Click Generate Passwords to generate passwords for OpenSearch and Langflow.

  The OpenSearch password is required. The Langflow admin password is optional. If no Langflow admin password is generated, Langflow runs in autologin mode with no password required.

- Optional: Paste your OpenAI API key in the OpenAI API key field. You can also provide this during onboarding or choose a different model provider.

- Click Save Configuration. Your passwords are saved in the `.env` file used to start OpenRAG.

- To start OpenRAG, click Start All Services. Startup pulls container images and runs them, so it can take some time. When startup is complete, the TUI displays the following:

  ```
  Services started successfully
  Command completed successfully
  ```

- To start the Docling service, under Native Services, click Start.

- To open the OpenRAG application, navigate to the TUI main menu, and then click Open App. Alternatively, in your browser, navigate to `localhost:3000`.

- Continue with application onboarding.
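Outside the TUI, you can also confirm the containers came up by listing them with your container runtime. This is a sketch; container names depend on your compose files, and substitute `podman` if that is your runtime:

```shell
# List running containers and their status; fall back to a message if the
# docker CLI or daemon isn't available in this environment.
if command -v docker >/dev/null 2>&1; then
  docker ps --format '{{.Names}}\t{{.Status}}' 2>/dev/null \
    || echo "docker daemon not reachable"
else
  echo "docker CLI not found"
fi
```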
- To install OpenRAG with Advanced Setup, click Advanced Setup or press 2.

- Click Generate Passwords to generate passwords for OpenSearch and Langflow.

  The OpenSearch password is required. The Langflow admin password is optional. If no Langflow admin password is generated, Langflow runs in autologin mode with no password required.

- Paste your OpenAI API key in the OpenAI API key field.

- If you want to upload documents from external storage, such as Google Drive, add the required OAuth credentials for the connectors that you want to use. These settings can be populated automatically if OpenRAG detects these credentials in a `.env` file in the OpenRAG installation directory.

  - Amazon: Provide your AWS Access Key ID and AWS Secret Access Key with access to your S3 instance. For more information, see the AWS documentation on Configuring access to AWS applications.
  - Google: Provide your Google OAuth Client ID and Google OAuth Client Secret. You can generate these in the Google Cloud Console. For more information, see the Google OAuth client documentation.
  - Microsoft: For the Microsoft OAuth Client ID and Microsoft OAuth Client Secret, provide Azure application registration credentials for SharePoint and OneDrive. For more information, see the Microsoft Graph OAuth client documentation.

  You can manage OAuth credentials later, but it is recommended to configure them during initial setup.
- The OpenRAG TUI presents redirect URIs for your OAuth app. These are the URLs your OAuth provider redirects back to after user sign-in. Register these redirect values with your OAuth provider as they are presented in the TUI.
- Click Save Configuration.

- To start OpenRAG, click Start All Services. Startup pulls container images and runs them, so it can take some time. When startup is complete, the TUI displays the following:

  ```
  Services started successfully
  Command completed successfully
  ```

- To start the Docling service, under Native Services, click Start.

- To open the OpenRAG application, navigate to the TUI main menu, and then click Open App. Alternatively, in your browser, navigate to `localhost:3000`.

- If you enabled OAuth connectors, you must sign in to your OAuth provider before being redirected to your OpenRAG instance.

- Two additional variables are available for Advanced Setup at this point. Only change these variables if you have a non-default network configuration for your deployment, such as a reverse proxy or custom domain.

  - `LANGFLOW_PUBLIC_URL`: Sets the base address to access the Langflow web interface. This is where users interact with flows in a browser.
  - `WEBHOOK_BASE_URL`: Sets the base address of the OpenRAG OAuth connector endpoint. Supported webhook endpoints:
    - Amazon S3: Not applicable.
    - Google Drive: `/connectors/google_drive/webhook`
    - OneDrive: `/connectors/onedrive/webhook`
    - SharePoint: `/connectors/sharepoint/webhook`

- Continue with application onboarding.
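As an illustration of how `WEBHOOK_BASE_URL` combines with the connector paths, the full Google Drive endpoint is the base URL plus the webhook path. The domain below is a placeholder, not a real OpenRAG default:

```shell
# Compose a full webhook endpoint from a base URL and a connector path.
# "https://openrag.example.com" is a hypothetical WEBHOOK_BASE_URL value.
WEBHOOK_BASE_URL="https://openrag.example.com"
echo "${WEBHOOK_BASE_URL}/connectors/google_drive/webhook"
```

This prints `https://openrag.example.com/connectors/google_drive/webhook`, which is the kind of URL you register with your OAuth provider.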
Application onboarding
The first time you start OpenRAG, regardless of how you installed it, you must complete application onboarding.
Some of these variables, such as the embedding models, can be changed seamlessly after onboarding. Others are immutable and require you to destroy and recreate the OpenRAG containers. For more information, see Environment variables.
You can use different providers for your language model and embedding model, such as Anthropic for the language model and OpenAI for the embeddings model. Additionally, you can set multiple embedding models.
You only need to complete onboarding for your preferred providers.
- Anthropic
- OpenAI
- IBM watsonx.ai
- Ollama
Anthropic doesn't provide embedding models. If you select Anthropic for your language model, you must select a different provider for embeddings.
- Enable Use environment Anthropic API key to automatically use your key from the `.env` file. Alternatively, paste an Anthropic API key into the field.
- Under Advanced settings, select your Language Model.
- Click Complete.
- In the second onboarding panel, select a provider for embeddings and select your Embedding Model.
- To complete the onboarding tasks, click What is OpenRAG, and then click Add a Document. Alternatively, click Skip overview.
- Continue with the Quickstart.
- Enable Get API key from environment variable to automatically enter your key from the TUI-generated `.env` file. Alternatively, paste an OpenAI API key into the field.
- Under Advanced settings, select your Language Model.
- Click Complete.
- In the second onboarding panel, select a provider for embeddings and select your Embedding Model.
- To complete the onboarding tasks, click What is OpenRAG, and then click Add a Document. Alternatively, click Skip overview.
- Continue with the Quickstart.
- Complete the fields for watsonx.ai API Endpoint, IBM Project ID, and IBM API key. These values are found in your IBM watsonx deployment.
- Under Advanced settings, select your Language Model.
- Click Complete.
- In the second onboarding panel, select a provider for embeddings and select your Embedding Model.
- To complete the onboarding tasks, click What is OpenRAG, and then click Add a Document. Alternatively, click Skip overview.
- Continue with the Quickstart.
Ollama isn't installed with OpenRAG. To install Ollama, see the Ollama documentation.
- To connect to an Ollama server running on your local machine, enter your Ollama server's base URL address.

  The default Ollama server address is `http://localhost:11434`. OpenRAG connects to the Ollama server and populates the model lists with the server's available models.

- Select the Embedding Model and Language Model your Ollama server is running.
Ollama model selection and external server configuration
Using Ollama for your OpenRAG language model provider offers greater flexibility and configuration, but can also be overwhelming to start. These recommendations are a reasonable starting point for users with at least one GPU and experience running LLMs locally.
For best performance, OpenRAG recommends OpenAI's `gpt-oss:20b` language model. However, this model uses 16 GB of RAM, so consider using Ollama Cloud or running Ollama on a remote machine.

For generating embeddings, OpenRAG recommends the `nomic-embed-text` embedding model, which provides high-quality embeddings optimized for retrieval tasks.

To run models in Ollama Cloud, follow these steps:
- Sign in to Ollama Cloud. In a terminal, enter `ollama signin` to connect your local environment with Ollama Cloud.
- To run the model, in Ollama, select the `gpt-oss:20b-cloud` model, or run `ollama run gpt-oss:20b-cloud` in a terminal. Ollama Cloud models run at the same URL as your local Ollama server, `http://localhost:11434`, and are automatically offloaded to Ollama's cloud service.
- Connect OpenRAG to the same local Ollama server as you would for local models in onboarding, using the default address of `http://localhost:11434`.
- In the Language model field, select the `gpt-oss:20b-cloud` model.
To run models on a remote Ollama server, follow these steps:
- Ensure your remote Ollama server is accessible from your OpenRAG instance.
- In the Ollama Base URL field, enter your remote Ollama server's base URL, such as `http://your-remote-server:11434`. OpenRAG connects to the remote Ollama server and populates the lists with the server's available models.
- Select your Embedding model and Language model from the available options.
- Click Complete.
- To complete the onboarding tasks, click What is OpenRAG, and then click Add a Document.
- Continue with the Quickstart.
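If onboarding can't find your Ollama models, a quick sanity check is to query the server's model list directly. The `/api/tags` endpoint is part of Ollama's HTTP API; the address shown assumes the default local server:

```shell
# Query the Ollama server's model list; print a fallback message if the
# server isn't running at the default address.
curl -sf http://localhost:11434/api/tags || echo "Ollama server not reachable"
```

For a remote server, substitute your server's base URL for `http://localhost:11434`.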
Exit the OpenRAG TUI
To exit the OpenRAG TUI, navigate to the main menu, and then press q. The OpenRAG containers continue to run until they are stopped. For more information, see Manage OpenRAG containers with the TUI.
To relaunch the TUI, run `uv run openrag`.
If you installed OpenRAG with `uvx`, run `uvx openrag`.
Manage OpenRAG containers with the TUI
After installation, the TUI can deploy, manage, and upgrade your OpenRAG containers.
Diagnostics
The Diagnostics menu provides health monitoring for your container runtimes and monitoring of your OpenSearch security.
Status
The Status menu displays information on your container deployment. Here you can check container health, find your service ports, view logs, and upgrade your containers.
- Logs: To view streaming logs, select the container you want to view, and press l. To copy the logs, click Copy to Clipboard.
- Upgrade: Check for updates. For more information, see Upgrade OpenRAG.
- Factory Reset: A destructive action that resets your containers.
- Native services: View and manage OpenRAG services that run directly on your local machine instead of in a container.
Reset containers
Reset your OpenRAG deployment by recreating the containers and removing some related data.
This is a destructive action that destroys the following:
- All OpenRAG containers, volumes, and local images
- Any additional Docker objects
- The contents of OpenRAG's `config` and `./opensearch-data` directories
- The `conversations.json` file
This operation doesn't remove the `.env` file or the contents of the `./openrag-documents` directory.
- To destroy and recreate your OpenRAG containers, go to the TUI Status menu, and then click Factory Reset.

  This function deletes the contents of OpenRAG's `config` and `./opensearch-data` directories and runs the following commands:

  ```shell
  docker compose down --volumes --remove-orphans --rmi local
  docker system prune -f
  ```

- If you reset your containers as part of reinstalling OpenRAG, continue the reinstallation process after resetting the containers. Otherwise, in the TUI Setup menu, repeat the setup process to start the services and launch the OpenRAG app. Your OpenRAG passwords, OAuth credentials (if previously set), and onboarding configuration are restored from the `.env` file.
Start all services
Through the TUI, you can view and manage OpenRAG services that run in containers and directly on your local machine.
Start containers
On the TUI main page or the Setup menu, click Start All Services to start the OpenRAG containers and launch OpenRAG itself.
When you start all services, the following processes happen:
- OpenRAG automatically detects your container runtime, and then checks if your machine has compatible GPU support by checking for `CUDA`, `NVIDIA_SMI`, and Docker/Podman runtime support. This check determines which Docker Compose file OpenRAG uses.
- OpenRAG pulls the OpenRAG container images with `docker compose pull` if any images are missing.
- OpenRAG deploys the containers with `docker compose up -d`.
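The GPU check can be approximated with a quick shell test of your own. This is a sketch of the idea, not OpenRAG's exact detection logic:

```shell
# Rough equivalent of the compose-file selection: if nvidia-smi is on PATH,
# assume GPU support; otherwise fall back to the CPU-only deployment.
if command -v nvidia-smi >/dev/null 2>&1; then
  echo "GPU detected: GPU compose file"
else
  echo "No GPU detected: CPU-only compose file"
fi
```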
Start native services (Docling)
A native service in OpenRAG is a service that runs locally on your machine, not within a container. For example, the `docling serve` process is an OpenRAG native service because this document processing service runs on your local machine, separate from the OpenRAG containers.
From the Status menu, you can view the status, port, and process ID (PID) of the OpenRAG native services. You can also click Stop or Restart to stop and start OpenRAG native services.
Upgrade OpenRAG
To upgrade OpenRAG, upgrade the OpenRAG Python package, and then upgrade the OpenRAG containers.
This is a two-part process because upgrading the OpenRAG Python package updates the TUI and Python code, but the container versions are controlled by environment variables in your `.env` file.
- Stop your OpenRAG containers: In the OpenRAG TUI, go to the Status menu, and then click Stop Services.

- Upgrade the OpenRAG Python package to the latest version from PyPI.
- Automatic installer or uvx
- Python project (uv add)
- Virtual environment (uv pip install)
Use these steps to upgrade the Python package if you installed OpenRAG using the automatic installer or `uvx`:

- Navigate to your OpenRAG workspace directory:

  ```shell
  cd openrag-workspace
  ```

- Upgrade the OpenRAG package:

  ```shell
  uvx --from openrag openrag
  ```

  To upgrade to a specific version:

  ```shell
  uvx --from openrag==0.1.33 openrag
  ```
Use these steps to upgrade the Python package if you installed OpenRAG in a Python project with `uv add`:

- Navigate to your project directory:

  ```shell
  cd YOUR_PROJECT_NAME
  ```

- Update OpenRAG to the latest version:

  ```shell
  uv add --upgrade openrag
  ```

  To upgrade to a specific version:

  ```shell
  uv add --upgrade openrag==0.1.33
  ```

- Start the OpenRAG TUI:

  ```shell
  uv run openrag
  ```
Use these steps to upgrade the Python package if you installed OpenRAG in a virtual environment with `uv pip install`:

- Activate your virtual environment.

- Upgrade OpenRAG:

  ```shell
  uv pip install --upgrade openrag
  ```

  To upgrade to a specific version:

  ```shell
  uv pip install --upgrade openrag==0.1.33
  ```

- Start the OpenRAG TUI:

  ```shell
  uv run openrag
  ```
- Start the upgraded OpenRAG containers: In the OpenRAG TUI, click Start All Services, and then wait while the containers start.

  After upgrading the Python package, OpenRAG runs `docker compose pull` to get the container images matching the version specified in your OpenRAG `.env` file. Then, it recreates the containers with the new images using `docker compose up -d --force-recreate`.

  In the `.env` file, the `OPENRAG_VERSION` environment variable is set to `latest` by default, which pulls the latest available container images. To pin a specific container image version, set `OPENRAG_VERSION` to the desired version, such as `OPENRAG_VERSION=0.1.33`.

  However, when you upgrade the Python package, OpenRAG automatically attempts to keep `OPENRAG_VERSION` synchronized with the Python package version. You might need to edit the `.env` file after upgrading the Python package to enforce a different container version. The TUI warns you if it detects a version mismatch.

  If you get a `langflow container already exists` error during upgrade, see Langflow container already exists during upgrade.

- When the upgrade process is complete, you can close the Status window and continue using OpenRAG.
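Pinning `OPENRAG_VERSION` can be sketched as a one-line edit of the `.env` file. The variable name comes from this guide; the `sed` approach and sample file name are just one option, and in practice you would edit the real `.env` in your OpenRAG workspace:

```shell
# Create a sample .env and pin the container image version in place.
printf 'OPENRAG_VERSION=latest\n' > .env.sample
sed -i 's/^OPENRAG_VERSION=.*/OPENRAG_VERSION=0.1.33/' .env.sample
cat .env.sample
```

After the edit, the file contains `OPENRAG_VERSION=0.1.33`, so the next Start All Services run pulls that container image version.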
Reinstall OpenRAG
Reset your OpenRAG deployment by recreating the containers and, optionally, removing related data:
- In the TUI, reset your containers to destroy the following:

  - All existing OpenRAG containers, volumes, and local images
  - Any additional Docker objects
  - The contents of OpenRAG's `config` and `./opensearch-data` directories
  - The `conversations.json` file
- Optional: Remove data that wasn't deleted by the Factory Reset operation. For a completely fresh installation, delete all of this data.

  - OpenRAG's `.env` file: Contains your OpenRAG configuration, including OpenRAG passwords, API keys, OAuth settings, and other environment variables. If you delete this file, you must either repeat the setup process to create a new `.env` file, or add a populated `.env` file to your OpenRAG installation directory before restarting OpenRAG.
  - The contents of the `./openrag-documents` directory: Contains documents that you uploaded to OpenRAG. Delete these files to prevent documents from being reingested into your knowledge base after restarting OpenRAG. However, you might want to preserve OpenRAG's default documents.
- In the TUI Setup menu, repeat the setup process to configure OpenRAG, restart the services, and launch the OpenRAG app, and then repeat application onboarding. If OpenRAG detects a `.env` file, it automatically populates any OpenRAG passwords, OAuth credentials, and onboarding configuration set in that file.