partials for common install content

This commit is contained in:
April M 2025-12-04 22:10:16 -08:00
parent 951fd4b176
commit 15e3b99da0
19 changed files with 790 additions and 598 deletions


@@ -0,0 +1,5 @@
## Next steps
* Try some of OpenRAG's core features in the [quickstart](/quickstart#chat-with-documents).
* Learn how to [manage OpenRAG services](/manage-services).
* [Upload documents](/ingestion), and then use the [**Chat**](/chat) to explore your data.


@@ -5,7 +5,7 @@ import PartialOllama from '@site/docs/_partial-ollama.mdx';
## Application onboarding
The first time you start OpenRAG, regardless of how you installed it, you must complete application onboarding.
The first time you start the OpenRAG application, you must complete application onboarding to select language and embedding models that are essential for OpenRAG features like the [**Chat**](/chat).
Some of these variables, such as the embedding models, can be changed seamlessly after onboarding.
Others are immutable and require you to destroy and recreate the OpenRAG containers.


@@ -0,0 +1,12 @@
- Gather the credentials and connection details for your preferred model providers.
- OpenAI: Create an [OpenAI API key](https://platform.openai.com/api-keys).
- Anthropic language models: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference).
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
- Ollama: Use the [Ollama documentation](https://docs.ollama.com/) to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
You must have access to at least one language model and one embedding model.
If your chosen provider offers both types, you can use the same provider for both models.
If your provider offers only one type, such as Anthropic (language models only), you must use two providers.
- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
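To decide between the GPU and CPU-only deployments, you can quickly check the host for an NVIDIA driver. This is a minimal sketch that assumes the driver install provides the standard `nvidia-smi` tool:

```shell
# Detect whether an NVIDIA driver is available on this host.
# nvidia-smi ships with the NVIDIA driver; if it's absent, the
# CPU-only deployment is likely the right choice.
if command -v nvidia-smi >/dev/null 2>&1; then
  echo "NVIDIA driver detected: the GPU deployment is possible"
else
  echo "No NVIDIA driver found: use the CPU-only deployment"
fi
```

Note that a present driver doesn't guarantee CUDA compatibility; confirm your CUDA and driver versions against the NVIDIA documentation.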


@@ -0,0 +1,6 @@
- Install [uv](https://docs.astral.sh/uv/getting-started/installation/).
- Install [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/).
- Install [Podman Compose](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/).
To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
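One way to set up that aliasing is with shell aliases, shown here for Bash or Zsh (the file paths are assumptions; adjust for your shell):

```shell
# Hypothetical aliases for ~/.bashrc or ~/.zshrc so that Docker and
# Docker Compose invocations run their Podman equivalents instead:
alias docker='podman'
alias docker-compose='podman-compose'
```

After adding the aliases, reload your shell configuration, for example with `source ~/.bashrc`.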


@@ -0,0 +1 @@
- Install [Python](https://www.python.org/downloads/) version 3.13 or later.


@@ -0,0 +1,2 @@
- For Microsoft Windows, you must use the Windows Subsystem for Linux (WSL).
See [Install OpenRAG on Windows](/install-windows) before proceeding.


@@ -0,0 +1,116 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
You can use either **Basic Setup** or **Advanced Setup** to configure OpenRAG.
This choice determines [how OpenRAG authenticates with OpenSearch and controls access to documents](/knowledge#auth).
:::info
You must use **Advanced Setup** if you want to [use OAuth connectors to upload documents from cloud storage](/ingestion#oauth-ingestion).
:::
If OpenRAG detects OAuth credentials during setup, it recommends **Advanced Setup** in the TUI.
<Tabs groupId="Setup method">
<TabItem value="Basic setup" label="Basic setup" default>
1. In the TUI, click **Basic Setup** or press <kbd>1</kbd>.
2. Enter administrator passwords for the OpenRAG OpenSearch and Langflow services, or click **Generate Passwords** to generate passwords automatically.
The OpenSearch password is required.
The Langflow password is optional.
If the Langflow password is empty, Langflow runs in [autologin mode](https://docs.langflow.org/api-keys-and-authentication#langflow-auto-login) without password authentication.
3. Optional: Enter your OpenAI API key, or leave this field empty if you want to configure model provider credentials later during application onboarding.
4. Click **Save Configuration**.
Your passwords and API key, if provided, are stored in the `.env` file in your OpenRAG installation directory.
If you modified any credentials that were pulled from an existing `.env` file, those values are updated in the `.env` file.
5. Click **Start All Services** to start the OpenRAG services that run in containers.
This process can take some time while OpenRAG pulls and runs the container images.
If all services start successfully, the TUI prints a confirmation message:
```text
Services started successfully
Command completed successfully
```
6. Under [**Native Services**](/manage-services), click **Start** to start the Docling service.
7. Launch the OpenRAG application:
* From the TUI main menu, click **Open App**.
* In your browser, navigate to `localhost:3000`.
8. Continue with [application onboarding](#application-onboarding).
</TabItem>
<TabItem value="Advanced setup" label="Advanced setup">
1. In the TUI, click **Advanced Setup** or press <kbd>2</kbd>.
2. Enter administrator passwords for the OpenRAG OpenSearch and Langflow services, or click **Generate Passwords** to generate passwords automatically.
The OpenSearch password is required.
The Langflow password is optional.
If the Langflow password is empty, Langflow runs in [autologin mode](https://docs.langflow.org/api-keys-and-authentication#langflow-auto-login) without password authentication.
3. Optional: Enter your OpenAI API key, or leave this field empty if you want to configure model provider credentials later during application onboarding.
4. To upload documents from external storage, such as Google Drive, add the required OAuth credentials for the connectors that you want to use. These settings are populated automatically if OpenRAG detects the credentials in a `.env` file in the OpenRAG installation directory.
* **Amazon**: Provide your AWS Access Key ID and AWS Secret Access Key with access to your S3 instance. For more information, see the AWS documentation on [Configuring access to AWS applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-applications.html).
* **Google**: Provide your Google OAuth Client ID and Google OAuth Client Secret. You can generate these in the [Google Cloud Console](https://console.cloud.google.com/apis/credentials). For more information, see the [Google OAuth client documentation](https://developers.google.com/identity/protocols/oauth2).
* **Microsoft**: For the Microsoft OAuth Client ID and Microsoft OAuth Client Secret, provide [Azure application registration credentials for SharePoint and OneDrive](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/app-registration?view=odsp-graph-online). For more information, see the [Microsoft Graph OAuth client documentation](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/graph-oauth).
You can [manage OAuth credentials](/ingestion#oauth-ingestion) later, but it's best to configure them during initial setup.
5. The OpenRAG TUI presents redirect URIs for your OAuth app.
These are the URLs that your OAuth provider redirects users to after sign-in.
Register these redirect URIs with your OAuth provider exactly as they are presented in the TUI.
6. Click **Save Configuration**.
Your passwords, API key (if provided), and OAuth credentials (if provided) are stored in the `.env` file in your OpenRAG installation directory.
If you modified any credentials that were pulled from an existing `.env` file, those values are updated in the `.env` file.
7. Click **Start All Services** to start the OpenRAG services that run in containers.
This process can take some time while OpenRAG pulls and runs the container images.
If all services start successfully, the TUI prints a confirmation message:
```text
Services started successfully
Command completed successfully
```
8. Under [**Native Services**](/manage-services), click **Start** to start the Docling service.
9. Launch the OpenRAG application:
* From the TUI main menu, click **Open App**.
* In your browser, navigate to `localhost:3000`.
10. If you enabled OAuth connectors, you must sign in to your OAuth provider before being redirected to your OpenRAG instance.
11. If needed, you can edit the following additional environment variables.
Change these variables only if your OpenRAG deployment has a non-default network configuration, such as a reverse proxy or custom domain.
* `LANGFLOW_PUBLIC_URL`: Sets the base address to access the Langflow web interface. This is where users interact with flows in a browser.
* `WEBHOOK_BASE_URL`: Sets the base address for the following OpenRAG OAuth connector endpoints:
- Amazon S3: Not applicable.
- Google Drive: `WEBHOOK_BASE_URL/connectors/google_drive/webhook`
- OneDrive: `WEBHOOK_BASE_URL/connectors/onedrive/webhook`
- SharePoint: `WEBHOOK_BASE_URL/connectors/sharepoint/webhook`
12. Continue with [application onboarding](#application-onboarding).
</TabItem>
</Tabs>
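For the non-default network configurations described in the advanced setup, a deployment served behind a reverse proxy might set these variables in the `.env` file (the domain shown is hypothetical):

```env
# Hypothetical reverse-proxy deployment at rag.example.com
LANGFLOW_PUBLIC_URL=https://rag.example.com/langflow
WEBHOOK_BASE_URL=https://rag.example.com
```

With this `WEBHOOK_BASE_URL`, the Google Drive connector endpoint resolves to `https://rag.example.com/connectors/google_drive/webhook`.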


@@ -64,26 +64,26 @@ To enable multiple connectors, you must register an app and generate credentials
<Tabs>
<TabItem value="TUI" label="TUI Advanced Setup" default>
If you use the TUI to manage your OpenRAG services, provide OAuth credentials in the **Advanced Setup**.
If you use the [Terminal User Interface (TUI)](/tui) to manage your OpenRAG services, enter OAuth credentials in the **Advanced Setup** menu.
You can do this during [installation](/install#setup), or you can add the credentials afterwards:
1. If OpenRAG is running, stop it: Go to [**Status**](/manage-services#tui-container-management), and then click **Stop Services**.
1. If OpenRAG is running, open the TUI's **Status** menu (<kbd>3</kbd>), and then click **Stop Services**.
2. Click **Advanced Setup**, and then add the OAuth credentials for the cloud storage providers that you want to use:
2. Open the **Advanced Setup** menu (<kbd>2</kbd>), and then add the OAuth credentials for the cloud storage providers that you want to use:
* **Amazon**: Provide your AWS Access Key ID and AWS Secret Access Key with access to your S3 instance. For more information, see the AWS documentation on [Configuring access to AWS applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-applications.html).
* **Google**: Provide your Google OAuth Client ID and Google OAuth Client Secret. You can generate these in the [Google Cloud Console](https://console.cloud.google.com/apis/credentials). For more information, see the [Google OAuth client documentation](https://developers.google.com/identity/protocols/oauth2).
* **Microsoft**: For the Microsoft OAuth Client ID and Microsoft OAuth Client Secret, provide [Azure application registration credentials for SharePoint and OneDrive](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/app-registration?view=odsp-graph-online). For more information, see the [Microsoft Graph OAuth client documentation](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/graph-oauth).
3. The OpenRAG TUI presents redirect URIs for your OAuth app that you must register with your OAuth provider.
3. The TUI presents redirect URIs for your OAuth app that you must register with your OAuth provider.
These are the URLs that your OAuth provider redirects users to after they authenticate and grant access to their cloud storage.
4. Click **Save Configuration**.
4. Click **Save Configuration** to add the OAuth credentials to your OpenRAG [`.env`](/reference/configuration) file.
OpenRAG regenerates the [`.env`](/reference/configuration) file with the given credentials.
5. Click **Start All Services** to restart the OpenRAG containers with OAuth enabled.
5. Click **Start All Services**.
6. Launch the OpenRAG app.
You should be prompted to sign in to your OAuth provider before being redirected to your OpenRAG instance.
</TabItem>
<TabItem value="env" label="Docker Compose .env file">
@@ -92,24 +92,19 @@ If you [installed OpenRAG with self-managed services](/docker), set OAuth creden
You can do this during [initial setup](/docker#install-openrag-with-docker-compose), or you can add the credentials afterwards:
1. Stop your OpenRAG deployment.
1. Stop your OpenRAG deployment:
<Tabs>
<TabItem value="podman" label="Podman">
* Docker:
```bash
podman stop --all
```
```bash
docker stop $(docker ps -q)
```
</TabItem>
<TabItem value="docker" label="Docker">
* Podman:
```bash
docker stop $(docker ps -q)
```
</TabItem>
</Tabs>
```bash
podman stop --all
```
2. Edit the `.env` file for Docker Compose to add the OAuth credentials for the cloud storage providers that you want to use:
@@ -138,22 +133,17 @@ You can do this during [initial set up](/docker#install-openrag-with-docker-comp
4. Restart your OpenRAG deployment:
<Tabs>
<TabItem value="podman" label="Podman">
* Docker:
```bash
podman-compose up -d
```
```bash
docker compose up -d
```
</TabItem>
<TabItem value="docker" label="Docker">
* Podman:
```bash
docker-compose up -d
```
</TabItem>
</Tabs>
```bash
podman compose up -d
```
</TabItem>
</Tabs>


@@ -10,7 +10,7 @@ import TabItem from '@theme/TabItem';
OpenRAG includes a built-in [OpenSearch](https://docs.opensearch.org/latest/) instance that serves as the underlying datastore for your _knowledge_ (documents).
This specialized database is used to store and retrieve your documents and the associated vector data (embeddings).
The documents in your OpenSearch knowledge base provide specialized context in addition to the general knowledge available to the language model that you select when you [install OpenRAG](/install) or [edit a flow](/agents).
The documents in your OpenSearch knowledge base provide specialized context in addition to the general knowledge available to the language model that you select when you [install OpenRAG](/install-options) or [edit a flow](/agents).
You can [upload documents](/ingestion) from a variety of sources to populate your knowledge base with unique content, such as your own company documents, research papers, or websites.
Documents are processed through OpenRAG's knowledge ingestion flows with Docling.
@@ -76,7 +76,7 @@ If needed, you can use [filters](/knowledge-filters) to separate documents that
### Set the embedding model and dimensions {#set-the-embedding-model-and-dimensions}
When you [install OpenRAG](/install), you select at least one embedding model during [application onboarding](/install#application-onboarding).
When you [install OpenRAG](/install-options), you select at least one embedding model during [application onboarding](/install#application-onboarding).
OpenRAG automatically detects and configures the appropriate vector dimensions for your selected embedding model, ensuring optimal search performance and compatibility.
In the OpenRAG repository, you can find the complete list of supported models in [`models_service.py`](https://github.com/langflow-ai/openrag/blob/main/src/services/models_service.py) and the corresponding vector dimensions in [`settings.py`](https://github.com/langflow-ai/openrag/blob/main/src/config/settings.py).
@@ -120,7 +120,7 @@ To modify the Docling ingestion and embedding parameters, click <Icon name="Sett
:::tip
OpenRAG warns you if `docling serve` isn't running.
You can [start and stop OpenRAG services](/manage-services#tui-container-management) from the TUI main menu with **Start Native Services** or **Stop Native Services**.
For information about starting and stopping OpenRAG native services, like Docling, see [Manage OpenRAG services](/manage-services).
:::
* **Embedding model**: Select the model to use to generate vector embeddings for your documents.


@@ -6,43 +6,25 @@ slug: /docker
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialOnboarding from '@site/docs/_partial-onboarding.mdx';
import PartialPrereqCommon from '@site/docs/_partial-prereq-common.mdx';
import PartialPrereqNoScript from '@site/docs/_partial-prereq-no-script.mdx';
import PartialPrereqWindows from '@site/docs/_partial-prereq-windows.mdx';
import PartialPrereqPython from '@site/docs/_partial-prereq-python.mdx';
import PartialInstallNextSteps from '@site/docs/_partial-install-next-steps.mdx';
To manage your own OpenRAG services, deploy OpenRAG with Docker or Podman.
Use this installation method if you don't want to [use the Terminal User Interface (TUI)](/tui), or if you need to run OpenRAG in an environment where the TUI isn't feasible.
OpenRAG has two Docker Compose files. Both files deploy the same services, but they are for different environments:
- [`docker-compose.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose.yml) is an OpenRAG deployment with GPU support for accelerated AI processing. This Docker Compose file requires an NVIDIA GPU with [CUDA](https://docs.nvidia.com/cuda/) support.
- [`docker-compose-cpu.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose-cpu.yml) is a CPU-only version of OpenRAG for systems without NVIDIA GPU support. Use this Docker Compose file for environments where GPU drivers aren't available.
## Prerequisites
- For Microsoft Windows, you must use the Windows Subsystem for Linux (WSL).
See [Install OpenRAG on Windows](/install-windows) before proceeding.
<PartialPrereqWindows />
- Install [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.
<PartialPrereqCommon />
- Install [uv](https://docs.astral.sh/uv/getting-started/installation/).
<PartialPrereqPython />
- Install [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/).
- Install [Podman Compose](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/).
To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
- Gather the credentials and connection details for your preferred model providers.
- OpenAI: Create an [OpenAI API key](https://platform.openai.com/api-keys).
- Anthropic language models: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference).
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
- Ollama: Use the [Ollama documentation](https://docs.ollama.com/) to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
You must have access to at least one language model and one embedding model.
If your chosen provider offers both types, you can use the same provider for both models.
If your provider offers only one type, such as Anthropic, you must select two providers.
- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
<PartialPrereqNoScript />
## Install OpenRAG with Docker Compose
@@ -76,7 +58,7 @@ To install OpenRAG with Docker Compose, do the following:
The following values are optional:
```bash
```env
OPENAI_API_KEY=your_openai_api_key
LANGFLOW_SECRET_KEY=your_secret_key
```
@@ -87,7 +69,7 @@ To install OpenRAG with Docker Compose, do the following:
The following Langflow configuration values are optional but important to consider:
```bash
```env
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=your_langflow_password
```
@@ -117,7 +99,12 @@ To install OpenRAG with Docker Compose, do the following:
PID: 27746
```
7. Deploy OpenRAG locally with Docker Compose based on your deployment type.
7. Deploy OpenRAG locally with the appropriate Docker Compose file for your environment.
Both files deploy the same services.
- [`docker-compose.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose.yml) is an OpenRAG deployment with GPU support for accelerated AI processing. This Docker Compose file requires an NVIDIA GPU with [CUDA](https://docs.nvidia.com/cuda/) support.
- [`docker-compose-cpu.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose-cpu.yml) is a CPU-only version of OpenRAG for systems without NVIDIA GPU support. Use this Docker Compose file for environments where GPU drivers aren't available.
<Tabs groupId="Compose file">
<TabItem value="docker-compose.yml" label="docker-compose.yml" default>
@@ -135,6 +122,8 @@ To install OpenRAG with Docker Compose, do the following:
</TabItem>
</Tabs>
<!-- add podman compose -->
The OpenRAG Docker Compose file starts five containers:
| Container Name | Default Address | Purpose |
|---|---|---|
@@ -146,6 +135,7 @@ To install OpenRAG with Docker Compose, do the following:
8. Verify the installation by confirming that all services are running:
<!-- add podman compose -->
```bash
docker compose ps
```
@@ -156,203 +146,8 @@ To install OpenRAG with Docker Compose, do the following:
- **Backend API**: http://localhost:8000
- **Langflow**: http://localhost:7860
9. Continue with [application onboarding](#application-onboarding).
To stop `docling serve` when you're done with your OpenRAG deployment, run:
```bash
uv run python scripts/docling_ctl.py stop
```
9. Access the OpenRAG frontend to continue with [application onboarding](#application-onboarding).
<PartialOnboarding />
## Container management commands
Manage your OpenRAG containers with the following commands.
### Upgrade containers
<!-- also on /upgrade -->
Upgrade your containers to the latest version while preserving your data.
```bash
docker compose pull
docker compose up -d --force-recreate
```
### Reset containers (destructive) {#reset-containers}
<!-- part of this needs to be on /reinstall and /uninstall -->
:::warning
These are destructive operations that reset your OpenRAG deployment to an initial state.
Be aware that data is lost and cannot be recovered after running these commands.
:::
<Tabs>
<TabItem value="docker-compose" label="Docker Compose" default>
* Rebuild containers: This command destroys and recreates the containers. Data stored exclusively on the containers is lost, such as Langflow flows.
The `.env` file, `config` directory, `./openrag-documents` directory, `./opensearch-data` directory, and the `conversations.json` file are preserved.
```bash
docker compose up --build --force-recreate --remove-orphans
```
* Destroy and recreate containers with the option for additional data removal: These commands destroy the containers, and then recreate them.
This allows you to delete other OpenRAG data before recreating the containers.
1. Destroy the containers, volumes, and local images, and then remove (prune) any additional Docker objects:
```bash
docker compose down --volumes --remove-orphans --rmi local
docker system prune -f
```
2. Optional: Remove data that wasn't deleted by the previous commands:
* OpenRAG's `.env` file
* The contents of OpenRAG's `config` directory
* The contents of the `./openrag-documents` directory
* The contents of the `./opensearch-data` directory
* The `conversations.json` file
3. Recreate the containers:
```bash
docker compose up -d
```
</TabItem>
<TabItem value="Podman-compose" label="Podman Compose">
* Rebuild containers: This command destroys and recreates the containers. Data stored exclusively on the containers is lost, such as Langflow flows.
The `.env` file, `config` directory, `./openrag-documents` directory, `./opensearch-data` directory, and the `conversations.json` file are preserved.
```bash
podman-compose up --build --force-recreate --remove-orphans
```
* Destroy and recreate containers with the option for additional data removal: These commands destroy the containers, and then recreate them.
This allows you to delete other OpenRAG data before recreating the containers.
1. Destroy the containers, volumes, and local images, and then remove (prune) any additional Podman objects:
```bash
podman-compose down --volumes --remove-orphans --rmi local
podman system prune -f
```
2. Optional: Remove data that wasn't deleted by the previous commands:
* OpenRAG's `.env` file
* The contents of OpenRAG's `config` directory
* The contents of the `./openrag-documents` directory
* The contents of the `./opensearch-data` directory
* The `conversations.json` file
3. Recreate the containers:
```bash
podman-compose up -d
```
</TabItem>
<TabItem value="docker" label="Docker">
1. Stop all running containers:
```bash
docker stop $(docker ps -q)
```
2. Remove all containers, including stopped containers:
```bash
docker rm --force $(docker ps -aq)
```
3. Remove all images:
```bash
docker rmi --force $(docker images -q)
```
4. Remove all volumes:
```bash
docker volume prune --force
```
5. Remove all networks except the default network:
```bash
docker network prune --force
```
6. Clean up any leftover data:
```bash
docker system prune --all --force --volumes
```
7. Optional: Remove data that wasn't deleted by the previous commands:
* OpenRAG's `.env` file
* The contents of OpenRAG's `config` directory
* The contents of the `./openrag-documents` directory
* The contents of the `./opensearch-data` directory
* The `conversations.json` file
</TabItem>
<TabItem value="podman" label="Podman">
1. Stop all running containers:
```bash
podman stop --all
```
2. Remove all containers, including stopped containers:
```bash
podman rm --all --force
```
3. Remove all images:
```bash
podman rmi --all --force
```
4. Remove all volumes:
```bash
podman volume prune --force
```
5. Remove all networks except the default network:
```bash
podman network prune --force
```
6. Clean up any leftover data:
```bash
podman system prune --all --force --volumes
```
7. Optional: Remove data that wasn't deleted by the previous commands:
* OpenRAG's `.env` file
* The contents of OpenRAG's `config` directory
* The contents of the `./openrag-documents` directory
* The contents of the `./opensearch-data` directory
* The `conversations.json` file
</TabItem>
</Tabs>
After resetting your containers, you must repeat [application onboarding](#application-onboarding).
<PartialInstallNextSteps />


@@ -3,6 +3,16 @@ title: Install OpenRAG in a Python project with uv
slug: /install-uv
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialOnboarding from '@site/docs/_partial-onboarding.mdx';
import PartialSetup from '@site/docs/_partial-setup.mdx';
import PartialPrereqCommon from '@site/docs/_partial-prereq-common.mdx';
import PartialPrereqNoScript from '@site/docs/_partial-prereq-no-script.mdx';
import PartialPrereqWindows from '@site/docs/_partial-prereq-windows.mdx';
import PartialPrereqPython from '@site/docs/_partial-prereq-python.mdx';
import PartialInstallNextSteps from '@site/docs/_partial-install-next-steps.mdx';
For guided configuration and simplified service management, install OpenRAG with services managed by the [Terminal User Interface (TUI)](/tui).
You can use [`uv`](https://docs.astral.sh/uv/getting-started/installation/) to install OpenRAG as a managed or unmanaged dependency in a new or existing Python project.
@@ -11,30 +21,13 @@ For other installation methods, see [Choose an installation method](/install-opt
## Prerequisites
- For Microsoft Windows, you must use the Windows Subsystem for Linux (WSL).
See [Install OpenRAG on Windows](/install-windows) before proceeding.
<PartialPrereqWindows />
- Install [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.
<PartialPrereqCommon />
- Install [uv](https://docs.astral.sh/uv/getting-started/installation/).
<PartialPrereqPython />
- Install [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/).
- Install [Podman Compose](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/).
To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
- Gather the credentials and connection details for your preferred model providers.
- OpenAI: Create an [OpenAI API key](https://platform.openai.com/api-keys).
- Anthropic language models: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference).
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
- Ollama: Use the [Ollama documentation](https://docs.ollama.com/) to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
You must have access to at least one language model and one embedding model.
If your chosen provider offers both types, you can use the same provider for both models.
If your provider offers only one type, such as Anthropic, you must select two providers.
- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
<PartialPrereqNoScript />
## Install and start OpenRAG with uv
@@ -108,14 +101,17 @@ If you encounter errors during installation, see [Troubleshoot OpenRAG](/support
uv run openrag
```
## Set up OpenRAG with the TUI
## Set up OpenRAG with the TUI {#setup}
When the Terminal User Interface (TUI) starts, you must complete the initial setup to configure OpenRAG.
When you install OpenRAG with `uv`, you manage the OpenRAG services with the Terminal User Interface (TUI).
The TUI guides you through the initial configuration process before you start the OpenRAG services.
![OpenRAG TUI Interface](@site/static/img/OpenRAG_TUI_2025-09-10T13_04_11_757637.svg)
Your [OpenRAG configuration](/reference/configuration) is stored in a `.env` file that is created automatically in the Python project where you installed OpenRAG.
If OpenRAG detects an existing `.env` file, the TUI automatically populates those values during setup and onboarding.
Container definitions are stored in a `docker-compose.yml` file in the same directory.
## Next steps
<PartialSetup />
* Try some of OpenRAG's core features in the [quickstart](/quickstart#chat-with-documents).
* Learn how to [manage OpenRAG services](/manage-services).
* [Upload documents](/ingestion), and then use the [**Chat**](/chat) to explore your data.
<PartialOnboarding />
<PartialInstallNextSteps />


@@ -3,6 +3,16 @@ title: Invoke OpenRAG with uvx
slug: /install-uvx
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialOnboarding from '@site/docs/_partial-onboarding.mdx';
import PartialSetup from '@site/docs/_partial-setup.mdx';
import PartialPrereqCommon from '@site/docs/_partial-prereq-common.mdx';
import PartialPrereqNoScript from '@site/docs/_partial-prereq-no-script.mdx';
import PartialPrereqWindows from '@site/docs/_partial-prereq-windows.mdx';
import PartialPrereqPython from '@site/docs/_partial-prereq-python.mdx';
import PartialInstallNextSteps from '@site/docs/_partial-install-next-steps.mdx';
For guided configuration and simplified service management, install OpenRAG with services managed by the [Terminal User Interface (TUI)](/tui).
You can use [`uvx`](https://docs.astral.sh/uv/guides/tools/#running-tools) to invoke OpenRAG outside of a Python project or without modifying your project's dependencies.
@@ -16,30 +26,13 @@ For other installation methods, see [Choose an installation method](/install-opt
## Prerequisites
- For Microsoft Windows, you must use the Windows Subsystem for Linux (WSL).
See [Install OpenRAG on Windows](/install-windows) before proceeding.
<PartialPrereqWindows />
- Install [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.
<PartialPrereqCommon />
- Install [uv](https://docs.astral.sh/uv/getting-started/installation/).
<PartialPrereqPython />
- Install [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/).
- Install [Podman Compose](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/).
To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
- Gather the credentials and connection details for your preferred model providers.
- OpenAI: Create an [OpenAI API key](https://platform.openai.com/api-keys).
- Anthropic language models: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference).
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
- Ollama: Use the [Ollama documentation](https://docs.ollama.com/) to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
You must have access to at least one language model and one embedding model.
If your chosen provider offers both types, you can use the same provider for both models.
If your provider offers only one type, such as Anthropic, you must select two providers.
- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
<PartialPrereqNoScript />
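If you pair Podman with tooling that expects Docker commands, the aliasing mentioned in the prerequisites can be sketched as follows (assumption: a bash-compatible shell; adapt the rc file to your shell):

```shell
# Route Docker CLI and Docker Compose invocations to Podman.
# Add these lines to ~/.bashrc or ~/.zshrc to make them persistent.
alias docker=podman
alias docker-compose='podman compose'

# Print the recorded alias to confirm it took effect:
alias docker
```

With these aliases in place, scripts that call `docker compose up` transparently run `podman compose up`.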
## Install and run OpenRAG with uvx
@@ -71,17 +64,20 @@ To use Docker Compose with Podman, you must alias Docker Compose commands to Pod
If you encounter errors during installation, see [Troubleshoot OpenRAG](/support/troubleshoot).
## Set up OpenRAG with the TUI
## Set up OpenRAG with the TUI {#setup}
When the Terminal User Interface (TUI) starts, you must complete the initial setup to configure OpenRAG.
When you install OpenRAG with `uvx`, you manage the OpenRAG services with the Terminal User Interface (TUI).
The TUI guides you through the initial configuration process before you start the OpenRAG services.
![OpenRAG TUI Interface](@site/static/img/OpenRAG_TUI_2025-09-10T13_04_11_757637.svg)
Your [OpenRAG configuration](/reference/configuration) is stored in a `.env` file that is created automatically in the OpenRAG installation directory.
If OpenRAG detects an existing `.env` file, the TUI automatically populates those values during setup and onboarding.
The OpenRAG setup process creates the `.env` and `docker-compose.yml` files in the directory where you invoked OpenRAG.
If it detects a `.env` file in the OpenRAG installation directory, it sources any variables from that file.
Container definitions are stored in a `docker-compose.yml` file in the OpenRAG installation directory.
## Next steps
With `uvx`, the OpenRAG `.env` and `docker-compose.yml` files are stored in the directory where you invoked OpenRAG.
* Try some of OpenRAG's core features in the [quickstart](/quickstart#chat-with-documents).
* Learn how to [manage OpenRAG services](/manage-services).
* [Upload documents](/ingestion), and then use the [**Chat**](/chat) to explore your data.
<PartialSetup />
<PartialOnboarding />
<PartialInstallNextSteps />


@@ -6,6 +6,11 @@ slug: /install
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialOnboarding from '@site/docs/_partial-onboarding.mdx';
import PartialSetup from '@site/docs/_partial-setup.mdx';
import PartialPrereqCommon from '@site/docs/_partial-prereq-common.mdx';
import PartialPrereqWindows from '@site/docs/_partial-prereq-windows.mdx';
import PartialPrereqPython from '@site/docs/_partial-prereq-python.mdx';
import PartialInstallNextSteps from '@site/docs/_partial-install-next-steps.mdx';
:::tip
For a fully guided installation and preview of OpenRAG's core features, try the [quickstart](/quickstart).
@@ -20,23 +25,11 @@ For other installation methods, see [Choose an installation method](/install-opt
## Prerequisites
- For Microsoft Windows, you must use the Windows Subsystem for Linux (WSL).
See [Install OpenRAG on Windows](/install-windows) before proceeding.
<PartialPrereqWindows />
- Install [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.
<PartialPrereqCommon />
- Gather the credentials and connection details for your preferred model providers.
- OpenAI: Create an [OpenAI API key](https://platform.openai.com/api-keys).
- Anthropic language models: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference).
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
- Ollama: Use the [Ollama documentation](https://docs.ollama.com/) to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
You must have access to at least one language model and one embedding model.
If your chosen provider offers both types, you can use the same provider for both models.
If your provider offers only one type, such as Anthropic, you must select two providers.
- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
<PartialPrereqPython />
## Run the installer script {#install}
@@ -89,126 +82,8 @@ Container definitions are stored in a `docker-compose.yml` file in the OpenRAG i
Because the installer script uses `uvx`, the OpenRAG `.env` and `docker-compose.yml` files are stored in the directory where you ran the installer script.
You can use either **Basic Setup** or **Advanced Setup** to configure OpenRAG.
This choice determines [how OpenRAG authenticates with OpenSearch and controls access to documents](/knowledge#auth).
:::info
You must use **Advanced Setup** if you want to [use OAuth connectors to upload documents from cloud storage](/ingestion#oauth-ingestion).
:::
If OpenRAG detects OAuth credentials during setup, it recommends **Advanced Setup** in the TUI.
<Tabs groupId="Setup method">
<TabItem value="Basic setup" label="Basic setup" default>
1. In the TUI, click **Basic Setup** or press <kbd>1</kbd>.
2. Enter administrator passwords for the OpenRAG OpenSearch and Langflow services, or click **Generate Passwords** to generate passwords automatically.
The OpenSearch password is required.
The Langflow password is optional.
If the Langflow password is empty, Langflow runs in [autologin mode](https://docs.langflow.org/api-keys-and-authentication#langflow-auto-login) without password authentication.
3. Optional: Enter your OpenAI API key, or leave this field empty if you want to configure model provider credentials later during application onboarding.
4. Click **Save Configuration**.
Your passwords and API key, if provided, are stored in the `.env` file in your OpenRAG installation directory.
If you modified any credentials that were pulled from an existing `.env` file, those values are updated in the `.env` file.
5. Click **Start All Services** to start the OpenRAG services that run in containers.
This process can take some time while OpenRAG pulls and runs the container images.
If all services start successfully, the TUI prints a confirmation message:
```text
Services started successfully
Command completed successfully
```
6. Under [**Native Services**](/manage-services), click **Start** to start the Docling service.
7. Launch the OpenRAG application:
* From the TUI main menu, click **Open App**.
* In your browser, navigate to `localhost:3000`.
8. Continue with [application onboarding](#application-onboarding).
</TabItem>
<TabItem value="Advanced setup" label="Advanced setup">
1. In the TUI, click **Advanced Setup** or press <kbd>2</kbd>.
2. Enter administrator passwords for the OpenRAG OpenSearch and Langflow services, or click **Generate Passwords** to generate passwords automatically.
The OpenSearch password is required.
The Langflow password is optional.
If the Langflow password is empty, Langflow runs in [autologin mode](https://docs.langflow.org/api-keys-and-authentication#langflow-auto-login) without password authentication.
3. Optional: Enter your OpenAI API key, or leave this field empty if you want to configure model provider credentials later during application onboarding.
4. To upload documents from external storage, such as Google Drive, add the required OAuth credentials for the connectors that you want to use. These settings can be populated automatically if OpenRAG detects these credentials in a `.env` file in the OpenRAG installation directory.
* **Amazon**: Provide your AWS Access Key ID and AWS Secret Access Key with access to your S3 instance. For more information, see the AWS documentation on [Configuring access to AWS applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-applications.html).
* **Google**: Provide your Google OAuth Client ID and Google OAuth Client Secret. You can generate these in the [Google Cloud Console](https://console.cloud.google.com/apis/credentials). For more information, see the [Google OAuth client documentation](https://developers.google.com/identity/protocols/oauth2).
* **Microsoft**: For the Microsoft OAuth Client ID and Microsoft OAuth Client Secret, provide [Azure application registration credentials for SharePoint and OneDrive](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/app-registration?view=odsp-graph-online). For more information, see the [Microsoft Graph OAuth client documentation](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/graph-oauth).
You can [manage OAuth credentials](/ingestion#oauth-ingestion) later, but it is recommended to configure them during initial setup.
5. The OpenRAG TUI presents redirect URIs for your OAuth app.
These are the URLs your OAuth provider will redirect back to after user sign-in.
Register these redirect URIs with your OAuth provider exactly as they are presented in the TUI.
6. Click **Save Configuration**.
Your passwords, API key (if provided), and OAuth credentials (if provided) are stored in the `.env` file in your OpenRAG installation directory.
If you modified any credentials that were pulled from an existing `.env` file, those values are updated in the `.env` file.
7. Click **Start All Services** to start the OpenRAG services that run in containers.
This process can take some time while OpenRAG pulls and runs the container images.
If all services start successfully, the TUI prints a confirmation message:
```text
Services started successfully
Command completed successfully
```
8. Under [**Native Services**](/manage-services), click **Start** to start the Docling service.
9. Launch the OpenRAG application:
* From the TUI main menu, click **Open App**.
* In your browser, navigate to `localhost:3000`.
10. If you enabled OAuth connectors, you must sign in to your OAuth provider before being redirected to your OpenRAG instance.
11. If required, you can edit the following additional environment variables.
Only change these variables if your OpenRAG deployment has a non-default network configuration, such as a reverse proxy or custom domain.
* `LANGFLOW_PUBLIC_URL`: Sets the base address to access the Langflow web interface. This is where users interact with flows in a browser.
* `WEBHOOK_BASE_URL`: Sets the base address for the following OpenRAG OAuth connector endpoints:
- Amazon S3: Not applicable.
- Google Drive: `WEBHOOK_BASE_URL/connectors/google_drive/webhook`
- OneDrive: `WEBHOOK_BASE_URL/connectors/onedrive/webhook`
- SharePoint: `WEBHOOK_BASE_URL/connectors/sharepoint/webhook`
12. Continue with [application onboarding](#application-onboarding).
</TabItem>
</Tabs>
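Both setup paths ask for administrator passwords in step 2. If you prefer to generate passwords outside the TUI, a one-liner like the following works (assumption: `openssl` is installed; any generator of similar strength is fine):

```shell
# Generate a random 24-byte password encoded as base64 (32 characters).
# Run once per password you need, then paste the value into the TUI.
openssl rand -base64 24
```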
The first time you start the OpenRAG application, you must complete application onboarding to select language and embedding models that are essential for OpenRAG features like the [**Chat**](/chat).
<PartialSetup />
<PartialOnboarding />
## Next steps
* Try some of OpenRAG's core features in the [quickstart](/quickstart#chat-with-documents).
* Learn how to [manage OpenRAG services](/manage-services).
* [Upload documents](/ingestion), and then use the [**Chat**](/chat) to explore your data.
<PartialInstallNextSteps />


@@ -6,97 +6,182 @@ slug: /manage-services
Service management is an essential part of maintaining your OpenRAG deployment.
Most OpenRAG services run in containers.
However, some services, like Docling, run directly on the local machine.
_Native services_, like Docling, run directly on the local machine where you installed OpenRAG.
If you [installed OpenRAG](/install-options) with the automated installer script, `uv`, or `uvx`, you can use the [Terminal User Interface (TUI)](/tui) to manage your OpenRAG configuration and services.
## Manage containers and services with the TUI {#tui-container-management}
For [self-managed deployments](/docker), run Docker or Podman commands to manage your OpenRAG services.
If you [installed OpenRAG](/install-options) with the automated installer script, `uv`, or `uvx`, you can use the [Terminal User Interface (TUI)](/tui) to manage your services:
## Monitor services
* Start and stop OpenRAG container-based services.
* Start and stop OpenRAG's native services (Docling).
* View the status of your OpenRAG services.
* Access container logs for troubleshooting.
* Upgrade your OpenRAG containers to the latest version.
* Reset your OpenRAG containers to an initial state (destructive).
### Diagnostics
The **Diagnostics** menu provides health monitoring for your container runtimes and your OpenSearch security configuration.
### Status {#status}
The TUI's **Status** menu provides information about your OpenRAG services, including health, ports, logs, and controls:
* **Logs**: To view streaming logs, select the container you want to view, and press <kbd>l</kbd>.
* **TUI Status menu**: In the **Status** menu (<kbd>3</kbd>), you can access streaming logs for all OpenRAG services.
Select the service you want to view, and then press <kbd>l</kbd>.
To copy the logs, click **Copy to Clipboard**.
* **Upgrade**: Check for updates to OpenRAG. For more information, see [upgrade OpenRAG](/upgrade).
* **TUI Diagnostics menu**: The TUI's **Diagnostics** menu (<kbd>4</kbd>) provides health monitoring for your container runtimes and your OpenSearch instance.
* **Factory Reset**: This is a destructive action that [resets your containers](#reset-containers).
* **Self-managed containers**: Get container logs with [`docker compose logs`](https://docs.docker.com/reference/cli/docker/compose/logs/) or [`podman logs`](https://docs.podman.io/en/latest/markdown/podman-logs.1.html).
* **Native services**: [View and manage OpenRAG's native services](#start-all-services) that run directly on the local machine instead of a container.
* **Docling**: See [Stop, start, and inspect native services](#start-native-services).
### Reset containers {#reset-containers}
## Stop and start containers
* **TUI**: In the TUI's **Status** menu (<kbd>3</kbd>), click **Stop Services** to stop all OpenRAG container-based services.
Click **Start All Services** to restart the OpenRAG containers.
This function triggers the following processes:
1. OpenRAG automatically detects your container runtime, and then checks if your machine has compatible GPU support by checking for `CUDA`, `NVIDIA_SMI`, and Docker/Podman runtime support. This check determines which Docker Compose file OpenRAG uses because there are separate Docker Compose files for GPU and CPU deployments.
2. OpenRAG pulls the OpenRAG container images with `docker compose pull` if any images are missing.
3. OpenRAG deploys the containers with `docker compose up -d`.
* **Self-managed containers**: Use [`docker compose down`](https://docs.docker.com/reference/cli/docker/compose/down/) and [`docker compose up -d`](https://docs.docker.com/reference/cli/docker/compose/up/).
To stop or start individual containers, use targeted commands like `docker stop CONTAINER_ID` and `docker start CONTAINER_ID`.
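The runtime and GPU detection that runs before the containers start can be reproduced by hand. This sketch only reports what is on your `PATH`; OpenRAG's internal checks may differ:

```shell
# Report which container runtimes and GPU tooling are available.
# Each check prints yes/no instead of failing, so it is safe to run anywhere.
for tool in docker podman nvidia-smi; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: yes"
  else
    echo "$tool: no"
  fi
done
```

If `nvidia-smi` reports `no`, expect OpenRAG to fall back to the CPU-only Docker Compose file.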
## Stop, start, and inspect native services (Docling) {#start-native-services}
A _native service_ in OpenRAG is a service that runs locally on your machine, not within a container. For example, the `docling serve` process is an OpenRAG native service because this document processing service runs on your local machine, separate from the OpenRAG containers.
* **TUI**: From the TUI's **Status** menu (<kbd>3</kbd>), click **Native Services** to do the following:
* View the service's status, port, and process ID (PID).
* Stop, start, and restart native services.
* **Self-managed services**: Because the Docling service doesn't run in a container, you must start and stop it manually on the host machine:
* Stop `docling serve`:
```bash
uv run python scripts/docling_ctl.py stop
```
* Start `docling serve`:
```bash
uv run python scripts/docling_ctl.py start --port 5001
```
* Check that `docling serve` is running:
```bash
uv run python scripts/docling_ctl.py status
```
If `docling serve` is running, the output includes the status, address, and process ID (PID):
```text
Status: running
Endpoint: http://127.0.0.1:5001
Docs: http://127.0.0.1:5001/docs
PID: 27746
```
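If you script around the status command, you can pull individual fields out of that output. A minimal sketch, assuming the output format shown above (the here-string stands in for the real command's output):

```shell
# Extract the PID field from docling_ctl status output.
status_output="Status: running
Endpoint: http://127.0.0.1:5001
Docs: http://127.0.0.1:5001/docs
PID: 27746"

echo "$status_output" | awk '/^PID:/ {print $2}'
# Prints: 27746
```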
## Upgrade services
See [Upgrade OpenRAG](/upgrade).
## Reset containers (destructive) {#reset-containers}
Reset your OpenRAG deployment by recreating the containers and removing some related data.
:::warning
This is a destructive action that destroys the following:
To completely reset your OpenRAG deployment and delete all OpenRAG data, see [Reinstall OpenRAG](/reinstall).
* All OpenRAG containers, volumes, and local images
* Any additional Docker objects
* The contents of OpenRAG's `config` and `./opensearch-data` directories
* The `conversations.json` file
### Export customized flows before resetting containers {#export-customized-flows-before-resetting-containers}
If you modified the built-in flows or created custom flows in your OpenRAG Langflow instance, and you want to preserve those changes, [export your flows](https://docs.langflow.org/concepts-flows-import) before resetting your OpenRAG containers.
### Factory Reset with the TUI
:::warning
This is a destructive action that does the following:
* Destroys all OpenRAG containers, volumes, and local images with `docker compose down --volumes --remove-orphans --rmi local`.
* Prunes any additional Docker objects with `docker system prune -f`.
* Deletes the contents of OpenRAG's `config` and `./opensearch-data` directories.
* Deletes the `conversations.json` file.
Destroyed containers and deleted data are lost and cannot be recovered after running this operation.
This operation _doesn't_ remove the `.env` file or the contents of the `./openrag-documents` directory.
:::
1. To destroy and recreate your OpenRAG containers, go to the TUI [**Status** menu](#status), and then click **Factory Reset**.
1. To destroy and recreate your OpenRAG containers, open the TUI's **Status** menu (<kbd>3</kbd>), and then click **Factory Reset**.
This function runs the commands listed in the preceding warning _and_ deletes the contents of OpenRAG's `config` and `./opensearch-data` directories.
2. Repeat the [setup process](/install#setup) to restart the services and launch the OpenRAG app. Your OpenRAG passwords, OAuth credentials (if previously set), and onboarding configuration are restored from the `.env` file.
### Rebuild self-managed containers
These commands destroy and recreate the containers. Data stored only in the containers, such as Langflow flows, is lost.
If you want to preserve customized flows, see [Export customized flows before resetting containers](#export-customized-flows-before-resetting-containers).
The `.env` file, `config` directory, `./openrag-documents` directory, `./opensearch-data` directory, and the `conversations.json` file are preserved.
* Docker Compose:
```bash
docker compose down --volumes --remove-orphans --rmi local
docker system prune -f
docker compose up --build --force-recreate --remove-orphans
```
* Podman Compose:
```bash
podman compose down --volumes --remove-orphans --rmi local
podman system prune -f
podman compose up --build --force-recreate --remove-orphans
```
2. If you reset your containers as part of reinstalling OpenRAG, continue the [reinstallation process](/reinstall) after resetting the containers.
Otherwise, in the TUI **Setup** menu, repeat the [setup process](#setup) to start the services and launch the OpenRAG app. Your OpenRAG passwords, OAuth credentials (if previously set), and onboarding configuration are restored from the `.env` file.
### Destroy and recreate self-managed containers
### Start all services {#start-all-services}
Use separate commands to destroy and recreate the containers if you want to modify the configuration or delete other OpenRAG data before recreating the containers.
Through the TUI, you can stop and start OpenRAG services.
:::warning
These are destructive operations that reset your OpenRAG deployment to an initial state.
Destroyed containers and deleted data are lost and cannot be recovered after running this operation.
:::
#### Start containers
1. Destroy the containers, volumes, and local images, and then remove (prune) any additional Docker objects:
On the TUI main page or the **Setup** menu, click **Start All Services** to start the OpenRAG containers.
* Docker Compose:
When you start all services, OpenRAG performs the following steps:
```bash
docker compose down --volumes --remove-orphans --rmi local
docker system prune -f
```
1. OpenRAG automatically detects your container runtime, and then checks if your machine has compatible GPU support by checking for `CUDA`, `NVIDIA_SMI`, and Docker/Podman runtime support. This check determines which Docker Compose file OpenRAG uses.
* Podman Compose:
2. OpenRAG pulls the OpenRAG container images with `docker compose pull` if any images are missing.
```bash
podman compose down --volumes --remove-orphans --rmi local
podman system prune -f
```
3. OpenRAG deploys the containers with `docker compose up -d`.
2. Optional: Remove data that wasn't deleted by the previous commands:
#### Start native services (Docling)
* OpenRAG's `.env` file
* The contents of OpenRAG's `config` directory
* The contents of the `./openrag-documents` directory
* The contents of the `./opensearch-data` directory
* The `conversations.json` file
A _native service_ in OpenRAG is a service that runs locally on your machine, not within a container. For example, the `docling serve` process is an OpenRAG native service because this document processing service runs on your local machine, separate from the OpenRAG containers.
3. If you deleted the `.env` file, prepare a new `.env` before redeploying the containers.
For more information, see [Deploy OpenRAG with self-managed services](/docker).
From the **Status** menu, you can view the status, port, and process ID (PID) of the OpenRAG native services.
You can also click **Stop** or **Restart** to stop and start OpenRAG native services.
4. Recreate the containers:
## Manage containers with Docker or Podman
* Docker Compose:
If you [deployed OpenRAG with self-managed containers](/docker), run Docker or Podman commands to manage your OpenRAG containers.
```bash
docker compose up -d
```
* Start containers
* Stop containers
* View container status
* Access container logs for troubleshooting
* Upgrade your OpenRAG containers to the latest version
* Reset your OpenRAG containers to an initial state (destructive)
* Podman Compose:
```bash
podman compose up -d
```
5. Launch the OpenRAG app, and then repeat [application onboarding](/docker#application-onboarding).
## See also


@@ -7,19 +7,20 @@ import Icon from "@site/src/components/icon/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialIntegrateChat from '@site/docs/_partial-integrate-chat.mdx';
import PartialPrereqWindows from '@site/docs/_partial-prereq-windows.mdx';
import PartialPrereqPython from '@site/docs/_partial-prereq-python.mdx';
Use this quickstart to install OpenRAG, and then try some of OpenRAG's core features.
## Prerequisites
<PartialPrereqWindows />
- Get an [OpenAI API key](https://platform.openai.com/api-keys).
This quickstart uses OpenAI for simplicity.
For other providers, see the other [installation methods](/install-options).
- Install [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.
- For Microsoft Windows, you must use the Windows Subsystem for Linux (WSL).
See [Install OpenRAG on Windows](/install-windows) before proceeding.
<PartialPrereqPython />
## Install OpenRAG


@@ -5,26 +5,186 @@ slug: /reinstall
You can reset your OpenRAG deployment to its initial state by recreating the containers and deleting accessory data like the `.env` file and ingested documents.
:::warning
These are destructive operations that reset your OpenRAG deployment to an initial state.
Destroyed containers and deleted data are lost and cannot be recovered after running these operations.
:::
## Export customized flows before reinstalling
If you modified the built-in flows or created custom flows in your OpenRAG Langflow instance, and you want to preserve those changes, [export your flows](https://docs.langflow.org/concepts-flows-import) before reinstalling OpenRAG.
## Reinstall TUI-managed containers
1. In the TUI, [reset your containers](/manage-services) to destroy the following:
1. In the TUI's **Status** menu (<kbd>3</kbd>), click **Factory Reset** to destroy your OpenRAG containers and some related data.
* All existing OpenRAG containers, volumes, and local images
* Any additional Docker objects
* The contents of OpenRAG's `config` and `./opensearch-data` directories
* The `conversations.json` file
:::warning
This is a destructive action that does the following:
2. Optional: Remove data that wasn't deleted by the **Factory Reset** operation.
* Destroys all OpenRAG containers, volumes, and local images with `docker compose down --volumes --remove-orphans --rmi local`.
* Prunes any additional Docker objects with `docker system prune -f`.
* Deletes the contents of OpenRAG's `config` and `./opensearch-data` directories.
* Deletes the `conversations.json` file.
<br/>
Destroyed containers and deleted data are lost and cannot be recovered after running this operation.
This operation _doesn't_ remove the `.env` file or the contents of the `./openrag-documents` directory.
:::
2. Exit the TUI with <kbd>q</kbd>.
3. Optional: Remove data that wasn't deleted by the **Factory Reset** operation.
For a completely fresh installation, delete all of this data.
* **OpenRAG's `.env` file**: Contains your OpenRAG configuration, including OpenRAG passwords, API keys, OAuth settings, and other [environment variables](/reference/configuration). If you delete this file, OpenRAG automatically generates a new one after you repeat the initial setup and onboarding process. Alternatively, you can add a prepopulated `.env` file to your OpenRAG installation directory before restarting OpenRAG.
* **The contents of the `./openrag-documents` directory**: Contains documents that you uploaded to OpenRAG. Delete these files to prevent documents from being reingested to your knowledge base after restarting OpenRAG. However, you might want to preserve OpenRAG's [default documents](https://github.com/langflow-ai/openrag/tree/main/openrag-documents).
* **OpenRAG's `.env` file**: Contains your OpenRAG configuration, including OpenRAG passwords, API keys, OAuth settings, and other [environment variables](/reference/configuration). If you delete this file, OpenRAG automatically generates a new one after you repeat the setup and onboarding process. Alternatively, you can add a prepopulated `.env` file to your OpenRAG installation directory before restarting OpenRAG.
* **The contents of the `./openrag-documents` directory**: Contains documents that you uploaded to OpenRAG. Delete these files to prevent documents from being reingested to your knowledge base after restarting OpenRAG. However, you might want to preserve OpenRAG's [default documents](https://github.com/langflow-ai/openrag/tree/main/openrag-documents).
3. In the TUI **Setup** menu, repeat the **Basic/Advanced Setup** process to configure OpenRAG and restart all services.
Then, launch the OpenRAG app and repeat application onboarding.
4. Restart the TUI with `uv run openrag` or `uvx openrag`.
If OpenRAG detects a `.env` file during start up, it automatically populates any OpenRAG passwords, OAuth credentials, and onboarding configuration set in that file.
5. Repeat the [setup process](/install#setup) to configure OpenRAG and restart all services.
Then, launch the OpenRAG app and repeat [application onboarding](/install#application-onboarding).
## Reinstall self-managed containers
If OpenRAG detects a `.env` file during setup and onboarding, it automatically populates any OpenRAG passwords, OAuth credentials, and onboarding configuration set in that file.
If you manage your own OpenRAG containers with Docker or Podman, follow these steps to reinstall OpenRAG:
## Reinstall with Docker Compose or Podman Compose
1. Destroy the containers, volumes, and local images, and then remove (prune) any additional Docker or Podman objects:
* Docker Compose:
```bash
docker compose down --volumes --remove-orphans --rmi local
docker system prune -f
```
* Podman Compose:
```bash
podman compose down --volumes --remove-orphans --rmi local
podman system prune -f
```
2. Optional: Remove data that wasn't deleted by the previous commands:
* OpenRAG's `.env` file
* The contents of OpenRAG's `config` directory
* The contents of the `./openrag-documents` directory
* The contents of the `./opensearch-data` directory
* The `conversations.json` file
3. If you deleted the `.env` file, prepare a new `.env` before redeploying the containers.
For more information, see [Deploy OpenRAG with self-managed services](/docker).
4. Redeploy OpenRAG:
* Docker Compose:
```bash
docker compose up -d
```
* Podman Compose:
```bash
podman compose up -d
```
5. Launch the OpenRAG app, and then repeat [application onboarding](/docker#application-onboarding).
## Step-by-step reinstallation with Docker or Podman
Use these commands for step-by-step container removal and cleanup:
1. Stop all running containers:
* Docker:
```bash
docker stop $(docker ps -q)
```
* Podman:
```bash
podman stop --all
```
2. Remove all containers, including stopped containers:
* Docker:
```bash
docker rm --force $(docker ps -aq)
```
* Podman:
```bash
podman rm --all --force
```
3. Remove all images:
* Docker:
```bash
docker rmi --force $(docker images -q)
```
* Podman:
```bash
podman rmi --all --force
```
4. Remove all volumes:
* Docker:
```bash
docker volume prune --force
```
* Podman:
```bash
podman volume prune --force
```
5. Remove all networks except the default network:
* Docker:
```bash
docker network prune --force
```
* Podman:
```bash
podman network prune --force
```
6. Clean up any leftover data:
* Docker:
```bash
docker system prune --all --force --volumes
```
* Podman:
```bash
podman system prune --all --force --volumes
```
7. Optional: Remove data that wasn't deleted by the previous commands:
* OpenRAG's `.env` file
* The contents of OpenRAG's `config` directory
* The contents of the `./openrag-documents` directory
* The contents of the `./opensearch-data` directory
* The `conversations.json` file
8. [Redeploy OpenRAG](/docker).


@@ -18,14 +18,15 @@ If you installed OpenRAG with `uv`, access the TUI with `uv run openrag`.
If you installed OpenRAG with the automatic installer script or `uvx`, access the TUI with `uvx openrag`.
## Manage services with the TUI
Use the TUI's **Status** menu (<kbd>3</kbd>) and **Diagnostics** menu (<kbd>4</kbd>) to access controls and information for your OpenRAG services.
For more information, see [Manage OpenRAG services](/manage-services).
## Exit the OpenRAG TUI
To exit the OpenRAG TUI, go to the TUI main menu, and then press <kbd>q</kbd>.
Your OpenRAG containers continue to run until they are stopped.
To restart the TUI, see [Access the TUI](#access-the-tui).
If you want to reset your OpenRAG containers without removing OpenRAG entirely, see [Reset OpenRAG containers](/manage-services) and [Reinstall OpenRAG](/reinstall).
:::
## Uninstall TUI-managed deployments
If you used the [automated installer script](/install) or [`uvx`](/install-uvx) to install OpenRAG, clear your `uv` cache (`uv cache clean`) to remove the TUI environment, and then delete the directory containing your OpenRAG configuration files and data (the directory from which you invoke OpenRAG).
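As a sketch, assuming your configuration and data live in `./openrag-workspace` (substitute your own path), the TUI-managed uninstall looks like this; the `|| true` guard only keeps the script going on machines where `uv` has already been removed:

```shell
# Sketch: uninstall a TUI-managed (installer script or uvx) deployment.
uv cache clean || true        # remove the cached TUI environment; skip if uv is already gone
rm -rf ./openrag-workspace    # hypothetical workspace path; use your own
```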
If you used [`uv`](/install-uv) to install OpenRAG, run `uv remove openrag` in your Python project.
## Uninstall self-managed deployments
For self-managed services, destroy the containers, prune any additional Docker objects, shut down the Docling service, and delete any remaining OpenRAG files.
### Uninstall with Docker Compose or Podman Compose
1. Destroy the containers, volumes, and local images, and then remove (prune) any additional Docker objects:
* Docker Compose:
```bash
docker compose down --volumes --remove-orphans --rmi local
docker system prune -f
```
* Podman Compose:
```bash
podman compose down --volumes --remove-orphans --rmi local
podman system prune -f
```
2. Remove data that wasn't deleted by the previous commands:
* OpenRAG's `.env` file
* The contents of OpenRAG's `config` directory
* The contents of the `./openrag-documents` directory
* The contents of the `./opensearch-data` directory
* The `conversations.json` file
3. Stop `docling-serve`:
```bash
uv run python scripts/docling_ctl.py stop
```
### Step-by-step removal and cleanup with Docker or Podman
Use these commands for step-by-step container removal and cleanup:
1. Stop all running containers:
* Docker:
```bash
docker stop $(docker ps -q)
```
* Podman:
```bash
podman stop --all
```
2. Remove all containers, including stopped containers:
* Docker:
```bash
docker rm --force $(docker ps -aq)
```
* Podman:
```bash
podman rm --all --force
```
3. Remove all images:
* Docker:
```bash
docker rmi --force $(docker images -q)
```
* Podman:
```bash
podman rmi --all --force
```
4. Remove all volumes:
* Docker:
```bash
docker volume prune --force
```
* Podman:
```bash
podman volume prune --force
```
5. Remove all networks except the default network:
* Docker:
```bash
docker network prune --force
```
* Podman:
```bash
podman network prune --force
```
6. Clean up any leftover data:
* Docker:
```bash
docker system prune --all --force --volumes
```
* Podman:
```bash
podman system prune --all --force --volumes
```
7. Remove data that wasn't deleted by the previous commands:
* OpenRAG's `.env` file
* The contents of OpenRAG's `config` directory
* The contents of the `./openrag-documents` directory
* The contents of the `./opensearch-data` directory
* The `conversations.json` file
8. Stop `docling-serve`:
```bash
uv run python scripts/docling_ctl.py stop
```
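The leftover-data cleanup in step 7 can be condensed into one script. The paths are the documented defaults and are assumptions about your layout, so adjust them and run the script from your OpenRAG deployment directory:

```shell
# Remove OpenRAG files that container, volume, and network pruning leaves behind.
# Run from your OpenRAG deployment directory; paths are the documented defaults.
rm -f .env conversations.json
rm -rf config/* ./openrag-documents/* ./opensearch-data/*
```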
import TabItem from '@theme/TabItem';
Use these steps to upgrade your OpenRAG deployment to the latest version or a specific version.
## Export customized flows before upgrading
If you modified the built-in flows or created custom flows in your OpenRAG Langflow instance, [export your flows](https://docs.langflow.org/concepts-flows-import) before upgrading.
This ensures that you won't lose your flows after upgrading, and it lets you reference the exported flows if there are any breaking changes in the new version.
## Upgrade TUI-managed installations
To upgrade OpenRAG, you need to upgrade the OpenRAG Python package, and then upgrade the OpenRAG containers.
Upgrading the Python package also upgrades Docling by bumping the dependency in `pyproject.toml`.
This is a two-part process because upgrading the OpenRAG Python package updates the Terminal User Interface (TUI) and Python code, but the container versions are controlled by environment variables in your `.env` file.

1. To check for updates, open the TUI's **Status** menu (<kbd>3</kbd>), and then click **Upgrade**.
2. If there is an update, stop all OpenRAG services.
In the **Status** menu, click **Stop Services**.
3. Upgrade the OpenRAG Python package to the latest version from [PyPI](https://pypi.org/project/openrag/).

<Tabs groupId="Installation method">
<TabItem value="installer" label="Automatic installer or uvx" default>

Use these steps to upgrade the Python package if you installed OpenRAG using the automatic installer or `uvx`:

1. Navigate to your OpenRAG workspace directory:
```bash
cd openrag-workspace
```
2. Upgrade the OpenRAG package:
```bash
uvx --from openrag openrag
```
To upgrade to a specific version:
```bash
uvx --from openrag==0.1.33 openrag
```

</TabItem>
<TabItem value="uv-add" label="Python project (uv add)">

Use these steps to upgrade the Python package if you installed OpenRAG in a Python project with `uv add`:

1. Navigate to your project directory:
```bash
cd YOUR_PROJECT_NAME
```
2. Update OpenRAG to the latest version:
```bash
uv add --upgrade openrag
```
To upgrade to a specific version:
```bash
uv add --upgrade openrag==0.1.33
```
3. Start the OpenRAG TUI:
```bash
uv run openrag
```

</TabItem>
<TabItem value="uv-pip" label="Virtual environment (uv pip install)">

Use these steps to upgrade the Python package if you installed OpenRAG in a venv with `uv pip install`:

1. Activate your virtual environment.
2. Upgrade OpenRAG:
```bash
uv pip install --upgrade openrag
```
To upgrade to a specific version:
```bash
uv pip install --upgrade openrag==0.1.33
```
3. Start the OpenRAG TUI:
```bash
uv run openrag
```

</TabItem>
</Tabs>

4. Start the upgraded OpenRAG containers: In the OpenRAG TUI, click **Start All Services**, and then wait while the containers start.
After upgrading the Python package, OpenRAG runs `docker compose pull` to get the appropriate container images matching the version specified in your OpenRAG `.env` file. Then, it recreates the containers with the new images using `docker compose up -d --force-recreate`.
If you get a `langflow container already exists` error during upgrade, see [Langflow container already exists during upgrade](/support/troubleshoot#langflow-container-already-exists-during-upgrade).
5. Under [**Native Services**](/manage-services), click **Start** to start the Docling service.
6. When the upgrade process is complete, you can close the **Status** window and continue using OpenRAG.
## Upgrade self-managed containers
To fetch and apply the latest container images while preserving your OpenRAG data:
* Podman Compose:
```bash
podman compose pull
podman compose up -d --force-recreate
```
## See also
* [Manage OpenRAG services](/manage-services)
* [Troubleshoot OpenRAG](/support/troubleshoot)