From ac97ef72c65fddd2be6cb46ec1a06541fd3fdf49 Mon Sep 17 00:00:00 2001 From: April M <36110273+aimurphy@users.noreply.github.com> Date: Mon, 24 Nov 2025 13:54:17 -0800 Subject: [PATCH 1/9] add wsl instructions --- docs/docs/_partial-wsl-install.mdx | 19 +++++++++++++ docs/docs/get-started/install.mdx | 42 +++++++++++++++++++++------- docs/docs/get-started/quickstart.mdx | 19 ++++++++++++- 3 files changed, 69 insertions(+), 11 deletions(-) create mode 100644 docs/docs/_partial-wsl-install.mdx diff --git a/docs/docs/_partial-wsl-install.mdx b/docs/docs/_partial-wsl-install.mdx new file mode 100644 index 00000000..bff89586 --- /dev/null +++ b/docs/docs/_partial-wsl-install.mdx @@ -0,0 +1,19 @@ +1. [Install WSL](https://learn.microsoft.com/en-us/windows/wsl/install) with the Ubuntu distribution using WSL 2: + + ```powershell + wsl --install -d Ubuntu + ``` + + For new installations, the `wsl --install` command uses WSL 2 and Ubuntu by default. + + For existing WSL installations, you can [change the distribution](https://learn.microsoft.com/en-us/windows/wsl/install#change-the-default-linux-distribution-installed) and [check the WSL version](https://learn.microsoft.com/en-us/windows/wsl/install#upgrade-version-from-wsl-1-to-wsl-2). + +2. [Start your WSL Ubuntu distribution](https://learn.microsoft.com/en-us/windows/wsl/install#ways-to-run-multiple-linux-distributions-with-wsl) if it doesn't start automatically. + +3. [Set up a username and password for your WSL distribution](https://learn.microsoft.com/en-us/windows/wsl/setup/environment#set-up-your-linux-username-and-password). + +4. [Install Docker Desktop for Windows with WSL 2](https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-containers). When you reach the Docker Desktop **WSL integration** settings, make sure your Ubuntu distribution is enabled, and then click **Apply & Restart** to enable Docker support in WSL. + +5. 
Install and run OpenRAG from within your WSL Ubuntu distribution using any of the installation methods described on this page. + +If you encounter issues with port forwarding or the Windows Firewall, you might need to adjust the [Hyper-V firewall settings](https://learn.microsoft.com/en-us/windows/security/operating-system-security/network-security/windows-firewall/hyper-v-firewall) to allow communication between your WSL distribution and the Windows host. For more troubleshooting advice for networking issues, see [Troubleshooting WSL common issues](https://learn.microsoft.com/en-us/windows/wsl/troubleshooting#common-issues). \ No newline at end of file diff --git a/docs/docs/get-started/install.mdx b/docs/docs/get-started/install.mdx index 06f73e3e..ab142b81 100644 --- a/docs/docs/get-started/install.mdx +++ b/docs/docs/get-started/install.mdx @@ -5,7 +5,8 @@ slug: /install import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; -import PartialOnboarding from '@site/docs/_partial-onboarding.mdx'; +import PartialOnboarding from '@site/docs/_partial-onboarding.mdx'; +import PartialWsl from '@site/docs/_partial-wsl-install.mdx'; [Install OpenRAG](#install) and then run the [OpenRAG Terminal User Interface(TUI)](#setup) to start your OpenRAG deployment with a guided setup process. @@ -21,19 +22,40 @@ If you prefer running Podman or Docker containers and manually editing `.env` fi ## Prerequisites -- Install [Python Version 3.10 to 3.13](https://www.python.org/downloads/release/python-3100/) -- Install [uv](https://docs.astral.sh/uv/getting-started/installation/) -- Install [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/) -- Install [Docker Compose](https://docs.docker.com/compose/install/). If using Podman, use [podman-compose](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or alias Docker compose commands to Podman commands.
-- Optional: Create an [OpenAI API key](https://platform.openai.com/api-keys). During [Application Onboarding](#application-onboarding), you can provide this key or choose a different model provider. +- All OpenRAG installations require [Python](https://www.python.org/downloads/release/python-3100/) version 3.10 to 3.13. + +- If you aren't using the automatic installer script, install the following: + + - [uv](https://docs.astral.sh/uv/getting-started/installation/). + - [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/). + - [`podman-compose`](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/). To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands. + +- Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL). + +
+ Install WSL for OpenRAG + + + +
+ +- Prepare model providers and credentials. + + During [Application Onboarding](#application-onboarding), you must select language model and embedding model providers. + If your chosen provider offers both types, you can use the same provider for both selections. + If your provider offers only one type, such as Anthropic, you must select two providers. + + Gather the credentials and connection details for your chosen model providers before starting onboarding: + + - OpenAI: Create an [OpenAI API key](https://platform.openai.com/api-keys). + - Anthropic language models: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference). + - IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment. + - Ollama: Use the [Ollama documentation](https://docs.ollama.com/) to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL. + - Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment. ## Install OpenRAG {#install} -:::note Windows users -To use OpenRAG on Windows, use [WSL (Windows Subsystem for Linux)](https://learn.microsoft.com/en-us/windows/wsl/install). -::: - Choose an installation method based on your needs: * For new users, the automatic installer script detects and installs prerequisites and then runs OpenRAG. 
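The prerequisites above say to alias Docker Compose commands to Podman commands without showing the commands; a minimal sketch for a POSIX shell follows (it assumes the standard `podman` and `podman-compose` binaries are on your `PATH` — adjust names for your environment):

```shell
# Route Docker and Docker Compose invocations to their Podman equivalents.
# Add these lines to ~/.bashrc or ~/.zshrc so they persist across sessions.
alias docker='podman'
alias docker-compose='podman-compose'

# Confirm the aliases are defined in the current shell.
alias docker docker-compose
```

With these aliases in place, a command such as `docker-compose up -d` resolves to `podman-compose up -d`. The newer `docker compose` subcommand form expands to `podman compose`, which recent Podman releases support as a wrapper around a compose provider; verify this against your installed Podman version.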
diff --git a/docs/docs/get-started/quickstart.mdx b/docs/docs/get-started/quickstart.mdx index bb605b05..80eb0902 100644 --- a/docs/docs/get-started/quickstart.mdx +++ b/docs/docs/get-started/quickstart.mdx @@ -6,12 +6,29 @@ slug: /quickstart import Icon from "@site/src/components/icon/icon"; import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; +import PartialWsl from '@site/docs/_partial-wsl-install.mdx'; Use this quickstart to install OpenRAG, and then try some of OpenRAG's core features. ## Prerequisites -This quickstart requires an [OpenAI API key](https://platform.openai.com/api-keys) and [Python](https://www.python.org/downloads/release/python-3100/) version 3.10 to 3.13. +This quickstart requires the following: + +- An [OpenAI API key](https://platform.openai.com/api-keys). +This quickstart uses OpenAI for simplicity. +For other providers, see the complete [installation guide](/install). + +- [Python](https://www.python.org/downloads/release/python-3100/) version 3.10 to 3.13. + +- Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL). + +
+ Install WSL for OpenRAG + + + +
+ ## Install OpenRAG From 3b325d1a1c77a0cd697ee8ecf82b9cf415c275d0 Mon Sep 17 00:00:00 2001 From: April M <36110273+aimurphy@users.noreply.github.com> Date: Mon, 24 Nov 2025 14:18:48 -0800 Subject: [PATCH 2/9] ssl troubleshooting --- docs/docs/core-components/agents.mdx | 2 +- docs/docs/core-components/ingestion.mdx | 2 +- docs/docs/core-components/knowledge.mdx | 4 +- docs/docs/get-started/install.mdx | 2 + docs/docs/reference/configuration.mdx | 6 +- docs/docs/support/troubleshoot.mdx | 235 +++++++++++++++--------- 6 files changed, 155 insertions(+), 96 deletions(-) diff --git a/docs/docs/core-components/agents.mdx b/docs/docs/core-components/agents.mdx index 88102a60..9d440ddf 100644 --- a/docs/docs/core-components/agents.mdx +++ b/docs/docs/core-components/agents.mdx @@ -40,7 +40,7 @@ This flow contains eight components connected together to chat with your data: * The [**Agent** component](https://docs.langflow.org/agents) orchestrates the entire flow by deciding when to search the knowledge base, how to formulate search queries, and how to combine retrieved information with the user's question to generate a comprehensive response. The **Agent** behaves according to the prompt in the **Agent Instructions** field. * The [**Chat Input** component](https://docs.langflow.org/components-io) is connected to the Agent component's Input port. This allows to flow to be triggered by an incoming prompt from a user or application. -* The [**OpenSearch** component](https://docs.langflow.org/bundles-elastic#opensearch) is connected to the Agent component's Tools port. The agent may not use this database for every request; the agent only uses this connection if it decides the knowledge can help respond to the prompt. +* The [**OpenSearch** component](https://docs.langflow.org/bundles-elastic#opensearch) is connected to the Agent component's Tools port. 
The agent might not use this database for every request; the agent only uses this connection if it decides the knowledge can help respond to the prompt. * The [**Language Model** component](https://docs.langflow.org/components-models) is connected to the Agent component's Language Model port. The agent uses the connected LLM to reason through the request sent through Chat Input. * The [**Embedding Model** component](https://docs.langflow.org/components-embedding-models) is connected to the OpenSearch component's Embedding port. This component converts text queries into vector representations that are compared with document embeddings stored in OpenSearch for semantic similarity matching. This gives your Agent's queries context. * The [**Text Input** component](https://docs.langflow.org/components-io) is populated with the global variable `OPENRAG-QUERY-FILTER`. diff --git a/docs/docs/core-components/ingestion.mdx b/docs/docs/core-components/ingestion.mdx index 08159ee0..67f746b0 100644 --- a/docs/docs/core-components/ingestion.mdx +++ b/docs/docs/core-components/ingestion.mdx @@ -27,7 +27,7 @@ To start or stop `docling serve` or any other native services, in the TUI main m **Embedding model** determines which AI model is used to create vector embeddings. The default is the OpenAI `text-embedding-3-small` model. **Chunk size** determines how large each text chunk is in number of characters. -Larger chunks yield more context per chunk, but may include irrelevant information. Smaller chunks yield more precise semantic search, but may lack context. +Larger chunks yield more context per chunk, but can include irrelevant information. Smaller chunks yield more precise semantic search, but can lack context. The default value of `1000` characters provides a good starting point that balances these considerations. **Chunk overlap** controls the number of characters that overlap over chunk boundaries. 
diff --git a/docs/docs/core-components/knowledge.mdx b/docs/docs/core-components/knowledge.mdx index 2e63198e..80a997c2 100644 --- a/docs/docs/core-components/knowledge.mdx +++ b/docs/docs/core-components/knowledge.mdx @@ -83,7 +83,7 @@ The **Add Cloud Knowledge** page opens. Select the files or folders you want and click **Select**. You can select multiple files. 3. When your files are selected, click **Ingest Files**. -The ingestion process may take some time, depending on the size of your documents. +The ingestion process can take some time depending on the size of your documents. 4. When ingestion is complete, your documents are available in the Knowledge screen. If ingestion fails, click **Status** to view the logged error. @@ -151,7 +151,7 @@ The complete list of supported models is available at [`models_service.py` in th You can use custom embedding models by specifying them in your configuration. -If you use an unknown embedding model, OpenRAG will automatically fall back to `1536` dimensions and log a warning. The system will continue to work, but search quality may be affected if the actual model dimensions differ from `1536`. +If you use an unknown embedding model, OpenRAG automatically falls back to `1536` dimensions and logs a warning. The system continues to work, but search quality can be affected if the actual model dimensions differ from `1536`. The default embedding dimension is `1536` and the default model is `text-embedding-3-small`. diff --git a/docs/docs/get-started/install.mdx b/docs/docs/get-started/install.mdx index 06f73e3e..fe9c713b 100644 --- a/docs/docs/get-started/install.mdx +++ b/docs/docs/get-started/install.mdx @@ -154,6 +154,8 @@ Choose an installation method based on your needs: Continue with [Set up OpenRAG with the TUI](#setup). +If you encounter errors during installation, see [Troubleshoot OpenRAG](/support/troubleshoot). 
+ ## Set up OpenRAG with the TUI {#setup} The TUI creates a `.env` file in your OpenRAG directory root and starts OpenRAG. diff --git a/docs/docs/reference/configuration.mdx b/docs/docs/reference/configuration.mdx index 1827e94a..1dbc4198 100644 --- a/docs/docs/reference/configuration.mdx +++ b/docs/docs/reference/configuration.mdx @@ -9,9 +9,9 @@ import TabItem from '@theme/TabItem'; OpenRAG recognizes environment variables from the following sources: -* [Environment variables](#configure-environment-variables) - Values set in the `.env` file. -* [Langflow runtime overrides](#langflow-runtime-overrides) - Langflow components may tweak environment variables at runtime. -* [Default or fallback values](#default-values-and-fallbacks) - These values are default or fallback values if OpenRAG doesn't find a value. +* [Environment variables](#configure-environment-variables): Values set in the `.env` file. +* [Langflow runtime overrides](#langflow-runtime-overrides): Langflow components can set environment variables at runtime. +* [Default or fallback values](#default-values-and-fallbacks): These values are default or fallback values if OpenRAG doesn't find a value. ## Configure environment variables diff --git a/docs/docs/support/troubleshoot.mdx b/docs/docs/support/troubleshoot.mdx index 2613dcaa..7031a0cb 100644 --- a/docs/docs/support/troubleshoot.mdx +++ b/docs/docs/support/troubleshoot.mdx @@ -1,5 +1,5 @@ --- -title: Troubleshooting +title: Troubleshoot OpenRAG slug: /support/troubleshoot --- @@ -13,7 +13,7 @@ This page provides troubleshooting advice for issues you might encounter when us Check that `OPENSEARCH_PASSWORD` set in [Environment variables](/reference/configuration) meets requirements. The password must contain at least 8 characters, and must contain at least one uppercase letter, one lowercase letter, one digit, and one special character that is strong. 
-## OpenRAG fails to start from the TUI with "Operation not supported" error +## OpenRAG fails to start from the TUI with operation not supported This error occurs when starting OpenRAG with the TUI in [WSL (Windows Subsystem for Linux)](https://learn.microsoft.com/en-us/windows/wsl/install). @@ -21,27 +21,36 @@ The error occurs because OpenRAG is running within a WSL environment, so `webbro To access the OpenRAG application, open a web browser and enter `http://localhost:3000` in the address bar. +## OpenRAG installation fails with unable to get local issuer certificate + +If you are installing OpenRAG on macOS, and the installation fails with `unable to get local issuer certificate`, run the following command, and then retry the installation: + +```bash +open "/Applications/Python VERSION/Install Certificates.command" +``` + +Replace `VERSION` with your installed Python version, such as `3.13`. + ## Langflow connection issues Verify the `LANGFLOW_SUPERUSER` credentials set in [Environment variables](/reference/configuration) are correct. -## Memory errors - -### Container out of memory errors +## Container out of memory errors Increase Docker memory allocation or use [docker-compose-cpu.yml](https://github.com/langflow-ai/openrag/blob/main/docker-compose-cpu.yml) to deploy OpenRAG. -### Podman on macOS memory issues +## Memory issue with Podman on macOS -If you're using Podman on macOS, you may need to increase VM memory on your Podman machine. +If you're using Podman on macOS, you might need to increase VM memory on your Podman machine. This example increases the machine size to 8 GB of RAM, which should be sufficient to run OpenRAG. 
- ```bash - podman machine stop - podman machine rm - podman machine init --memory 8192 # 8 GB example - podman machine start - ``` - + +```bash +podman machine stop +podman machine rm +podman machine init --memory 8192 # 8 GB example +podman machine start +``` + ## Port conflicts Ensure ports 3000, 7860, 8000, 9200, 5601 are available. @@ -52,7 +61,7 @@ If Docling ingestion fails with an OCR-related error and mentions `easyocr` is m `easyocr` is already included as a dependency in OpenRAG's `pyproject.toml`. Project-managed installations using `uv sync` and `uv run` always sync dependencies directly from your `pyproject.toml`, so they should have `easyocr` installed. -If you're running OpenRAG with `uvx openrag`, `uvx` creates a cached, ephemeral environment that doesn't modify your project. This cache may become stale. +If you're running OpenRAG with `uvx openrag`, `uvx` creates a cached, ephemeral environment that doesn't modify your project. This cache can become stale. On macOS, this cache directory is typically a user cache directory such as `/Users/USER_NAME/.cache/uv`. 1. To clear the uv cache, run: @@ -66,87 +75,135 @@ On macOS, this cache directory is typically a user cache directory such as `/Use If you do not need OCR, you can disable OCR-based processing in your ingestion settings to avoid requiring `easyocr`. -## Langflow container already exists {#langflow-container-already-exists-during-upgrade} +## Upgrade fails due to Langflow container already exists {#langflow-container-already-exists-during-upgrade} -If you encounter a `langflow container already exists` error when upgrading OpenRAG, this typically means you upgraded OpenRAG with `uv`, but didn't remove or upgrade containers from a previous installation. +If you encounter a `langflow container already exists` error when upgrading OpenRAG, this typically means you upgraded OpenRAG with `uv`, but you didn't remove or upgrade containers from a previous installation. -1. 
Remove only the problematic Langflow container: +To resolve this issue, do the following: - - - ```bash - # Stop the langflow container - podman stop langflow - - # Remove the langflow container - podman rm langflow --force - ``` - - - ```bash - # Stop the langflow container - docker stop langflow - - # Remove the langflow container - docker rm langflow --force - ``` - - +First, try removing only the Langflow container, and then retry the upgrade in the OpenRAG TUI by clicking **Status** and then **Upgrade**. -2. After removing the container, retry the upgrade in the OpenRAG TUI by clicking **Status** > **Upgrade**. + + -### Reinstall all containers +1. Stop the Langflow container: -If reinstalling the Langflow container doesn't resolve the issue, or if you want a completely fresh installation, remove all OpenRAG containers and data, and then retry the upgrade. -:::warning Data loss -The complete reset removes all your data, including OpenSearch data, uploaded documents, and authentication. Your `.env` file is preserved, so your configuration settings remain intact. + ```bash + podman stop langflow + ``` + +2. Remove the Langflow container: + + ```bash + podman rm langflow --force + ``` + + + + +1. Stop the Langflow container: + + ```bash + docker stop langflow + ``` + +2. Remove the Langflow container: + + ```bash + docker rm langflow --force + ``` + + + + +If reinstalling the Langflow container doesn't resolve the issue, you must reset to a fresh installation by removing all OpenRAG containers and data. +Then, you can retry the upgrade. + +:::warning +This is a destructive operation that completely resets your OpenRAG containers and removes all OpenRAG data, including OpenSearch data, uploaded documents, and authentication details. +Your `.env` file is preserved, so your configuration settings remain intact, but all other data is lost. ::: -1. Stop your containers and completely remove them. 
+To reset your installation, stop your containers, and then completely remove them. +After removing the containers, retry the upgrade in the OpenRAG TUI by clicking **Status** and then **Upgrade**. - - - ```bash - # Stop all running containers - podman stop --all - - # Remove all containers (including stopped ones) - podman rm --all --force - - # Remove all images - podman rmi --all --force - - # Remove all volumes - podman volume prune --force - - # Remove all networks (except default) - podman network prune --force - - # Clean up any leftover data - podman system prune --all --force --volumes - ``` - - - ```bash - # Stop all running containers - docker stop $(docker ps -q) - - # Remove all containers (including stopped ones) - docker rm --force $(docker ps -aq) - - # Remove all images - docker rmi --force $(docker images -q) - - # Remove all volumes - docker volume prune --force - - # Remove all networks (except default) - docker network prune --force - - # Clean up any leftover data - docker system prune --all --force --volumes - ``` - - + + -2. After removing the containers, retry the upgrade in the OpenRAG TUI by clicking **Status** > **Upgrade**. \ No newline at end of file +1. Stop all running containers: + + ```bash + podman stop --all + ``` + +2. Remove all containers, including stopped containers: + + ```bash + podman rm --all --force + ``` + +3. Remove all images: + + ```bash + podman rmi --all --force + ``` + +4. Remove all volumes: + + ```bash + podman volume prune --force + ``` + +5. Remove all networks except the default network: + + ```bash + podman network prune --force + ``` + +6. Clean up any leftover data: + + ```bash + podman system prune --all --force --volumes + ``` + + + + +1. Stop all running containers: + + ```bash + docker stop $(docker ps -q) + ``` + +2. Remove all containers, including stopped containers: + + ```bash + docker rm --force $(docker ps -aq) + ``` + +3. 
Remove all images: + + ```bash + docker rmi --force $(docker images -q) + ``` + +4. Remove all volumes: + + ```bash + docker volume prune --force + ``` + +5. Remove all networks except the default network: + + ```bash + docker network prune --force + ``` + +6. Clean up any leftover data: + + ```bash + docker system prune --all --force --volumes + ``` + </TabItem> + </Tabs> \ No newline at end of file From ecb486b9a4835493baf612b6fb36b7e982435471 Mon Sep 17 00:00:00 2001 From: April M <36110273+aimurphy@users.noreply.github.com> Date: Mon, 24 Nov 2025 14:27:13 -0800 Subject: [PATCH 3/9] update docker page prereqs --- docs/docs/_partial-wsl-install.mdx | 2 +- docs/docs/get-started/docker.mdx | 39 ++++++++++++++++++++++++------ 2 files changed, 33 insertions(+), 8 deletions(-) diff --git a/docs/docs/_partial-wsl-install.mdx b/docs/docs/_partial-wsl-install.mdx index bff89586..ff674297 100644 --- a/docs/docs/_partial-wsl-install.mdx +++ b/docs/docs/_partial-wsl-install.mdx @@ -14,6 +14,6 @@ 4. [Install Docker Desktop for Windows with WSL 2](https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-containers). When you reach the Docker Desktop **WSL integration** settings, make sure your Ubuntu distribution is enabled, and then click **Apply & Restart** to enable Docker support in WSL. -5. Install and run OpenRAG from within your WSL Ubuntu distribution using any of the installation methods described on this page. +5. Install and run OpenRAG from within your WSL Ubuntu distribution. If you encounter issues with port forwarding or the Windows Firewall, you might need to adjust the [Hyper-V firewall settings](https://learn.microsoft.com/en-us/windows/security/operating-system-security/network-security/windows-firewall/hyper-v-firewall) to allow communication between your WSL distribution and the Windows host. For more troubleshooting advice for networking issues, see [Troubleshooting WSL common issues](https://learn.microsoft.com/en-us/windows/wsl/troubleshooting#common-issues).
\ No newline at end of file diff --git a/docs/docs/get-started/docker.mdx b/docs/docs/get-started/docker.mdx index d8494134..b9bb312a 100644 --- a/docs/docs/get-started/docker.mdx +++ b/docs/docs/get-started/docker.mdx @@ -6,8 +6,9 @@ slug: /docker import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; import PartialOnboarding from '@site/docs/_partial-onboarding.mdx'; +import PartialWsl from '@site/docs/_partial-wsl-install.mdx'; -OpenRAG has two Docker Compose files. Both files deploy the same applications and containers locally, but they are for different environments. +OpenRAG has two Docker Compose files. Both files deploy the same applications and containers locally, but they are for different environments: - [`docker-compose.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose.yml) is an OpenRAG deployment with GPU support for accelerated AI processing. This Docker Compose file requires an NVIDIA GPU with [CUDA](https://docs.nvidia.com/cuda/) support. @@ -15,12 +16,36 @@ OpenRAG has two Docker Compose files. Both files deploy the same applications an ## Prerequisites -- Install [Python Version 3.10 to 3.13](https://www.python.org/downloads/release/python-3100/) -- Install [uv](https://docs.astral.sh/uv/getting-started/installation/) -- Install [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/) -- Install [Docker Compose](https://docs.docker.com/compose/install/). If using Podman, use [podman-compose](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or alias Docker compose commands to Podman commands. -- Optional: Create an [OpenAI API key](https://platform.openai.com/api-keys). You can provide this key during [Application Onboarding](#application-onboarding) or choose a different model provider. -- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. 
If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment. +- Install the following: + + - [Python](https://www.python.org/downloads/release/python-3100/) version 3.10 to 3.13. + - [uv](https://docs.astral.sh/uv/getting-started/installation/). + - [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/). + - [`podman-compose`](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/). To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands. + +- Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL). + +
+ Install WSL for OpenRAG + + + +
+ +- Prepare model providers and credentials. + + During [Application Onboarding](#application-onboarding), you must select language model and embedding model providers. + If your chosen provider offers both types, you can use the same provider for both selections. + If your provider offers only one type, such as Anthropic, you must select two providers. + + Gather the credentials and connection details for your chosen model providers before starting onboarding: + + - OpenAI: Create an [OpenAI API key](https://platform.openai.com/api-keys). + - Anthropic language models: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference). + - IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment. + - Ollama: Use the [Ollama documentation](https://docs.ollama.com/) to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL. + +- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. This is required to use the GPU-accelerated Docker Compose file. If you choose not to use GPU support, you must use the CPU-only Docker Compose file instead. 
## Install OpenRAG with Docker Compose From 2020b924ad2302a075e186f77fd717d6ed1d2e76 Mon Sep 17 00:00:00 2001 From: April M <36110273+aimurphy@users.noreply.github.com> Date: Mon, 24 Nov 2025 14:34:53 -0800 Subject: [PATCH 4/9] indentation --- docs/docs/support/troubleshoot.mdx | 112 +++++++++++++++-------------- 1 file changed, 58 insertions(+), 54 deletions(-) diff --git a/docs/docs/support/troubleshoot.mdx b/docs/docs/support/troubleshoot.mdx index 7031a0cb..3fb26feb 100644 --- a/docs/docs/support/troubleshoot.mdx +++ b/docs/docs/support/troubleshoot.mdx @@ -64,14 +64,18 @@ If Docling ingestion fails with an OCR-related error and mentions `easyocr` is m If you're running OpenRAG with `uvx openrag`, `uvx` creates a cached, ephemeral environment that doesn't modify your project. This cache can become stale. On macOS, this cache directory is typically a user cache directory such as `/Users/USER_NAME/.cache/uv`. + 1. To clear the uv cache, run: - ```bash - uv cache clean - ``` + + ```bash + uv cache clean + ``` + 2. Start OpenRAG: - ```bash - uvx openrag - ``` + + ```bash + uvx openrag + ``` If you do not need OCR, you can disable OCR-based processing in your ingestion settings to avoid requiring `easyocr`. @@ -88,30 +92,30 @@ First, try removing only the Langflow container, and then retry the upgrade in t 1. Stop the Langflow container: - ```bash - podman stop langflow - ``` + ```bash + podman stop langflow + ``` 2. Remove the Langflow container: - ```bash - podman rm langflow --force - ``` + ```bash + podman rm langflow --force + ``` 1. Stop the Langflow container: - ```bash - docker stop langflow - ``` + ```bash + docker stop langflow + ``` 2. Remove the Langflow container: - ```bash - docker rm langflow --force - ``` + ```bash + docker rm langflow --force + ``` @@ -132,78 +136,78 @@ After removing the containers, retry the upgrade in the OpenRAG TUI by clicking 1. 
Stop all running containers: - ```bash - podman stop --all - ``` + ```bash + podman stop --all + ``` 2. Remove all containers, including stopped containers: - ```bash - podman rm --all --force - ``` + ```bash + podman rm --all --force + ``` 3. Remove all images: - ```bash - podman rmi --all --force - ``` + ```bash + podman rmi --all --force + ``` 4. Remove all volumes: - ```bash - podman volume prune --force - ``` + ```bash + podman volume prune --force + ``` 5. Remove all networks except the default network: - ```bash - podman network prune --force - ``` + ```bash + podman network prune --force + ``` 6. Clean up any leftover data: - ```bash - podman system prune --all --force --volumes - ``` + ```bash + podman system prune --all --force --volumes + ``` 1. Stop all running containers: - ```bash - docker stop $(docker ps -q) - ``` + ```bash + docker stop $(docker ps -q) + ``` 2. Remove all containers, including stopped containers: - ```bash - docker rm --force $(docker ps -aq) - ``` + ```bash + docker rm --force $(docker ps -aq) + ``` 3. Remove all images: - ```bash - docker rmi --force $(docker images -q) - ``` + ```bash + docker rmi --force $(docker images -q) + ``` 4. Remove all volumes: - ```bash - docker volume prune --force - ``` + ```bash + docker volume prune --force + ``` 5. Remove all networks except the default network: - ```bash - docker network prune --force - ``` + ```bash + docker network prune --force + ``` 6. 
Clean up any leftover data: - ```bash - docker system prune --all --force --volumes - ``` + ```bash + docker system prune --all --force --volumes + ``` \ No newline at end of file From bc4354ea208284dbce45ea4f8df81e4f993553be Mon Sep 17 00:00:00 2001 From: Mike Fortman Date: Mon, 24 Nov 2025 16:46:16 -0600 Subject: [PATCH 5/9] Detect and default to provider with env set --- .../_components/onboarding-card.tsx | 40 ++++++++++++++++++- 1 file changed, 38 insertions(+), 2 deletions(-) diff --git a/frontend/app/onboarding/_components/onboarding-card.tsx b/frontend/app/onboarding/_components/onboarding-card.tsx index 3dffb21e..7ac2e85c 100644 --- a/frontend/app/onboarding/_components/onboarding-card.tsx +++ b/frontend/app/onboarding/_components/onboarding-card.tsx @@ -2,7 +2,7 @@ import { useQueryClient } from "@tanstack/react-query"; import { AnimatePresence, motion } from "framer-motion"; -import { Info, X } from "lucide-react"; +import { X } from "lucide-react"; import { useEffect, useState } from "react"; import { toast } from "sonner"; import { @@ -74,6 +74,42 @@ const OnboardingCard = ({ // Fetch current settings to check if providers are already configured const { data: currentSettings } = useGetSettingsQuery(); + // Auto-select the first provider that has an API key set in env vars + useEffect(() => { + if (!currentSettings?.providers) return; + + // Define provider order based on whether it's embedding or not + const providerOrder = isEmbedding + ? 
["openai", "watsonx", "ollama"] + : ["anthropic", "openai", "watsonx", "ollama"]; + + // Find the first provider with an API key + for (const provider of providerOrder) { + if ( + provider === "anthropic" && + currentSettings.providers.anthropic?.has_api_key + ) { + setModelProvider("anthropic"); + return; + } else if (provider === "openai" && currentSettings.providers.openai?.has_api_key) { + setModelProvider("openai"); + return; + } else if ( + provider === "watsonx" && + currentSettings.providers.watsonx?.has_api_key + ) { + setModelProvider("watsonx"); + return; + } else if ( + provider === "ollama" && + currentSettings.providers.ollama?.endpoint + ) { + setModelProvider("ollama"); + return; + } + } + }, [currentSettings, isEmbedding]); + const handleSetModelProvider = (provider: string) => { setIsLoadingModels(false); setModelProvider(provider); @@ -305,7 +341,7 @@ const OnboardingCard = ({
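The auto-select effect added in this patch repeats the same branch shape once per provider. It could be condensed into a predicate table plus a single `find`. This is a sketch, not part of the patch: it assumes `currentSettings.providers` keeps the `has_api_key` / `endpoint` fields shown in the diff, and the names `ProviderSettings`, `isConfigured`, and `firstConfiguredProvider` are hypothetical.

```typescript
// Hypothetical condensed form of the provider auto-select chain above.
type ProviderSettings = {
  anthropic?: { has_api_key?: boolean };
  openai?: { has_api_key?: boolean };
  watsonx?: { has_api_key?: boolean };
  ollama?: { endpoint?: string };
};

// One predicate per provider, replacing the repeated else-if branches.
const isConfigured: Record<string, (p: ProviderSettings) => boolean> = {
  anthropic: (p) => Boolean(p.anthropic?.has_api_key),
  openai: (p) => Boolean(p.openai?.has_api_key),
  watsonx: (p) => Boolean(p.watsonx?.has_api_key),
  ollama: (p) => Boolean(p.ollama?.endpoint),
};

function firstConfiguredProvider(
  providers: ProviderSettings,
  isEmbedding: boolean,
): string | undefined {
  // Same ordering as the effect in the diff: Anthropic has no embedding
  // models, so it is skipped when selecting an embedding provider.
  const order = isEmbedding
    ? ["openai", "watsonx", "ollama"]
    : ["anthropic", "openai", "watsonx", "ollama"];
  return order.find((name) => isConfigured[name](providers));
}
```

Compared with the chained `else if` blocks, adding a provider becomes a one-line change to the table plus an entry in the order array, and the effect body shrinks to a single `setModelProvider(firstConfiguredProvider(...))` guarded for `undefined`.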
From a358166a2d372813ac50f21550c826a04e0694e3 Mon Sep 17 00:00:00 2001 From: "April I. Murphy" <36110273+aimurphy@users.noreply.github.com> Date: Mon, 24 Nov 2025 15:25:21 -0800 Subject: [PATCH 6/9] Apply suggestions from code review Co-authored-by: Mendon Kissling <59585235+mendonk@users.noreply.github.com> --- docs/docs/_partial-wsl-install.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/docs/_partial-wsl-install.mdx b/docs/docs/_partial-wsl-install.mdx index ff674297..7cab4029 100644 --- a/docs/docs/_partial-wsl-install.mdx +++ b/docs/docs/_partial-wsl-install.mdx @@ -15,5 +15,5 @@ 4. [Install Docker Desktop for Windows with WSL 2](https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-containers). When you reach the Docker Desktop **WSL integration** settings, make sure your Ubuntu distribution is enabled, and then click **Apply & Restart** to enable Docker support in WSL. 5. Install and run OpenRAG from within your WSL Ubuntu distribution. - -If you encounter issues with port forwarding or the Windows Firewall, you might to adjust the [Hyper-V firewall settings](https://learn.microsoft.com/en-us/windows/security/operating-system-security/network-security/windows-firewall/hyper-v-firewall) to allow communication between your WSL distribution and the Windows host. For more troubleshooting advice for networking issues, see [Troubleshooting WLS common issues](https://learn.microsoft.com/en-us/windows/wsl/troubleshooting#common-issues). \ No newline at end of file +
+If you encounter issues with port forwarding or the Windows Firewall, you might need to adjust the [Hyper-V firewall settings](https://learn.microsoft.com/en-us/windows/security/operating-system-security/network-security/windows-firewall/hyper-v-firewall) to allow communication between your WSL distribution and the Windows host. For more troubleshooting advice for networking issues, see [Troubleshooting WLS common issues](https://learn.microsoft.com/en-us/windows/wsl/troubleshooting#common-issues). \ No newline at end of file From ad703df6ec9d7291f57c4880691532c07dec582d Mon Sep 17 00:00:00 2001 From: April M <36110273+aimurphy@users.noreply.github.com> Date: Tue, 25 Nov 2025 08:09:32 -0800 Subject: [PATCH 7/9] limitation for nested virtualization --- docs/docs/_partial-wsl-install.mdx | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-) diff --git a/docs/docs/_partial-wsl-install.mdx b/docs/docs/_partial-wsl-install.mdx index 7cab4029..536c10b6 100644 --- a/docs/docs/_partial-wsl-install.mdx +++ b/docs/docs/_partial-wsl-install.mdx @@ -1,12 +1,18 @@ 1. [Install WSL](https://learn.microsoft.com/en-us/windows/wsl/install) with the Ubuntu distribution using WSL 2: - ```powershell - wsl --install -d Ubuntu - ``` + ```powershell + wsl --install -d Ubuntu + ``` - For new installations, the `wsl --install` command uses WSL 2 and Ubuntu by default. + For new installations, the `wsl --install` command uses WSL 2 and Ubuntu by default. - For existing WSL installations, you can [change the distribution](https://learn.microsoft.com/en-us/windows/wsl/install#change-the-default-linux-distribution-installed) and [check the WSL version](https://learn.microsoft.com/en-us/windows/wsl/install#upgrade-version-from-wsl-1-to-wsl-2). 
+ For existing WSL installations, you can [change the distribution](https://learn.microsoft.com/en-us/windows/wsl/install#change-the-default-linux-distribution-installed) and [check the WSL version](https://learn.microsoft.com/en-us/windows/wsl/install#upgrade-version-from-wsl-1-to-wsl-2). + + :::info Known limitation + OpenRAG isn't compatible with nested virtualization, which can cause networking issues. + Don't install OpenRAG in on a WSL distribution that is installed inside a Windows VM. + Instead, install OpenRAG on your base OS or a non-nested Linux VM. + ::: 2. [Start your WSL Ubuntu distribution](https://learn.microsoft.com/en-us/windows/wsl/install#ways-to-run-multiple-linux-distributions-with-wsl) if it doesn't start automatically. From b807bbcd253ceceb298d0f37a6a0f47e2cd878b6 Mon Sep 17 00:00:00 2001 From: "April I. Murphy" <36110273+aimurphy@users.noreply.github.com> Date: Tue, 25 Nov 2025 08:15:12 -0800 Subject: [PATCH 8/9] Update docs/docs/_partial-wsl-install.mdx --- docs/docs/_partial-wsl-install.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/docs/_partial-wsl-install.mdx b/docs/docs/_partial-wsl-install.mdx index 536c10b6..42768ff5 100644 --- a/docs/docs/_partial-wsl-install.mdx +++ b/docs/docs/_partial-wsl-install.mdx @@ -8,7 +8,7 @@ For existing WSL installations, you can [change the distribution](https://learn.microsoft.com/en-us/windows/wsl/install#change-the-default-linux-distribution-installed) and [check the WSL version](https://learn.microsoft.com/en-us/windows/wsl/install#upgrade-version-from-wsl-1-to-wsl-2). - :::info Known limitation + :::warning Known limitation OpenRAG isn't compatible with nested virtualization, which can cause networking issues. Don't install OpenRAG in on a WSL distribution that is installed inside a Windows VM. Instead, install OpenRAG on your base OS or a non-nested Linux VM. 
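The nested-virtualization warning added in this patch can be checked empirically from inside the Linux environment before installing. A minimal sketch, assuming bash and a `/proc/version` kernel string; the `in_wsl` helper is hypothetical, not part of OpenRAG:

```shell
#!/usr/bin/env bash
# Pre-flight sketch for the nested-virtualization limitation above.
# Run inside the Linux environment where you plan to install OpenRAG.

in_wsl() {
  # WSL kernels embed "microsoft" in their version string.
  grep -qi microsoft /proc/version 2>/dev/null
}

if in_wsl; then
  echo "WSL kernel detected"
else
  echo "no WSL kernel detected"
fi

# Where systemd is available, systemd-detect-virt names the hypervisor
# ("wsl", "kvm", "none", ...); "none" on bare metal means no nesting.
command -v systemd-detect-virt >/dev/null 2>&1 && systemd-detect-virt || true
```

If this reports a WSL kernel while `systemd-detect-virt` on the Windows host's own hardware would report a hypervisor (that is, Windows itself is a VM), you are in the nested setup the warning describes.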
From 370c668fc4a8f958d433ef0ee165e46c8d47b580 Mon Sep 17 00:00:00 2001 From: April M <36110273+aimurphy@users.noreply.github.com> Date: Tue, 25 Nov 2025 08:34:01 -0800 Subject: [PATCH 9/9] peer review --- docs/docs/_partial-wsl-install.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/docs/_partial-wsl-install.mdx b/docs/docs/_partial-wsl-install.mdx index 42768ff5..4db6d1dd 100644 --- a/docs/docs/_partial-wsl-install.mdx +++ b/docs/docs/_partial-wsl-install.mdx @@ -10,7 +10,7 @@ :::warning Known limitation OpenRAG isn't compatible with nested virtualization, which can cause networking issues. - Don't install OpenRAG in on a WSL distribution that is installed inside a Windows VM. + Don't install OpenRAG on a WSL distribution that is installed inside a Windows VM. Instead, install OpenRAG on your base OS or a non-nested Linux VM. ::: @@ -22,4 +22,4 @@ 5. Install and run OpenRAG from within your WSL Ubuntu distribution.
-If you encounter issues with port forwarding or the Windows Firewall, you might need to adjust the [Hyper-V firewall settings](https://learn.microsoft.com/en-us/windows/security/operating-system-security/network-security/windows-firewall/hyper-v-firewall) to allow communication between your WSL distribution and the Windows host. For more troubleshooting advice for networking issues, see [Troubleshooting WLS common issues](https://learn.microsoft.com/en-us/windows/wsl/troubleshooting#common-issues). \ No newline at end of file +If you encounter issues with port forwarding or the Windows Firewall, you might need to adjust the [Hyper-V firewall settings](https://learn.microsoft.com/en-us/windows/security/operating-system-security/network-security/windows-firewall/hyper-v-firewall) to allow communication between your WSL distribution and the Windows host. For more troubleshooting advice for networking issues, see [Troubleshooting WSL common issues](https://learn.microsoft.com/en-us/windows/wsl/troubleshooting#common-issues). \ No newline at end of file
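Before adjusting Hyper-V firewall settings, it can help to confirm whether the port answers inside WSL at all. A minimal sketch, assuming bash's `/dev/tcp` redirection; `check_port` is a hypothetical helper and `3000` is a placeholder for whatever port your OpenRAG deployment exposes:

```shell
#!/usr/bin/env bash
# Localhost reachability probe, run inside the WSL distribution.
# If the port answers here but not from Windows, suspect the Hyper-V
# firewall or WSL port forwarding rather than the service itself.

check_port() {
  local port="$1"
  # bash opens a TCP connection through the /dev/tcp pseudo-device.
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "port ${port} reachable"
  else
    echo "port ${port} not reachable"
  fi
}

check_port 3000   # placeholder: use the port your OpenRAG frontend listens on
```

The same probe run from Windows PowerShell (`Test-NetConnection -ComputerName localhost -Port 3000`) then isolates whether the break is inside WSL or at the Windows host boundary.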