Merge pull request #493 from langflow-ai/issue-480-wsl-docs

Docs: Add WSL details
April I. Murphy 2025-11-24 15:29:50 -08:00 committed by GitHub
commit c6c562a2f1
4 changed files with 101 additions and 18 deletions


@@ -0,0 +1,19 @@
1. [Install WSL](https://learn.microsoft.com/en-us/windows/wsl/install) with the Ubuntu distribution using WSL 2:
```powershell
wsl --install -d Ubuntu
```
For new installations, the `wsl --install` command uses WSL 2 and Ubuntu by default.
For existing WSL installations, you can [change the distribution](https://learn.microsoft.com/en-us/windows/wsl/install#change-the-default-linux-distribution-installed) and [check the WSL version](https://learn.microsoft.com/en-us/windows/wsl/install#upgrade-version-from-wsl-1-to-wsl-2).
2. [Start your WSL Ubuntu distribution](https://learn.microsoft.com/en-us/windows/wsl/install#ways-to-run-multiple-linux-distributions-with-wsl) if it doesn't start automatically.
3. [Set up a username and password for your WSL distribution](https://learn.microsoft.com/en-us/windows/wsl/setup/environment#set-up-your-linux-username-and-password).
4. [Install Docker Desktop for Windows with WSL 2](https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-containers). When you reach the Docker Desktop **WSL integration** settings, make sure your Ubuntu distribution is enabled, and then click **Apply & Restart** to enable Docker support in WSL.
5. Install and run OpenRAG from within your WSL Ubuntu distribution.
<br/>
If you encounter issues with port forwarding or the Windows Firewall, you might need to adjust the [Hyper-V firewall settings](https://learn.microsoft.com/en-us/windows/security/operating-system-security/network-security/windows-firewall/hyper-v-firewall) to allow communication between your WSL distribution and the Windows host. For more help with networking issues, see [Troubleshooting WSL common issues](https://learn.microsoft.com/en-us/windows/wsl/troubleshooting#common-issues).
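If you want to confirm that the distribution is running under WSL 2 before you install OpenRAG, a quick, optional check from inside the Ubuntu shell is shown below. This is a sanity check only; the exact kernel version string varies by WSL release.
```bash
# Run inside the Ubuntu distribution.
# On WSL 2, the kernel version string includes "microsoft-standard-WSL2".
uname -r

# Confirm which distribution you are in.
grep PRETTY_NAME /etc/os-release
```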


@@ -6,8 +6,9 @@ slug: /docker
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialOnboarding from '@site/docs/_partial-onboarding.mdx';
import PartialWsl from '@site/docs/_partial-wsl-install.mdx';
OpenRAG has two Docker Compose files. Both files deploy the same applications and containers locally, but they are for different environments:
- [`docker-compose.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose.yml) is an OpenRAG deployment with GPU support for accelerated AI processing. This Docker Compose file requires an NVIDIA GPU with [CUDA](https://docs.nvidia.com/cuda/) support.
@@ -15,12 +16,36 @@ OpenRAG has two Docker Compose files. Both files deploy the same applications an
## Prerequisites
- Install the following:
- [Python](https://www.python.org/downloads/release/python-3100/) version 3.10 to 3.13.
- [uv](https://docs.astral.sh/uv/getting-started/installation/).
- [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/).
- [`podman-compose`](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/). To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
- Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL).
<details>
<summary>Install WSL for OpenRAG</summary>
<PartialWsl />
</details>
- Prepare model providers and credentials.
During [Application Onboarding](#application-onboarding), you must select language model and embedding model providers.
If your chosen provider offers both types, you can use the same provider for both selections.
If your provider offers only one type, as Anthropic does for language models, you must select a second provider for the other model type.
Gather the credentials and connection details for your chosen model providers before starting onboarding:
- OpenAI: Create an [OpenAI API key](https://platform.openai.com/api-keys).
- Anthropic language models: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference).
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
- Ollama: Use the [Ollama documentation](https://docs.ollama.com/) to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
- Optional: To use the GPU-accelerated Docker Compose file, the OpenRAG host machine must have an NVIDIA GPU with [CUDA](https://docs.nvidia.com/cuda/) support and compatible NVIDIA drivers. If you don't use GPU support, you must use the CPU-only Docker Compose file instead.
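Before choosing the GPU-accelerated Docker Compose file, you can confirm that the driver stack is visible on the host. This is a minimal, optional check; driver and CUDA versions depend on your hardware and aren't specified by OpenRAG.
```bash
# Confirm that the NVIDIA driver, and the CUDA version it supports, are visible on the host.
# If this command is missing or reports an error, use the CPU-only Docker Compose file instead.
nvidia-smi
```
Running GPU workloads inside containers also typically requires the NVIDIA Container Toolkit; see the NVIDIA documentation for your container engine.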
## Install OpenRAG with Docker Compose


@@ -5,7 +5,8 @@ slug: /install
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialOnboarding from '@site/docs/_partial-onboarding.mdx';
import PartialWsl from '@site/docs/_partial-wsl-install.mdx';
[Install OpenRAG](#install) and then run the [OpenRAG Terminal User Interface (TUI)](#setup) to start your OpenRAG deployment with a guided setup process.
@@ -21,19 +22,40 @@ If you prefer running Podman or Docker containers and manually editing `.env` fi
## Prerequisites
- All OpenRAG installations require [Python](https://www.python.org/downloads/release/python-3100/) version 3.10 to 3.13.
- If you aren't using the automatic installer script, install the following:
- [uv](https://docs.astral.sh/uv/getting-started/installation/).
- [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/).
- [`podman-compose`](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/). To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands; a minimal alias sketch follows this list.
- Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL).
<details>
<summary>Install WSL for OpenRAG</summary>
<PartialWsl />
</details>
- Prepare model providers and credentials.
During [Application Onboarding](#application-onboarding), you must select language model and embedding model providers.
If your chosen provider offers both types, you can use the same provider for both selections.
If your provider offers only one type, as Anthropic does for language models, you must select a second provider for the other model type.
Gather the credentials and connection details for your chosen model providers before starting onboarding:
- OpenAI: Create an [OpenAI API key](https://platform.openai.com/api-keys).
- Anthropic language models: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference).
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
- Ollama: Use the [Ollama documentation](https://docs.ollama.com/) to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
- Optional: For GPU-accelerated deployments, the OpenRAG host machine must have an NVIDIA GPU with [CUDA](https://docs.nvidia.com/cuda/) support and compatible NVIDIA drivers. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
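For the compose tooling prerequisite above, one common way to keep Docker Compose command syntax on a Podman host is to alias the Docker commands in your shell profile. This is a minimal sketch that assumes `podman` and `podman-compose` are already installed and on your `PATH`.
```bash
# Make existing `docker` and `docker-compose` commands run their Podman equivalents.
echo 'alias docker=podman' >> ~/.bashrc
echo 'alias docker-compose=podman-compose' >> ~/.bashrc
source ~/.bashrc

# Quick check that the aliases resolve:
docker --version
docker-compose --version
```
Recent Podman releases also include a `podman compose` subcommand that delegates to an installed compose provider such as `podman-compose`, which you can use instead of shell aliases.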
## Install OpenRAG {#install}
:::note Windows users
To use OpenRAG on Windows, use [WSL (Windows Subsystem for Linux)](https://learn.microsoft.com/en-us/windows/wsl/install).
:::
Choose an installation method based on your needs:
* For new users, the automatic installer script detects and installs prerequisites and then runs OpenRAG.


@@ -6,12 +6,29 @@ slug: /quickstart
import Icon from "@site/src/components/icon/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialWsl from '@site/docs/_partial-wsl-install.mdx';
Use this quickstart to install OpenRAG, and then try some of OpenRAG's core features.
## Prerequisites
This quickstart requires the following:
- An [OpenAI API key](https://platform.openai.com/api-keys).
This quickstart uses OpenAI for simplicity.
For other providers, see the complete [installation guide](/install).
- [Python](https://www.python.org/downloads/release/python-3100/) version 3.10 to 3.13. A quick version check is shown after this list.
- Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL).
<details>
<summary>Install WSL for OpenRAG</summary>
<PartialWsl />
</details>
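As a quick check for the Python requirement above, confirm that an interpreter in the supported range is on your `PATH`. The exact command can vary by operating system; this assumes a `python3` launcher is available.
```bash
# Print the default Python 3 interpreter version; it should report 3.10, 3.11, 3.12, or 3.13.
python3 --version
```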
## Install OpenRAG