propagate prerequisites and format install cmds
Commit 951fd4b176 · parent d578e7832e · 8 changed files with 195 additions and 173 deletions
@@ -30,7 +30,6 @@ You only need to complete onboarding for your preferred providers.
4. In the second onboarding panel, select a provider for embeddings and select your **Embedding Model**.
|
||||
5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
|
||||
Alternatively, click <Icon name="ArrowRight" aria-hidden="true"/> **Skip overview**.
|
||||
6. Continue with the [Quickstart](/quickstart).
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="OpenAI" label="OpenAI">
@@ -42,7 +41,6 @@ You only need to complete onboarding for your preferred providers.
4. In the second onboarding panel, select a provider for embeddings and select your **Embedding Model**.
|
||||
5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
|
||||
Alternatively, click <Icon name="ArrowRight" aria-hidden="true"/> **Skip overview**.
|
||||
6. Continue with the [Quickstart](/quickstart).
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="IBM watsonx.ai" label="IBM watsonx.ai">
@@ -54,7 +52,6 @@ You only need to complete onboarding for your preferred providers.
4. In the second onboarding panel, select a provider for embeddings and select your **Embedding Model**.
|
||||
5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
|
||||
Alternatively, click <Icon name="ArrowRight" aria-hidden="true"/> **Skip overview**.
|
||||
6. Continue with the [Quickstart](/quickstart).
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="Ollama" label="Ollama">
@@ -73,7 +70,6 @@ You only need to complete onboarding for your preferred providers.
</details>
|
||||
3. Click **Complete**.
|
||||
4. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
|
||||
5. Continue with the [Quickstart](/quickstart).
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
@@ -1,25 +0,0 @@
1. [Install WSL](https://learn.microsoft.com/en-us/windows/wsl/install) with the Ubuntu distribution using WSL 2:
|
||||
|
||||
```powershell
|
||||
wsl --install -d Ubuntu
|
||||
```
|
||||
|
||||
For new installations, the `wsl --install` command uses WSL 2 and Ubuntu by default.
|
||||
|
||||
For existing WSL installations, you can [change the distribution](https://learn.microsoft.com/en-us/windows/wsl/install#change-the-default-linux-distribution-installed) and [check the WSL version](https://learn.microsoft.com/en-us/windows/wsl/install#upgrade-version-from-wsl-1-to-wsl-2).
|
||||
|
||||
:::warning Known limitation
|
||||
OpenRAG isn't compatible with nested virtualization, which can cause networking issues.
|
||||
Don't install OpenRAG on a WSL distribution that is installed inside a Windows VM.
|
||||
Instead, install OpenRAG on your base OS or a non-nested Linux VM.
|
||||
:::
|
||||
|
||||
2. [Start your WSL Ubuntu distribution](https://learn.microsoft.com/en-us/windows/wsl/install#ways-to-run-multiple-linux-distributions-with-wsl) if it doesn't start automatically.
|
||||
|
||||
3. [Set up a username and password for your WSL distribution](https://learn.microsoft.com/en-us/windows/wsl/setup/environment#set-up-your-linux-username-and-password).
|
||||
|
||||
4. [Install Docker Desktop for Windows with WSL 2](https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-containers). When you reach the Docker Desktop **WSL integration** settings, make sure your Ubuntu distribution is enabled, and then click **Apply & Restart** to enable Docker support in WSL.
|
||||
|
||||
5. Install and run OpenRAG from within your WSL Ubuntu distribution.
|
||||
<br/>
|
||||
If you encounter issues with port forwarding or the Windows Firewall, you might need to adjust the [Hyper-V firewall settings](https://learn.microsoft.com/en-us/windows/security/operating-system-security/network-security/windows-firewall/hyper-v-firewall) to allow communication between your WSL distribution and the Windows host. For more troubleshooting advice for networking issues, see [Troubleshooting WSL common issues](https://learn.microsoft.com/en-us/windows/wsl/troubleshooting#common-issues).
@@ -6,7 +6,6 @@ slug: /docker
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
import PartialOnboarding from '@site/docs/_partial-onboarding.mdx';
|
||||
import PartialWsl from '@site/docs/_partial-wsl-install.mdx';
|
||||
|
||||
To manage your own OpenRAG services, deploy OpenRAG with Docker or Podman.
@@ -20,36 +19,30 @@ OpenRAG has two Docker Compose files. Both files deploy the same services, but t
## Prerequisites
|
||||
|
||||
- Install the following:
|
||||
- For Microsoft Windows, you must use the Windows Subsystem for Linux (WSL).
|
||||
See [Install OpenRAG on Windows](/install-windows) before proceeding.
|
||||
|
||||
- [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.
|
||||
- [uv](https://docs.astral.sh/uv/getting-started/installation/).
|
||||
- [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/).
|
||||
- [`podman-compose`](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/). To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
|
||||
- Install [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.
|
||||
|
||||
- Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL).
|
||||
- Install [uv](https://docs.astral.sh/uv/getting-started/installation/).
|
||||
|
||||
<details>
|
||||
<summary>Install WSL for OpenRAG</summary>
|
||||
- Install [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/).
|
||||
|
||||
<PartialWsl />
|
||||
- Install [Podman Compose](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/).
|
||||
To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
|
||||
|
||||
</details>
|
||||
|
||||
- Prepare model providers and credentials.
|
||||
|
||||
During [application onboarding](#application-onboarding), you must select language model and embedding model providers.
|
||||
If your chosen provider offers both types, you can use the same provider for both selections.
|
||||
If your provider offers only one type, such as Anthropic, you must select two providers.
|
||||
|
||||
Gather the credentials and connection details for your chosen model providers before starting onboarding:
|
||||
- Gather the credentials and connection details for your preferred model providers.
|
||||
|
||||
- OpenAI: Create an [OpenAI API key](https://platform.openai.com/api-keys).
|
||||
- Anthropic language models: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference).
|
||||
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
|
||||
- Ollama: Use the [Ollama documentation](https://docs.ollama.com/) to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
|
||||
|
||||
- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. This is required to use the GPU-accelerated Docker Compose file. If you choose not to use GPU support, you must use the CPU-only Docker Compose file instead.
|
||||
You must have access to at least one language model and one embedding model.
|
||||
If your chosen provider offers both types, you can use the same provider for both models.
|
||||
If your provider offers only one type, such as Anthropic, you must select two providers.
|
||||
|
||||
- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
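If you pair Docker Compose commands with Podman, one common approach is to alias the Docker CLI names to their Podman equivalents in your shell profile. A minimal sketch, assuming `podman` and `podman-compose` are already on your `PATH`:

```bash
# Route Docker CLI and Compose invocations to their Podman equivalents.
alias docker='podman'
alias docker-compose='podman-compose'
```

Add the aliases to `~/.bashrc` (or your shell's equivalent) so they persist across sessions.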
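To confirm that the GPU, drivers, and CUDA runtime are visible on the host before using the GPU-accelerated Compose file, you can run `nvidia-smi` (assuming the NVIDIA drivers are installed):

```bash
# Lists detected GPUs, the installed driver version, and the supported CUDA version.
nvidia-smi
```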
## Install OpenRAG with Docker Compose
@@ -9,76 +9,113 @@ You can use [`uv`](https://docs.astral.sh/uv/getting-started/installation/) to i
For other installation methods, see [Choose an installation method](/install-options).
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- For Microsoft Windows, you must use the Windows Subsystem for Linux (WSL).
|
||||
See [Install OpenRAG on Windows](/install-windows) before proceeding.
|
||||
|
||||
* **`uv pip install`**: Install OpenRAG as an unmanaged dependency in a virtual environment.
|
||||
* **`uv add`** (Recommended): Install OpenRAG as a managed dependency in a new or existing Python project.
|
||||
- Install [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.
|
||||
|
||||
- Install [uv](https://docs.astral.sh/uv/getting-started/installation/).
|
||||
|
||||
## uv add (Recommended)
|
||||
- Install [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/).
|
||||
|
||||
Use `uv add` to install OpenRAG as a dependency in your Python project. This adds OpenRAG to your `pyproject.toml` and lockfile, making your installation reproducible and version-controlled.
|
||||
- Install [Podman Compose](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/).
|
||||
To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
|
||||
|
||||
1. Create a new project with a virtual environment:
|
||||
```bash
|
||||
uv init YOUR_PROJECT_NAME
|
||||
cd YOUR_PROJECT_NAME
|
||||
```
|
||||
- Gather the credentials and connection details for your preferred model providers.
|
||||
|
||||
The `(venv)` prompt doesn't change, but `uv` commands will automatically use the project's virtual environment.
|
||||
- OpenAI: Create an [OpenAI API key](https://platform.openai.com/api-keys).
|
||||
- Anthropic language models: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference).
|
||||
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
|
||||
- Ollama: Use the [Ollama documentation](https://docs.ollama.com/) to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
|
||||
|
||||
2. Add the OpenRAG package to your project:
|
||||
```bash
|
||||
uv add openrag
|
||||
```
|
||||
You must have access to at least one language model and one embedding model.
|
||||
If your chosen provider offers both types, you can use the same provider for both models.
|
||||
If your provider offers only one type, such as Anthropic, you must select two providers.
|
||||
|
||||
To add a specific version:
|
||||
```bash
|
||||
uv add openrag==0.1.30
|
||||
```
|
||||
- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
|
||||
|
||||
If you downloaded the OpenRAG wheel to your local machine, install OpenRAG by specifying the path and name of the OpenRAG `.whl` file:
|
||||
## Install and start OpenRAG with uv
|
||||
|
||||
```bash
|
||||
uv add PATH/TO/openrag-VERSION-py3-none-any.whl
|
||||
```
|
||||
There are two ways to install OpenRAG with `uv`:
|
||||
|
||||
* [**`uv add`** (Recommended)](#uv-add): Install OpenRAG as a managed dependency in a new or existing `uv` Python project.
|
||||
This is recommended because it adds OpenRAG to your `pyproject.toml` and lockfile for better management of dependencies and the virtual environment.
|
||||
|
||||
* [**`uv pip install`**](#uv-pip-install): Use the [`uv pip` interface](https://docs.astral.sh/uv/pip/) to install OpenRAG into an existing Python project that uses `pip`, `pip-tools`, and `virtualenv` commands.
|
||||
|
||||
3. Start the OpenRAG TUI:
|
||||
```bash
|
||||
uv run openrag
|
||||
```
|
||||
If you encounter errors during installation, see [Troubleshoot OpenRAG](/support/troubleshoot).
|
||||
|
||||
### uv add {#uv-add}
|
||||
|
||||
## uv pip install
|
||||
1. Create a new `uv`-managed Python project:
|
||||
|
||||
Use `uv pip install` to install OpenRAG into an existing virtual environment that isn't managed by `uv`.
|
||||
```bash
|
||||
uv init PROJECT_NAME
|
||||
```
|
||||
|
||||
:::tip
|
||||
For new projects, `uv add` is recommended as it manages dependencies in your project's lockfile.
|
||||
:::
|
||||
2. Change into your new project directory:
|
||||
|
||||
1. Activate your virtual environment.
|
||||
```bash
|
||||
cd PROJECT_NAME
|
||||
```
|
||||
|
||||
2. Install OpenRAG:
|
||||
```bash
|
||||
uv pip install openrag
|
||||
```
|
||||
Because `uv` manages the virtual environment for you, you won't see a `(venv)` prompt.
|
||||
`uv` commands automatically use the project's virtual environment.
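If you want to confirm that `uv` is resolving commands against the project's environment rather than a global interpreter, one optional check is:

```bash
# Prints the interpreter uv uses for this project; it should point into the project's .venv.
uv run python -c "import sys; print(sys.executable)"
```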
3. Run OpenRAG:
|
||||
```bash
|
||||
uv run openrag
|
||||
```
|
||||
If you encounter errors during installation, see [Troubleshoot OpenRAG](/support/troubleshoot).
|
||||
2. Add OpenRAG to your project:
|
||||
|
||||
* Add the latest version:
|
||||
|
||||
```bash
|
||||
uv add openrag
|
||||
```
|
||||
|
||||
* Add a specific version:
|
||||
|
||||
```bash
|
||||
uv add openrag==0.1.30
|
||||
```
|
||||
|
||||
* Add a local wheel:
|
||||
|
||||
```bash
|
||||
uv add path/to/openrag-VERSION-py3-none-any.whl
|
||||
```
|
||||
|
||||
For more options, see [Managing dependencies with `uv`](https://docs.astral.sh/uv/concepts/projects/dependencies/).
|
||||
|
||||
3. Start the OpenRAG TUI:
|
||||
|
||||
```bash
|
||||
uv run openrag
|
||||
```
|
||||
|
||||
### uv pip install {#uv-pip-install}
|
||||
|
||||
1. Activate your virtual environment.
|
||||
|
||||
2. Install the OpenRAG Python package:
|
||||
|
||||
```bash
|
||||
uv pip install openrag
|
||||
```
|
||||
|
||||
3. Start the OpenRAG TUI:
|
||||
|
||||
```bash
|
||||
uv run openrag
|
||||
```
|
||||
|
||||
## Set up OpenRAG with the TUI
|
||||
<!-- use partial? -->
|
||||
|
||||
When the Terminal User Interface (TUI) starts, you must complete the initial setup to configure OpenRAG.
|
||||
|
||||

|
||||
|
||||
## Next steps
|
||||
|
||||
* [Manage OpenRAG services](/manage-services)
|
||||
* [Chat](/chat)
|
||||
* [Upload documents](/ingestion)
|
||||
* Try some of OpenRAG's core features in the [quickstart](/quickstart#chat-with-documents).
|
||||
* Learn how to [manage OpenRAG services](/manage-services).
|
||||
* [Upload documents](/ingestion), and then use the [**Chat**](/chat) to explore your data.
@@ -11,37 +11,63 @@ You can use [`uvx`](https://docs.astral.sh/uv/guides/tools/#running-tools) to in
The [automatic installer script](/install) also uses `uvx` to install OpenRAG.
|
||||
:::
|
||||
|
||||
Depending on your project structure, `uvx` might not be suitable for production deployments.
|
||||
This installation method is best for testing OpenRAG by running it outside of a Python project.
|
||||
For other installation methods, see [Choose an installation method](/install-options).
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- For Microsoft Windows, you must use the Windows Subsystem for Linux (WSL).
|
||||
See [Install OpenRAG on Windows](/install-windows) before proceeding.
|
||||
|
||||
- Install [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.
|
||||
|
||||
- Install [uv](https://docs.astral.sh/uv/getting-started/installation/).
|
||||
|
||||
- Install [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/).
|
||||
|
||||
- Install [Podman Compose](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/).
|
||||
To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
|
||||
|
||||
- Gather the credentials and connection details for your preferred model providers.
|
||||
|
||||
- OpenAI: Create an [OpenAI API key](https://platform.openai.com/api-keys).
|
||||
- Anthropic language models: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference).
|
||||
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
|
||||
- Ollama: Use the [Ollama documentation](https://docs.ollama.com/) to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
|
||||
|
||||
You must have access to at least one language model and one embedding model.
|
||||
If your chosen provider offers both types, you can use the same provider for both models.
|
||||
If your provider offers only one type, such as Anthropic, you must select two providers.
|
||||
|
||||
- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
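If you choose Ollama as a local provider, a minimal setup sketch follows. It assumes Ollama is already installed, the model name is only an example, and a local server's default base URL is `http://localhost:11434`:

```bash
# Start the Ollama server (skip this if Ollama already runs as a background service),
# then pull a model to serve locally.
ollama serve &
ollama pull llama3.2
```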
## Install and run OpenRAG with uvx
|
||||
|
||||
1. Create a directory to store your OpenRAG configuration files and data, and then change to that directory:
|
||||
1. Create a directory to store your OpenRAG configuration files and data, and then change to that directory:
|
||||
|
||||
```bash
|
||||
mkdir openrag-workspace
|
||||
cd openrag-workspace
|
||||
```
|
||||
```bash
|
||||
mkdir openrag-workspace
|
||||
cd openrag-workspace
|
||||
```
|
||||
|
||||
:::tip
|
||||
If you want to use a pre-populated [`.env`](/reference/configuration) file for OpenRAG, copy it to this directory before invoking OpenRAG.
|
||||
:::
|
||||
2. Optional: If you want to use a pre-populated [`.env`](/reference/configuration) file for OpenRAG, copy it to this directory before invoking OpenRAG.
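For example, assuming you already keep a populated OpenRAG `.env` elsewhere (the source path below is a placeholder):

```bash
# Copy an existing .env into the workspace before starting the TUI.
cp /path/to/your/.env ./.env
```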
2. Invoke OpenRAG:
|
||||
```bash
|
||||
uvx openrag
|
||||
```
|
||||
3. Invoke OpenRAG:
|
||||
|
||||
You can invoke a specific version using any of the [`uvx` version specifiers](https://docs.astral.sh/uv/guides/tools/#requesting-specific-versions), such as `--from`:
|
||||
```bash
|
||||
uvx openrag
|
||||
```
|
||||
|
||||
```bash
|
||||
uvx --from openrag==0.1.30 openrag
|
||||
```
|
||||
You can invoke a specific version using any of the [`uvx` version specifiers](https://docs.astral.sh/uv/guides/tools/#requesting-specific-versions), such as `--from`:
|
||||
|
||||
Invoking OpenRAG with `uvx openrag` creates a cached, ephemeral environment for the TUI in your local `uv` cache.
|
||||
By invoking OpenRAG in a specific directory, your OpenRAG configuration files and data are stored separately from the `uv` cache.
|
||||
Clearing the `uv` cache doesn't remove your entire OpenRAG installation.
|
||||
After clearing the cache, you can re-invoke OpenRAG (`uvx openrag`) to restart the TUI with your preserved configuration and data.
|
||||
```bash
|
||||
uvx --from openrag==0.1.30 openrag
|
||||
```
|
||||
|
||||
Invoking OpenRAG with `uvx openrag` creates a cached, ephemeral environment for the TUI in your local `uv` cache.
|
||||
By invoking OpenRAG in a specific directory, your OpenRAG configuration files and data are stored separately from the `uv` cache.
|
||||
Clearing the `uv` cache doesn't remove your entire OpenRAG installation.
|
||||
After clearing the cache, you can re-invoke OpenRAG (`uvx openrag`) to restart the TUI with your preserved configuration and data.
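For example, a sketch of clearing the cache and restarting the TUI from the same workspace directory:

```bash
# Remove cached environments (your workspace files and .env are untouched),
# then re-create the ephemeral TUI environment on the next invocation.
uv cache clean
uvx openrag
```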
If you encounter errors during installation, see [Troubleshoot OpenRAG](/support/troubleshoot).
@@ -56,6 +82,6 @@ If it detects a `.env` file in the OpenRAG installation directory, it sources an
## Next steps
|
||||
|
||||
* [Manage OpenRAG services](/manage-services)
|
||||
* [Chat](/chat)
|
||||
* [Upload documents](/ingestion)
|
||||
* Try some of OpenRAG's core features in the [quickstart](/quickstart#chat-with-documents).
|
||||
* Learn how to [manage OpenRAG services](/manage-services).
|
||||
* [Upload documents](/ingestion), and then use the [**Chat**](/chat) to explore your data.
@@ -3,25 +3,38 @@ title: Install OpenRAG on Microsoft Windows
slug: /install-windows
|
||||
---
|
||||
|
||||
import PartialWsl from '@site/docs/_partial-wsl-install.mdx';
|
||||
|
||||
If you're using Windows, you must install OpenRAG within the Windows Subsystem for Linux (WSL).
|
||||
|
||||
For guided configuration and simplified service management, install OpenRAG with services managed by the [Terminal User Interface (TUI)](/tui).
|
||||
For self-managed services, deploy OpenRAG with Docker or Podman inside your WSL distribution.
|
||||
|
||||
## Nested virtualization isn't supported
|
||||
|
||||
## Prepare your WSL environment
|
||||
OpenRAG isn't compatible with nested virtualization, which can cause networking issues.
|
||||
Don't install OpenRAG on a WSL distribution that is installed inside a Windows VM.
|
||||
Instead, install OpenRAG on your base OS or a non-nested Linux VM.
|
||||
|
||||
<PartialWsl />
|
||||
## Install OpenRAG in the WSL
|
||||
|
||||
## Install OpenRAG within the WSL
|
||||
1. [Install WSL](https://learn.microsoft.com/en-us/windows/wsl/install) with an Ubuntu distribution using WSL 2:
|
||||
|
||||
<!-- Use any install method to install OpenRAG within your WSL distribution. -->
|
||||
```powershell
|
||||
wsl --install -d Ubuntu
|
||||
```
|
||||
|
||||
## Next steps
|
||||
For new installations, the `wsl --install` command uses WSL 2 and Ubuntu by default.
|
||||
|
||||
* [Manage OpenRAG services](/manage-services)
|
||||
* [Chat](/chat)
|
||||
* [Upload documents](/ingestion)
|
||||
For existing WSL installations, you can [change the distribution](https://learn.microsoft.com/en-us/windows/wsl/install#change-the-default-linux-distribution-installed) and [check the WSL version](https://learn.microsoft.com/en-us/windows/wsl/install#upgrade-version-from-wsl-1-to-wsl-2).
|
||||
|
||||
2. [Start your WSL Ubuntu distribution](https://learn.microsoft.com/en-us/windows/wsl/install#ways-to-run-multiple-linux-distributions-with-wsl) if it doesn't start automatically.
|
||||
|
||||
3. [Set up a username and password for your WSL distribution](https://learn.microsoft.com/en-us/windows/wsl/setup/environment#set-up-your-linux-username-and-password).
|
||||
|
||||
4. [Install Docker Desktop for Windows with WSL 2](https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-containers). When you reach the Docker Desktop **WSL integration** settings, make sure your Ubuntu distribution is enabled, and then click **Apply & Restart** to enable Docker support in WSL.
|
||||
|
||||
The Docker Desktop WSL integration makes Docker available within your WSL distribution.
|
||||
You don't need to install Docker or Podman separately in your WSL distribution before you install OpenRAG.
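To verify the integration from inside your Ubuntu distribution, you can run, for example:

```bash
# Both commands should report versions when the Docker Desktop WSL integration is active.
docker --version
docker compose version
```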
5. Install and run OpenRAG from within your WSL Ubuntu distribution.
|
||||
You can install OpenRAG in your WSL distribution using any of the [OpenRAG installation methods](/install-options).
|
||||
|
||||
## Troubleshoot OpenRAG in WSL
|
||||
|
||||
If you encounter issues with port forwarding or the Windows Firewall, you might need to adjust the [Hyper-V firewall settings](https://learn.microsoft.com/en-us/windows/security/operating-system-security/network-security/windows-firewall/hyper-v-firewall) to allow communication between your WSL distribution and the Windows host. For more troubleshooting advice for networking issues, see [Troubleshooting WSL common issues](https://learn.microsoft.com/en-us/windows/wsl/troubleshooting#common-issues).
@@ -13,44 +13,34 @@ For a fully guided installation and preview of OpenRAG's core features, try the
For guided configuration and simplified service management, install OpenRAG with services managed by the [Terminal User Interface (TUI)](/tui).
|
||||
|
||||
The installer script detects and installs any missing dependencies, and then installs OpenRAG with [`uvx`](https://docs.astral.sh/uv/guides/tools/#running-tools) in the directory where you run the script.
|
||||
The installer script installs `uv`, Docker or Podman, Docker Compose, and OpenRAG.
|
||||
|
||||
This installation method is best for running OpenRAG outside of a Python project.
|
||||
This installation method is best for testing OpenRAG by running it outside of a Python project.
|
||||
For other installation methods, see [Choose an installation method](/install-options).
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- All OpenRAG installations require [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.
|
||||
- For Microsoft Windows, you must use the Windows Subsystem for Linux (WSL).
|
||||
See [Install OpenRAG on Windows](/install-windows) before proceeding.
|
||||
|
||||
- If you aren't using the automatic installer script, install the following:
|
||||
- Install [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.
|
||||
|
||||
- [uv](https://docs.astral.sh/uv/getting-started/installation/).
|
||||
- [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/).
|
||||
- [`podman-compose`](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/). To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
|
||||
|
||||
- To run OpenRAG on Microsoft Windows, you must use the Windows Subsystem for Linux (WSL).
|
||||
See [Install OpenRAG on Windows](/install-windows).
|
||||
|
||||
- Prepare model providers and credentials.
|
||||
|
||||
During [application onboarding](#application-onboarding), you must select language model and embedding model providers.
|
||||
If your chosen provider offers both types, you can use the same provider for both selections.
|
||||
If your provider offers only one type, such as Anthropic, you must select two providers.
|
||||
|
||||
Gather the credentials and connection details for your chosen model providers before starting onboarding:
|
||||
- Gather the credentials and connection details for your preferred model providers.
|
||||
|
||||
- OpenAI: Create an [OpenAI API key](https://platform.openai.com/api-keys).
|
||||
- Anthropic language models: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference).
|
||||
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
|
||||
- Ollama: Use the [Ollama documentation](https://docs.ollama.com/) to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
|
||||
|
||||
You must have access to at least one language model and one embedding model.
|
||||
If your chosen provider offers both types, you can use the same provider for both models.
|
||||
If your provider offers only one type, such as Anthropic, you must select two providers.
|
||||
|
||||
- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
|
||||
|
||||
## Install OpenRAG {#install}
|
||||
## Run the installer script {#install}
|
||||
|
||||
The script detects and installs the uv, Docker/Podman, and Docker Compose prerequisites, and then installs and runs OpenRAG with `uvx`.
|
||||
|
||||
1. Create a directory to store your OpenRAG configuration file, and then change to that directory:
|
||||
1. Create a directory to store your OpenRAG configuration files and data, and then change to that directory:
|
||||
|
||||
```bash
|
||||
mkdir openrag-workspace
@@ -71,6 +61,8 @@ The script detects and installs uv, Docker/Podman, and Docker Compose prerequisi
```
|
||||
:::
|
||||
|
||||
The installer script installs OpenRAG with [`uvx`](https://docs.astral.sh/uv/guides/tools/#running-tools) in the directory where you run the script.
|
||||
|
||||
3. Wait while the installer script prepares your environment and installs OpenRAG.
|
||||
You might be prompted to install certain dependencies if they aren't already present in your environment.
|
||||
@@ -79,9 +71,8 @@ Once the environment is ready, the OpenRAG Terminal User Interface (TUI) starts.

|
||||
|
||||
The installer script uses `uvx`, which creates a cached, ephemeral environment in your local `uv` cache.
|
||||
Because you run the script in a specific directory, your OpenRAG configuration files and data are stored separately from the `uv` cache.
|
||||
Clearing the cache doesn't delete your entire OpenRAG installation, only the TUI environment.
|
||||
Because the installer script uses `uvx`, it creates a cached, ephemeral environment in your local `uv` cache, and your OpenRAG configuration files and data are stored separately from the `uv` cache.
|
||||
Clearing the cache doesn't delete your entire OpenRAG installation, only the temporary TUI environment.
|
||||
After clearing the cache, run `uvx openrag` to [access the TUI](/tui) and continue with your preserved configuration and data.
|
||||
|
||||
If you encounter errors during installation, see [Troubleshoot OpenRAG](/support/troubleshoot).
@@ -218,6 +209,6 @@ The first time you start the OpenRAG application, you must complete application
## Next steps
|
||||
|
||||
* [Manage OpenRAG services](/manage-services)
|
||||
* [Chat](/chat)
|
||||
* [Upload documents](/ingestion)
|
||||
* Try some of OpenRAG's core features in the [quickstart](/quickstart#chat-with-documents).
|
||||
* Learn how to [manage OpenRAG services](/manage-services).
|
||||
* [Upload documents](/ingestion), and then use the [**Chat**](/chat) to explore your data.
@@ -6,29 +6,20 @@ slug: /quickstart
import Icon from "@site/src/components/icon/icon";
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
import PartialWsl from '@site/docs/_partial-wsl-install.mdx';
|
||||
import PartialIntegrateChat from '@site/docs/_partial-integrate-chat.mdx';
|
||||
|
||||
Use this quickstart to install OpenRAG, and then try some of OpenRAG's core features.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
This quickstart requires the following:
|
||||
|
||||
- An [OpenAI API key](https://platform.openai.com/api-keys).
|
||||
- Get an [OpenAI API key](https://platform.openai.com/api-keys).
|
||||
This quickstart uses OpenAI for simplicity.
|
||||
For other providers, see the complete [installation guide](/install).
|
||||
For other providers, see the other [installation methods](/install-options).
|
||||
|
||||
- [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.
|
||||
- Install [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.
|
||||
|
||||
- Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL).
|
||||
|
||||
<details>
|
||||
<summary>Install WSL for OpenRAG</summary>
|
||||
|
||||
<PartialWsl />
|
||||
|
||||
</details>
|
||||
- For Microsoft Windows, you must use the Windows Subsystem for Linux (WSL).
|
||||
See [Install OpenRAG on Windows](/install-windows) before proceeding.
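To confirm that the Python prerequisite is met, assuming `python3` is on your `PATH`, you can run:

```bash
# Should report Python 3.13 or later.
python3 --version
```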
## Install OpenRAG