Install OpenRAG containers

OpenRAG has two Docker Compose files. Both files deploy the same applications and containers locally, but they are for different environments:

- docker-compose.yml is an OpenRAG deployment with GPU support for accelerated AI processing. This Docker Compose file requires an NVIDIA GPU with CUDA support.
Prerequisites

Install the following:

- Python version 3.10 to 3.13
- uv
- Podman (recommended) or Docker
- podman-compose or Docker Compose. To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
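The Podman aliasing mentioned above needs only two shell aliases; a minimal sketch for bash (add the lines to ~/.bashrc to make them persistent):

```shell
# Route the Docker CLI and Docker Compose to their Podman equivalents.
alias docker='podman'
alias docker-compose='podman-compose'

# List the aliases to confirm they are defined.
alias docker docker-compose
```

With these in place, any docker or docker-compose command in the instructions below runs against Podman instead.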
Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL). To install WSL for OpenRAG:

1. Install WSL with the Ubuntu distribution using WSL 2:

   wsl --install -d Ubuntu

   For new installations, the wsl --install command uses WSL 2 and Ubuntu by default. For existing WSL installations, you can change the distribution and check the WSL version.

2. Start your WSL Ubuntu distribution if it doesn't start automatically.

3. Install Docker Desktop for Windows with WSL 2. When you reach the Docker Desktop WSL integration settings, make sure your Ubuntu distribution is enabled, and then click Apply & Restart to enable Docker support in WSL.

4. Install and run OpenRAG from within your WSL Ubuntu distribution.

If you encounter issues with port forwarding or the Windows Firewall, you might need to adjust the Hyper-V firewall settings to allow communication between your WSL distribution and the Windows host. For more troubleshooting advice for networking issues, see Troubleshooting WSL common issues.
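For an existing WSL installation, the checks mentioned above can be done with the standard wsl.exe flags; a sketch, run from Windows PowerShell or cmd:

```shell
# List installed distributions with their WSL versions.
wsl -l -v

# If Ubuntu is on WSL 1, convert it to WSL 2.
wsl --set-version Ubuntu 2

# Make WSL 2 the default for any future distributions.
wsl --set-default-version 2
```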
Prepare model providers and credentials.

During Application Onboarding, you must select language model and embedding model providers. If your chosen provider offers both types, you can use the same provider for both selections. If your provider offers only one type, such as Anthropic, you must select two providers.

Gather the credentials and connection details for your chosen model providers before starting onboarding:

- OpenAI: Create an OpenAI API key.
- Anthropic language models: Create an Anthropic API key.
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
- Ollama: Use the Ollama documentation to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL.
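One way to stage these credentials before onboarding is to keep them in environment variables. The sketch below is illustrative: the variable names are not OpenRAG-defined settings (OPENAI_API_KEY and ANTHROPIC_API_KEY follow each provider's usual convention), and every value is a placeholder:

```shell
# Placeholder values; substitute real credentials from each provider's console.
export OPENAI_API_KEY="sk-..."                          # OpenAI
export ANTHROPIC_API_KEY="sk-ant-..."                   # Anthropic (language models only)
export WATSONX_URL="https://us-south.ml.cloud.ibm.com"  # watsonx.ai API endpoint
export WATSONX_PROJECT_ID="your-project-id"
export WATSONX_APIKEY="your-ibm-api-key"
export OLLAMA_BASE_URL="http://localhost:11434"         # Ollama's default local port

# Confirm the values are visible to child processes.
env | grep -E 'OPENAI|ANTHROPIC|WATSONX|OLLAMA'
```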
Optional: Install GPU support with an NVIDIA GPU, CUDA support, and compatible NVIDIA drivers on the OpenRAG host machine. This is required to use the GPU-accelerated Docker Compose file. If you choose not to use GPU support, you must use the CPU-only Docker Compose file instead.
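Before choosing the GPU-accelerated Compose file, the driver and container runtime can be sanity-checked; a sketch, assuming the NVIDIA Container Toolkit is installed alongside the driver:

```shell
# Driver check: should report the GPU model and supported CUDA version.
nvidia-smi

# Container check: run the same tool inside a CUDA base image.
# Requires the NVIDIA Container Toolkit (or CDI device support under Podman).
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If either command fails, use the CPU-only deployment instead.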
Install OpenRAG with Docker Compose
To install OpenRAG with Docker Compose, do the following:
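As a sketch of the Compose workflow, a typical bring-up with the GPU file looks like the following; only the docker-compose.yml filename comes from this page, and the commands assume Docker Compose v2 (or the Podman aliases described in the prerequisites):

```shell
# Start the OpenRAG stack in the background using the GPU Compose file.
docker compose -f docker-compose.yml up -d

# Check container status and follow the logs.
docker compose -f docker-compose.yml ps
docker compose -f docker-compose.yml logs -f
```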
If you prefer running Podman or Docker containers and manually editing .env files, see Install OpenRAG Containers.
Prerequisites

All OpenRAG installations require Python version 3.10 to 3.13.

If you aren't using the automatic installer script, install the following:

- uv
- Podman (recommended) or Docker
- podman-compose or Docker Compose. To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
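If you install the prerequisites by hand, uv ships a standalone installer script; a sketch using the URL from the uv documentation:

```shell
# Install uv with its official standalone installer.
curl -LsSf https://astral.sh/uv/install.sh | sh

# Verify the binary is on PATH.
uv --version
```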
Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL).
+
-
+
Prepare model providers and credentials.
+During Application Onboarding, you must select language model and embedding model providers. +If your chosen provider offers both types, you can use the same provider for both selections. +If your provider offers only one type, such as Anthropic, you must select two providers.
+Gather the credentials and connection details for your chosen model providers before starting onboarding:
+-
+
- OpenAI: Create an OpenAI API key. +
- Anthropic language models: Create an Anthropic API key. +
- IBM watsonx.ai: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment. +
- Ollama: Use the Ollama documentation to set up your Ollama instance locally, in the cloud, or on a remote server, and then get your Ollama server's base URL. +
+ -
Optional: Install GPU support with an NVIDIA GPU, CUDA support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
Install OpenRAG
Choose an installation method based on your needs:

- For new users, the automatic installer script detects and installs prerequisites and then runs OpenRAG.
Quickstart

Use this quickstart to install OpenRAG, and then try some of OpenRAG's core features.

Prerequisites

This quickstart requires the following:

- An OpenAI API key. This quickstart uses OpenAI for simplicity. For other providers, see the complete installation guide.
- Python version 3.10 to 3.13.
- Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL).
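The Python requirement above can be verified from a shell before installing; a minimal check, assuming python3 is on PATH:

```shell
# Print the interpreter version and whether it falls in the supported 3.10-3.13 range.
python3 -c 'import sys; print(sys.version.split()[0], "supported:", (3, 10) <= sys.version_info[:2] <= (3, 13))'
```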
Install OpenRAG
For this quickstart, install OpenRAG with the automatic installer script and basic setup: