From 69a85d3e4815b9b19538ab79a5205f04eb255c48 Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Mon, 29 Sep 2025 14:36:01 -0400
Subject: [PATCH 1/4] move-docker-to-its-own-page
---
docs/docs/get-started/docker.mdx | 98 ++++++++++++++++++++++++--------
1 file changed, 73 insertions(+), 25 deletions(-)
diff --git a/docs/docs/get-started/docker.mdx b/docs/docs/get-started/docker.mdx
index a394bc69..84f0fca6 100644
--- a/docs/docs/get-started/docker.mdx
+++ b/docs/docs/get-started/docker.mdx
@@ -1,40 +1,88 @@
---
-title: Docker Deployment
+title: Docker deployment
slug: /get-started/docker
---
-# Docker Deployment
+There are two Docker Compose files.
+They deploy the same applications and containers, but target different environments.
-## Standard Deployment
+- [`docker-compose.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose.yml) is an OpenRAG deployment with GPU support for accelerated AI processing.
-```bash
-# Build and start all services
-docker compose build
-docker compose up -d
-```
+- [`docker-compose-cpu.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose-cpu.yml) is a CPU-only version of OpenRAG for systems without GPU support. Use this Docker Compose file in environments where GPU drivers aren't available.
-## CPU-Only Deployment
+To install OpenRAG with Docker Compose:
-For environments without GPU support:
+1. Clone the OpenRAG repository.
+ ```bash
+ git clone https://github.com/langflow-ai/openrag.git
+ cd openrag
+ ```
-```bash
-docker compose -f docker-compose-cpu.yml up -d
-```
+2. Copy the example `.env` file that is included in the repository root.
+ The example file includes all environment variables with comments to guide you in finding and setting their values.
+ ```bash
+ cp .env.example .env
+ ```
-## Force Rebuild
+ Alternatively, create a new `.env` file in the repository root.
+   ```bash
+ touch .env
+ ```
-If you need to reset state or rebuild everything:
+3. Set environment variables. The Docker Compose files are populated with values from your `.env` file, so you must set the following values:
+
+ ```bash
+ OPENSEARCH_PASSWORD=your_secure_password
+ OPENAI_API_KEY=your_openai_api_key
+
+ LANGFLOW_SUPERUSER=admin
+ LANGFLOW_SUPERUSER_PASSWORD=your_langflow_password
+ LANGFLOW_SECRET_KEY=your_secret_key
+ ```
+ For more information on configuring OpenRAG with environment variables, see [Environment variables](/configure/configuration).
+ For additional configuration values, including `config.yaml`, see [Configuration](/configure/configuration).
+
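+   The secret values above can be generated with any random-string tool. For example, one possible approach using OpenSSL (any sufficiently random strings work):
+   ```bash
+   # Generate a random 32-byte, base64-encoded string to use as a secret
+   openssl rand -base64 32
+   ```
+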
+4. Deploy OpenRAG with Docker Compose based on your deployment type.
+
+ For GPU-enabled systems, run the following command:
+ ```bash
+ docker compose up -d
+ ```
+
+ For CPU-only systems, run the following command:
+ ```bash
+ docker compose -f docker-compose-cpu.yml up -d
+ ```
+
+ The OpenRAG Docker Compose file starts five containers:
+ | Container Name | Default Address | Purpose |
+ |---|---|---|
+ | OpenRAG Backend | http://localhost:8000 | FastAPI server and core functionality. |
+ | OpenRAG Frontend | http://localhost:3000 | React web interface for users. |
+ | Langflow | http://localhost:7860 | AI workflow engine and flow management. |
+ | OpenSearch | http://localhost:9200 | Vector database for document storage. |
+ | OpenSearch Dashboards | http://localhost:5601 | Database administration interface. |
+
+5. Verify the installation by confirming that all services are running.
+
+ ```bash
+ docker compose ps
+ ```
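+
+   To spot-check a single service, you can also send a request to its address from the table above. This is a quick check that the port is listening; the exact response depends on the service.
+   ```bash
+   curl http://localhost:3000
+   ```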
+
+ You can now access the application at:
+
+ - **Frontend**: http://localhost:3000
+ - **Backend API**: http://localhost:8000
+ - **Langflow**: http://localhost:7860
+
+Continue with the [Quickstart](/quickstart).
+
+## Rebuild all Docker containers
+
+If you need to reset state and rebuild all of your containers, run the following command.
+Your OpenSearch and Langflow databases will be lost.
+Documents stored in the `./documents` directory will persist, since the directory is mounted as a volume in the OpenRAG backend container.
```bash
docker compose up --build --force-recreate --remove-orphans
```
-
-## Service URLs
-
-After deployment, services are available at:
-
-- Frontend: http://localhost:3000
-- Backend API: http://localhost:8000
-- Langflow: http://localhost:7860
-- OpenSearch: http://localhost:9200
-- OpenSearch Dashboards: http://localhost:5601
From 030b73c6abc5b0a0ace65367ce5db001e7c53b6b Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Mon, 29 Sep 2025 14:37:01 -0400
Subject: [PATCH 2/4] links
---
docs/docs/get-started/install.mdx | 122 +-----------------------------
1 file changed, 4 insertions(+), 118 deletions(-)
diff --git a/docs/docs/get-started/install.mdx b/docs/docs/get-started/install.mdx
index dcb5c5f1..a9192cf4 100644
--- a/docs/docs/get-started/install.mdx
+++ b/docs/docs/get-started/install.mdx
@@ -10,7 +10,7 @@ OpenRAG can be installed in multiple ways:
* [**Python wheel**](#install-python-wheel): Install the OpenRAG Python wheel and use the [OpenRAG Terminal User Interface (TUI)](/get-started/tui) to install, run, and configure your OpenRAG deployment without running Docker commands.
-* [**Docker Compose**](#install-and-run-docker): Clone the OpenRAG repository and deploy OpenRAG with Docker Compose, including all services and dependencies.
+* [**Docker Compose**](/docker): Clone the OpenRAG repository and deploy OpenRAG with Docker Compose, including all services and dependencies.
## Prerequisites
@@ -79,46 +79,8 @@ For more information on virtual environments, see [uv](https://docs.astral.sh/uv
Command completed successfully
```
-7. To open the OpenRAG application, click **Open App**, press 6, or navigate to `http://localhost:3000`.
- The application opens.
-8. Select your language model and embedding model provider, and complete the required fields.
- **Your provider can only be selected once, and you must use the same provider for your language model and embedding model.**
- The language model can be changed, but the embeddings model cannot be changed.
- To change your provider selection, you must restart OpenRAG and delete the `config.yml` file.
-
-
-
- 9. If you already entered a value for `OPENAI_API_KEY` in the TUI in Step 5, enable **Get API key from environment variable**.
- 10. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
- 11. To load 2 sample PDFs, enable **Sample dataset**.
- This is recommended, but not required.
- 12. Click **Complete**.
-
-
-
- 9. Complete the fields for **watsonx.ai API Endpoint**, **IBM API key**, and **IBM Project ID**.
- These values are found in your IBM watsonx deployment.
- 10. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
- 11. To load 2 sample PDFs, enable **Sample dataset**.
- This is recommended, but not required.
- 12. Click **Complete**.
-
-
-
- 9. Enter your Ollama server's base URL address.
- The default Ollama server address is `http://localhost:11434`.
- Since OpenRAG is running in a container, you may need to change `localhost` to access services outside of the container. For example, change `http://localhost:11434` to `http://host.docker.internal:11434` to connect to Ollama.
- OpenRAG automatically sends a test connection to your Ollama server to confirm connectivity.
- 10. Select the **Embedding Model** and **Language Model** your Ollama server is running.
- OpenRAG automatically lists the available models from your Ollama server.
- 11. To load 2 sample PDFs, enable **Sample dataset**.
- This is recommended, but not required.
- 12. Click **Complete**.
-
-
-
-
-13. Continue with the [Quickstart](/quickstart).
+7. To open the OpenRAG application, click **Open App** or press 6.
+8. Continue with the [Quickstart](/quickstart).
### Advanced Setup {#advanced-setup}
@@ -138,80 +100,4 @@ The `LANGFLOW_PUBLIC_URL` controls where the Langflow web interface can be acces
The `WEBHOOK_BASE_URL` controls where the endpoint for `/connectors/CONNECTOR_TYPE/webhook` will be available.
This connection enables real-time document synchronization with external services.
-For example, for Google Drive file synchronization the webhook URL is `/connectors/google_drive/webhook`.
-
-## Docker {#install-and-run-docker}
-
-There are two different Docker Compose files.
-They deploy the same applications and containers, but to different environments.
-
-- [`docker-compose.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose.yml) is an OpenRAG deployment with GPU support for accelerated AI processing.
-
-- [`docker-compose-cpu.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose-cpu.yml) is a CPU-only version of OpenRAG for systems without GPU support. Use this Docker compose file for environments where GPU drivers aren't available.
-
-To install OpenRAG with Docker Compose:
-
-1. Clone the OpenRAG repository.
- ```bash
- git clone https://github.com/langflow-ai/openrag.git
- cd openrag
- ```
-
-2. Copy the example `.env` file that is included in the repository root.
- The example file includes all environment variables with comments to guide you in finding and setting their values.
- ```bash
- cp .env.example .env
- ```
-
- Alternatively, create a new `.env` file in the repository root.
- ```
- touch .env
- ```
-
-3. Set environment variables. The Docker Compose files are populated with values from your `.env`, so the following values are **required** to be set:
-
- ```bash
- OPENSEARCH_PASSWORD=your_secure_password
- OPENAI_API_KEY=your_openai_api_key
-
- LANGFLOW_SUPERUSER=admin
- LANGFLOW_SUPERUSER_PASSWORD=your_langflow_password
- LANGFLOW_SECRET_KEY=your_secret_key
- ```
- For more information on configuring OpenRAG with environment variables, see [Environment variables](/configure/configuration).
- For additional configuration values, including `config.yaml`, see [Configuration](/configure/configuration).
-
-4. Deploy OpenRAG with Docker Compose based on your deployment type.
-
- For GPU-enabled systems, run the following command:
- ```bash
- docker compose up -d
- ```
-
- For CPU-only systems, run the following command:
- ```bash
- docker compose -f docker-compose-cpu.yml up -d
- ```
-
- The OpenRAG Docker Compose file starts five containers:
- | Container Name | Default Address | Purpose |
- |---|---|---|
- | OpenRAG Backend | http://localhost:8000 | FastAPI server and core functionality. |
- | OpenRAG Frontend | http://localhost:3000 | React web interface for users. |
- | Langflow | http://localhost:7860 | AI workflow engine and flow management. |
- | OpenSearch | http://localhost:9200 | Vector database for document storage. |
- | OpenSearch Dashboards | http://localhost:5601 | Database administration interface. |
-
-5. Verify installation by confirming all services are running.
-
- ```bash
- docker compose ps
- ```
-
- You can now access the application at:
-
- - **Frontend**: http://localhost:3000
- - **Backend API**: http://localhost:8000
- - **Langflow**: http://localhost:7860
-
-Continue with the Quickstart.
\ No newline at end of file
+For example, for Google Drive file synchronization the webhook URL is `/connectors/google_drive/webhook`.
\ No newline at end of file
From d88730acb3234ad7d2fe3ca9846c00c93871fc36 Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Mon, 29 Sep 2025 14:37:47 -0400
Subject: [PATCH 3/4] link
---
docs/docs/get-started/install.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/docs/get-started/install.mdx b/docs/docs/get-started/install.mdx
index a9192cf4..e78f4df5 100644
--- a/docs/docs/get-started/install.mdx
+++ b/docs/docs/get-started/install.mdx
@@ -10,7 +10,7 @@ OpenRAG can be installed in multiple ways:
* [**Python wheel**](#install-python-wheel): Install the OpenRAG Python wheel and use the [OpenRAG Terminal User Interface (TUI)](/get-started/tui) to install, run, and configure your OpenRAG deployment without running Docker commands.
-* [**Docker Compose**](/docker): Clone the OpenRAG repository and deploy OpenRAG with Docker Compose, including all services and dependencies.
+* [**Docker Compose**](/get-started/docker): Clone the OpenRAG repository and deploy OpenRAG with Docker Compose, including all services and dependencies.
## Prerequisites
From a88c6a9ed5dfce9468db21d1a42545d143fbd25c Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Mon, 29 Sep 2025 14:44:34 -0400
Subject: [PATCH 4/4] revert-onboarding
---
docs/docs/get-started/install.mdx | 42 +++++++++++++++++++++++++++++--
1 file changed, 40 insertions(+), 2 deletions(-)
diff --git a/docs/docs/get-started/install.mdx b/docs/docs/get-started/install.mdx
index e78f4df5..27cafb44 100644
--- a/docs/docs/get-started/install.mdx
+++ b/docs/docs/get-started/install.mdx
@@ -79,8 +79,46 @@ For more information on virtual environments, see [uv](https://docs.astral.sh/uv
Command completed successfully
```
-7. To open the OpenRAG application, click **Open App** or press 6.
-8. Continue with the [Quickstart](/quickstart).
+7. To open the OpenRAG application, click **Open App**, press 6, or navigate to `http://localhost:3000`.
+ The application opens.
+8. Select your language model and embedding model provider, and complete the required fields.
+ **Your provider can only be selected once, and you must use the same provider for your language model and embedding model.**
+   The language model can be changed, but the embedding model cannot.
+ To change your provider selection, you must restart OpenRAG and delete the `config.yml` file.
+
+
+
+ 9. If you already entered a value for `OPENAI_API_KEY` in the TUI in Step 5, enable **Get API key from environment variable**.
+ 10. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
+ 11. To load 2 sample PDFs, enable **Sample dataset**.
+ This is recommended, but not required.
+ 12. Click **Complete**.
+
+
+
+ 9. Complete the fields for **watsonx.ai API Endpoint**, **IBM API key**, and **IBM Project ID**.
+ These values are found in your IBM watsonx deployment.
+ 10. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
+ 11. To load 2 sample PDFs, enable **Sample dataset**.
+ This is recommended, but not required.
+ 12. Click **Complete**.
+
+
+
+   9. Enter your Ollama server's base URL.
+ The default Ollama server address is `http://localhost:11434`.
+ Since OpenRAG is running in a container, you may need to change `localhost` to access services outside of the container. For example, change `http://localhost:11434` to `http://host.docker.internal:11434` to connect to Ollama.
+ OpenRAG automatically sends a test connection to your Ollama server to confirm connectivity.
+ 10. Select the **Embedding Model** and **Language Model** your Ollama server is running.
+ OpenRAG automatically lists the available models from your Ollama server.
+ 11. To load 2 sample PDFs, enable **Sample dataset**.
+ This is recommended, but not required.
+ 12. Click **Complete**.
+
+
+
+
+13. Continue with the [Quickstart](/quickstart).
### Advanced Setup {#advanced-setup}