Merge pull request #778 from langflow-ai/docs-issue-765

Docs: GPU mode tips
April I. Murphy 2026-01-15 07:27:06 -08:00 committed by GitHub
commit 211184ca9d
4 changed files with 29 additions and 11 deletions


@@ -0,0 +1,5 @@
GPU acceleration isn't required for most use cases.
OpenRAG's CPU-only deployment doesn't prevent you from using GPU acceleration in external services, such as Ollama servers.
GPU acceleration is required only for specific use cases, typically those involving customized ingestion flows or ingestion logic.
For example, you might write alternate ingest logic in OpenRAG that uses GPUs directly in the container, or customize the ingestion flows to use Langflow's Docling component with GPU acceleration instead of OpenRAG's `docling serve` service.
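To confirm that GPU work is happening in an external service rather than in OpenRAG itself, you can watch GPU utilization on that service's host while it handles a request. The following is a minimal sketch, assuming an external Ollama server on a host with the NVIDIA utilities installed; the model name is only an example.

```bash title="Ollama host"
# In one terminal, watch GPU utilization while a request runs.
# Rising utilization here confirms the external service is using the GPU.
watch -n 1 nvidia-smi

# In a second terminal, send a test request to the local Ollama server.
# llama3.2 is an example model name; use any model you have pulled.
ollama run llama3.2 "Summarize retrieval-augmented generation in one sentence."
```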


@@ -15,4 +15,6 @@ If a provider offers only one type, you must select two providers.
<PartialOllamaModels />
:::
* Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
* Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine.
If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment that is suitable for most use cases.
The default CPU-only deployment doesn't prevent you from using GPU acceleration in external services, such as Ollama servers.
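To check the GPU prerequisite before you choose a deployment mode, you can query the NVIDIA driver on the OpenRAG host. This is a minimal check, assuming the NVIDIA utilities are installed:

```bash title="GPU check"
# Verify that the NVIDIA driver is installed and the GPU is visible.
# The output header also reports the driver and supported CUDA versions.
nvidia-smi
```

If the command isn't found or reports no devices, use the CPU-only deployment.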


@@ -12,6 +12,7 @@ import PartialPrereqWindows from '@site/docs/_partial-prereq-windows.mdx';
import PartialPrereqPython from '@site/docs/_partial-prereq-python.mdx';
import PartialInstallNextSteps from '@site/docs/_partial-install-next-steps.mdx';
import PartialOllamaModels from '@site/docs/_partial-ollama-models.mdx';
import PartialGpuModeTip from '@site/docs/_partial-gpu-mode-tip.mdx';
To manage your own OpenRAG services, deploy OpenRAG with Docker or Podman.
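Before you deploy, it can help to confirm that your container engine and its Compose support are available. A quick check, using whichever engine you plan to use:

```bash title="Docker"
# Confirm the Docker engine and the Compose plugin are installed.
docker --version
docker compose version
```

```bash title="Podman"
# Confirm Podman and its Compose provider are installed.
podman --version
podman compose version
```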
@@ -116,7 +117,17 @@ The following variables are required or recommended:
3. Deploy the OpenRAG containers locally using the appropriate Docker Compose configuration for your environment:
* **GPU-accelerated deployment**: If your host machine has an NVIDIA GPU with CUDA support and compatible NVIDIA drivers, use the base `docker-compose.yml` file with the `docker-compose.gpu.yml` override.
* **CPU-only deployment** (default, recommended): If your host machine doesn't have NVIDIA GPU support, use the base `docker-compose.yml` file:
```bash title="Docker"
docker compose up -d
```
```bash title="Podman"
podman compose up -d
```
* **GPU-accelerated deployment**: If your host machine has an NVIDIA GPU with CUDA support and compatible NVIDIA drivers, use the base `docker-compose.yml` file with the `docker-compose.gpu.yml` override:
```bash title="Docker"
docker compose -f docker-compose.yml -f docker-compose.gpu.yml up -d
@@ -126,15 +137,9 @@ The following variables are required or recommended:
podman compose -f docker-compose.yml -f docker-compose.gpu.yml up -d
```
* **CPU-only deployment** (default): If your host machine doesn't have NVIDIA GPU support, use the base `docker-compose.yml` file.
```bash title="Docker"
docker compose up -d
```
```bash title="Podman"
podman compose up -d
```
:::tip
<PartialGpuModeTip />
:::
4. Wait for the OpenRAG containers to start, and then confirm that all containers are running:
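For example, assuming you run the command from the same directory as the Compose files, you can list the services and their current state:

```bash title="Docker"
# All OpenRAG services should report a running (or healthy) state.
docker compose ps
```

```bash title="Podman"
podman compose ps
```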


@@ -3,6 +3,8 @@ title: Use the TUI
slug: /tui
---
import PartialGpuModeTip from '@site/docs/_partial-gpu-mode-tip.mdx';
The OpenRAG Terminal User Interface (TUI) provides a simplified and guided experience for configuring, managing, and monitoring your OpenRAG deployment directly from the terminal.
![OpenRAG TUI Interface](@site/static/img/openrag_tui_dec_2025.png)
@@ -36,6 +38,10 @@ In the TUI, click **Status**, and then click **Switch to GPU Mode** or **Switch
This change requires restarting all OpenRAG services because each mode has its own `docker-compose` file.
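If you prefer to restart outside the TUI, the switch amounts to stopping the running stack and starting it with the other Compose configuration. A hedged sketch for switching from CPU-only to GPU mode, run from the directory that contains the Compose files:

```bash title="Docker"
# Stop the currently running CPU-only stack.
docker compose down

# Start the stack again with the GPU override applied.
docker compose -f docker-compose.yml -f docker-compose.gpu.yml up -d
```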
:::tip
<PartialGpuModeTip />
:::
## Exit the OpenRAG TUI
To exit the OpenRAG TUI, press <kbd>q</kbd> on the TUI main page.