Merge branch 'main' into onboarding-cleanup

Mike Fortman 2025-10-28 13:38:44 -05:00 committed by GitHub
commit 2f3c1eceaf
6 changed files with 162 additions and 106 deletions

README.md

@ -28,100 +28,37 @@ OpenRAG is a comprehensive Retrieval-Augmented Generation platform that enables
Use the OpenRAG Terminal User Interface (TUI) to manage your OpenRAG installation without complex command-line operations.
To quickly install and start OpenRAG, run `uvx openrag`.
To first set up a project and then install OpenRAG, do the following:
1. Create a new project with a virtual environment using `uv init`.
```bash
uv init YOUR_PROJECT_NAME
cd YOUR_PROJECT_NAME
```
Your terminal prompt doesn't change to show `(venv)`, but `uv` commands automatically use the project's virtual environment.
For more information on virtual environments, see the [uv documentation](https://docs.astral.sh/uv/pip/environments).
2. Ensure all dependencies are installed and updated in your virtual environment.
```bash
uv sync
```
3. Install and start the OpenRAG TUI.
```bash
uvx openrag
```
The TUI opens and guides you through OpenRAG setup.
To install a specific version of the OpenRAG package, add the required version to the command, such as `uvx --from openrag==0.1.25 openrag`.
For the full TUI installation guide, see [TUI](https://docs.openr.ag/install).
## Docker or Podman installation
If you prefer to use Docker or Podman to run OpenRAG, the repository includes two Docker Compose `.yml` files.
They deploy the same applications and containers locally, but target different environments.
- [`docker-compose.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose.yml) is an OpenRAG deployment for environments with GPU support. GPU support requires an NVIDIA GPU with CUDA support and compatible NVIDIA drivers installed on the OpenRAG host machine.
- [`docker-compose-cpu.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose-cpu.yml) is a CPU-only version of OpenRAG for systems without GPU support. Use this Docker compose file for environments where GPU drivers aren't available.
Both container deployments depend on `docling serve` running on port `5001` on the host machine, which enables [Mac MLX](https://opensource.apple.com/projects/mlx/) support for document processing. Installing OpenRAG with the TUI starts `docling serve` automatically, but for a Docker deployment you must start the `docling serve` process manually.
To install OpenRAG with Docker:
1. Clone the OpenRAG repository.
```bash
git clone https://github.com/langflow-ai/openrag.git
cd openrag
```
2. Install dependencies.
```bash
uv sync
```
3. Start `docling serve` on the host machine.
```bash
uv run python scripts/docling_ctl.py start --port 5001
```
4. Confirm `docling serve` is running.
```bash
uv run python scripts/docling_ctl.py status
```
Successful result:
```text
Status: running
Endpoint: http://127.0.0.1:5001
Docs: http://127.0.0.1:5001/docs
PID: 27746
```
5. Build and start all services.
For the GPU-accelerated deployment, run:
```bash
docker compose build
docker compose up -d
```
For environments without GPU support, run:
```bash
docker compose -f docker-compose-cpu.yml up -d
```
The OpenRAG Docker Compose file starts five containers:
| Container Name | Default Address | Purpose |
|---|---|---|
| OpenRAG Backend | http://localhost:8000 | FastAPI server and core functionality. |
| OpenRAG Frontend | http://localhost:3000 | React web interface for users. |
| Langflow | http://localhost:7860 | AI workflow engine and flow management. |
| OpenSearch | http://localhost:9200 | Vector database for document storage. |
| OpenSearch Dashboards | http://localhost:5601 | Database administration interface. |
6. Access the OpenRAG application at `http://localhost:3000` and continue with the [Quickstart](https://docs.openr.ag/quickstart).
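To spot-check that all five services are reachable, you can probe the default addresses from the table above. The following is a minimal sketch, not part of the OpenRAG repository; depending on your security configuration, OpenSearch may require authentication or TLS and report an HTTP error instead:
```python
# Quick reachability probe for the default OpenRAG service addresses.
# Hypothetical helper script; adjust ports if you changed the Compose defaults.
import urllib.request

SERVICES = {
    "OpenRAG Backend": "http://localhost:8000",
    "OpenRAG Frontend": "http://localhost:3000",
    "Langflow": "http://localhost:7860",
    "OpenSearch": "http://localhost:9200",
    "OpenSearch Dashboards": "http://localhost:5601",
}

for name, url in SERVICES.items():
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{name}: HTTP {resp.status}")
    except Exception as exc:  # connection refused, timeout, HTTP errors, etc.
        print(f"{name}: unreachable ({exc})")
```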
To stop `docling serve`, run:
```bash
uv run python scripts/docling_ctl.py stop
```
For more information, see [Install OpenRAG containers](https://docs.openr.ag/get-started/docker).
## Troubleshooting


@ -17,7 +17,7 @@ Instead of starting OpenRAG using Docker commands and manually editing values in
Once OpenRAG is running, use the TUI to monitor your application, control your containers, and retrieve logs.
If you prefer running Podman or Docker containers and manually editing `.env` files, see [Install OpenRAG Containers](/get-started/docker).
## Prerequisites
@ -30,12 +30,12 @@ If you prefer running Docker commands and manually editing `.env` files, see [In
## Install the OpenRAG Python wheel {#install-python-wheel}
:::important
The `.whl` file is currently available as an internal download during public preview, and will be published to PyPI in a future release.
:::
The OpenRAG wheel installs the Terminal User Interface (TUI) for configuring and running OpenRAG.
To quickly install and start OpenRAG, run `uvx openrag`.
To first set up a project and then install OpenRAG, do the following:
1. Create a new project with a virtual environment using `uv init`.
```bash
@ -46,26 +46,45 @@ The OpenRAG wheel installs the Terminal User Interface (TUI) for configuring and
Your terminal prompt doesn't change to show `(venv)`, but `uv` commands automatically use the project's virtual environment.
For more information on virtual environments, see the [uv documentation](https://docs.astral.sh/uv/pip/environments).
2. Ensure all dependencies are installed and updated in your virtual environment.
```bash
uv sync
```
3. Install and start the OpenRAG TUI.
```bash
uvx openrag
```
To install a specific version of the OpenRAG package, add the required version to the command, such as `uvx --from openrag==0.1.25 openrag`.
<details closed>
<summary>Install a local wheel without uvx</summary>
If you downloaded the OpenRAG wheel to your local machine, follow these steps:
1. Add the wheel to your project's virtual environment.
```bash
uv add PATH/TO/openrag-VERSION-py3-none-any.whl
```
Replace `PATH/TO/` and `VERSION` with the path and version of your downloaded OpenRAG `.whl` file.
For example, if your `.whl` file is in the `~/Downloads` directory:
```bash
uv add ~/Downloads/openrag-0.1.8-py3-none-any.whl
```
2. Run OpenRAG.
```bash
uv run openrag
```
</details>
4. Continue with [Set up OpenRAG with the TUI](#setup).
## Set up OpenRAG with the TUI {#setup}


@ -83,4 +83,43 @@ The **OpenRAG Backend** is the central orchestration service that coordinates al
**Third Party Services** like **Google Drive** connect to the **OpenRAG Backend** through OAuth authentication, allowing synchronization of cloud storage with the OpenSearch knowledge base.
The **OpenRAG Frontend** provides the user interface for interacting with the system.
## Performance expectations
On a local VM with 7 vCPUs and 8 GiB RAM, OpenRAG ingested approximately 5.03 GB across 1,083 files in about 42 minutes.
This equates to roughly 2.3 seconds per document (about 0.43 documents per second).
You can generally expect equal or better performance on developer laptops, and significantly faster ingestion on servers.
Throughput scales with CPU cores, memory, storage speed, and configuration choices such as embedding model, chunk size and overlap, and concurrency.
This test returned 12 errors (approximately 1.1% of files).
All errors were file-specific, and they didn't stop the pipeline.
Ingestion dataset:
* Total files: 1,083 items mounted
* Total size on disk: 5,026,474,862 bytes (approximately 5.03 GB)
Hardware specifications:
* Machine: Apple M4 Pro
* Podman VM:
  * Name: `podman-machine-default`
  * Type: `applehv`
  * vCPUs: 7
  * Memory: 8 GiB
  * Disk size: 100 GiB
Test results:
```text
2025-09-24T22:40:45.542190Z /app/src/main.py:231 Ingesting default documents when ready disable_langflow_ingest=False
2025-09-24T22:40:45.546385Z /app/src/main.py:270 Using Langflow ingestion pipeline for default documents file_count=1082
...
2025-09-24T23:19:44.866365Z /app/src/main.py:351 Langflow ingestion completed success_count=1070 error_count=12 total_files=1082
```
Elapsed time: ~42 minutes 15 seconds (2,535 seconds)
Throughput: ~0.43 documents/second (~2.3 seconds per document)
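As a sanity check, the headline figures can be recomputed from the dataset size and log excerpt above:
```python
# Recompute the reported throughput figures from the test data above.
elapsed_s = 42 * 60 + 15        # ~2,535 seconds elapsed
total_files = 1082              # files handled by the ingestion pipeline
total_bytes = 5_026_474_862     # ~5.03 GB on disk

print(f"{elapsed_s / total_files:.1f} seconds per document")          # ~2.3
print(f"{total_files / elapsed_s:.2f} documents per second")          # ~0.43
print(f"{total_bytes / elapsed_s / 1e6:.1f} MB ingested per second")  # ~2.0
```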


@ -497,6 +497,18 @@ def copy_compose_files(*, force: bool = False) -> None:
def run_tui():
    """Run the OpenRAG TUI application."""
    # Check for native Windows before launching TUI
    from .utils.platform import PlatformDetector

    platform_detector = PlatformDetector()
    if platform_detector.is_native_windows():
        print("\n" + "=" * 60)
        print("⚠️ Native Windows Not Supported")
        print("=" * 60)
        print(platform_detector.get_wsl_recommendation())
        print("=" * 60 + "\n")
        sys.exit(1)

    app = None
    try:
        # Keep bundled assets aligned with the packaged versions


@ -15,6 +15,7 @@ from rich.text import Text
from ..managers.container_manager import ContainerManager
from ..utils.clipboard import copy_text_to_clipboard
from ..utils.platform import PlatformDetector


class DiagnosticsScreen(Screen):
@ -52,6 +53,7 @@ class DiagnosticsScreen(Screen):
    def __init__(self):
        super().__init__()
        self.container_manager = ContainerManager()
        self.platform_detector = PlatformDetector()
        self._logger = logging.getLogger("openrag.diagnostics")
        self._status_timer = None
@ -199,6 +201,23 @@ class DiagnosticsScreen(Screen):
"""Get system information text."""
info_text = Text()
# Platform information
info_text.append("Platform Information\n", style="bold")
info_text.append("=" * 30 + "\n")
info_text.append(f"System: {self.platform_detector.platform_system}\n")
info_text.append(f"Machine: {self.platform_detector.platform_machine}\n")
# Windows-specific warning
if self.platform_detector.is_native_windows():
info_text.append("\n")
info_text.append("⚠️ Native Windows Detected\n", style="bold yellow")
info_text.append("-" * 30 + "\n")
info_text.append(self.platform_detector.get_wsl_recommendation())
info_text.append("\n")
info_text.append("\n")
# Container runtime information
runtime_info = self.container_manager.get_runtime_info()
info_text.append("Container Runtime Information\n", style="bold")


@ -30,6 +30,15 @@ class PlatformDetector:
        self.platform_system = platform.system()
        self.platform_machine = platform.machine()

    def is_native_windows(self) -> bool:
        """
        Check if running on native Windows (not WSL).

        Returns True if running on native Windows, False otherwise.
        WSL environments will return False since they identify as Linux.
        """
        return self.platform_system == "Windows"

    def detect_runtime(self) -> RuntimeInfo:
        """Detect available container runtime and compose capabilities."""
        # First check if we have podman installed
@ -166,6 +175,23 @@ class PlatformDetector:
        ) as e:
            return False, 0, f"Error checking Podman VM memory: {e}"

    def get_wsl_recommendation(self) -> str:
        """Get recommendation message for native Windows users to use WSL."""
        return """
Running on native Windows detected.

For the best experience, we recommend using Windows Subsystem for Linux (WSL).

To set up WSL:
1. Open PowerShell or Command Prompt as Administrator
2. Run: wsl --install
3. Restart your computer
4. Set up your Linux distribution (Ubuntu recommended)
5. Install Docker or Podman in WSL

Learn more: https://docs.microsoft.com/en-us/windows/wsl/install
"""
    def get_installation_instructions(self) -> str:
        if self.platform_system == "Darwin":
            return """
@ -200,6 +226,10 @@ Docker Desktop for Windows:
Or Podman Desktop:
https://podman-desktop.io/downloads

For better performance, consider using WSL:
Run: wsl --install
https://docs.microsoft.com/en-us/windows/wsl/install
"""
        else:
            return """