spacing and finish docker page

April M 2025-12-05 12:03:05 -08:00
parent 4538433720
commit 15f2e2270a
7 changed files with 139 additions and 123 deletions

@ -228,6 +228,7 @@ All errors were file-specific, and they didn't stop the pipeline.
* Machine: Apple M4 Pro
* Podman VM:
  * Name: podman-machine-default
  * Type: applehv
  * vCPUs: 7

@ -82,22 +82,30 @@ The following variables are required or recommended:
* **Google**: Provide your Google OAuth Client ID and Google OAuth Client Secret. You can generate these in the [Google Cloud Console](https://console.cloud.google.com/apis/credentials). For more information, see the [Google OAuth client documentation](https://developers.google.com/identity/protocols/oauth2).
* **Microsoft**: For the Microsoft OAuth Client ID and Microsoft OAuth Client Secret, provide [Azure application registration credentials for SharePoint and OneDrive](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/app-registration?view=odsp-graph-online). For more information, see the [Microsoft Graph OAuth client documentation](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/graph-oauth).

For more information and variables, see [OpenRAG environment variables](/reference/configuration).
## Start services

OpenRAG Docker installations require that `docling serve` is running on port 5001 on the host machine.

1. Start `docling serve` on port 5001 on the host machine:

```bash
uv run python scripts/docling_ctl.py start --port 5001
```

Docling cannot run inside a Docker container due to system-level dependencies, so you must manage it as a separate service on the host machine.
For more information, see [Stop, start, and inspect native services](/manage-services#start-native-services).

This port is required to deploy OpenRAG successfully; don't use a different port.
Additionally, this enables the [MLX framework](https://opensource.apple.com/projects/mlx/) for accelerated performance on Apple Silicon Mac machines.
2. Confirm `docling serve` is running:

```bash
uv run python scripts/docling_ctl.py status
```

If `docling serve` is running, the output includes the status, address, and process ID (PID):

```bash
Status: running
Endpoint: http://127.0.0.1:5001
PID: 27746
```
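Rather than re-running the status command by hand, you can wait for the service to come up in a script. The following is a minimal sketch; the `wait_for` helper is a hypothetical convenience, not part of OpenRAG, and the commented `/health` path is an assumption about the docling endpoint:

```bash
# Minimal sketch: poll a command until it succeeds or a timeout expires.
# The wait_for helper is illustrative, not an OpenRAG script.
wait_for() {
  local timeout=$1; shift
  local elapsed=0
  until "$@" >/dev/null 2>&1; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out after ${timeout}s" >&2
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "ready"
}

# Example (assumes docling serve answers HTTP on port 5001):
# wait_for 60 curl -sf http://127.0.0.1:5001/health
```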
3. Deploy the OpenRAG containers locally using the appropriate Docker Compose file for your environment.
Both files deploy the same services.

* [`docker-compose.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose.yml): If your host machine has an NVIDIA GPU with CUDA support and compatible NVIDIA drivers, you can use this file to deploy OpenRAG with accelerated processing.
  * Docker:
@ -124,7 +132,7 @@ Both files deploy the same services.
podman compose up -d
```
* [`docker-compose-cpu.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose-cpu.yml): If your host machine doesn't have NVIDIA GPU support, use this file for a CPU-only OpenRAG deployment.
  * Docker:
@ -138,17 +146,7 @@ Both files deploy the same services.
podman compose -f docker-compose-cpu.yml up -d
```
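If you script the deployment, the choice between the two files can be automated. A rough sketch, assuming that a working `nvidia-smi` is an acceptable proxy for CUDA support (this is a heuristic, not a full driver-compatibility check):

```bash
# Heuristic sketch: pick the Compose file based on NVIDIA tooling.
# A working nvidia-smi suggests, but does not guarantee, CUDA support.
pick_compose_file() {
  if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
    echo "docker-compose.yml"
  else
    echo "docker-compose-cpu.yml"
  fi
}

# Usage (with Docker; substitute "podman compose" as needed):
# docker compose -f "$(pick_compose_file)" up -d
echo "Selected: $(pick_compose_file)"
```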
4. Wait for the OpenRAG containers to start, and then confirm that all containers are running:

* Docker Compose:
@ -162,9 +160,19 @@ Both files deploy the same services.
podman compose ps
```

The OpenRAG Docker Compose files deploy the following containers:

| Container Name | Default address | Purpose |
|---|---|---|
| OpenRAG Backend | http://localhost:8000 | FastAPI server and core functionality. |
| OpenRAG Frontend | http://localhost:3000 | React web interface for user interaction. |
| Langflow | http://localhost:7860 | [AI workflow engine](/agents). |
| OpenSearch | http://localhost:9200 | Datastore for [knowledge](/knowledge). |
| OpenSearch Dashboards | http://localhost:5601 | OpenSearch database administration interface. |
When the containers are running, you can access your OpenRAG services at their addresses.
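As a quick smoke test, you can probe each address from the table above. A minimal sketch using `curl`, where any HTTP response (including an error or login page) counts as the service answering:

```bash
# Sketch: report whether each OpenRAG service answers HTTP at all.
# curl flags: -s silent, -o /dev/null discard body, --max-time bound the probe.
probe() {
  if curl -s -o /dev/null --max-time 5 "$1"; then
    echo "UP   $1"
  else
    echo "DOWN $1"
  fi
}

for url in \
  http://localhost:8000 \
  http://localhost:3000 \
  http://localhost:7860 \
  http://localhost:9200 \
  http://localhost:5601
do
  probe "$url"
done
```

Deliberately omitting `-f` means an authentication challenge (for example, from a secured OpenSearch) still reports as `UP`.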
5. Access the OpenRAG frontend at `http://localhost:3000`, and then continue with [application onboarding](#application-onboarding).
<PartialOnboarding />

@ -50,7 +50,14 @@ podman machine start
## Port conflicts
With the default [configuration](/reference/configuration), OpenRAG requires the following ports to be available on the host machine:

* 3000: OpenRAG frontend
* 5001: Docling local ingestion service
* 5601: OpenSearch Dashboards
* 7860: Langflow
* 8000: OpenRAG backend API
* 9200: OpenSearch service
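To see which of these ports already have a listener before you deploy, a small sketch using bash's `/dev/tcp` pseudo-device (a successful connect means something is already bound there):

```bash
# Sketch: report which default OpenRAG ports already have a listener.
# Relies on bash's /dev/tcp pseudo-device, so run it with bash, not sh.
check_ports() {
  local port
  for port in "$@"; do
    if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
      echo "port $port: in use"
    else
      echo "port $port: free"
    fi
  done
}

check_ports 3000 5001 5601 7860 8000 9200
```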
## OCR ingestion fails (easyocr not installed)