Merge pull request #276 from langflow-ai/docs-updates

docs: updates to reflect app changes
This commit is contained in:
Nate McCall 2025-10-24 05:14:05 +13:00 committed by GitHub
commit d36623e434
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
7 changed files with 21 additions and 23 deletions


@@ -49,10 +49,10 @@ To launch OpenRAG with the TUI, do the following:
For the full TUI guide, see [TUI](https://docs.openr.ag/get-started/tui).
-## Docker Deployment
+## Docker installation
If you prefer to use Docker to run OpenRAG, the repository includes two Docker Compose `.yml` files.
-They deploy the same applications and containers, but to different environments.
+They deploy the same applications and containers locally, but to different environments.
- [`docker-compose.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose.yml) is an OpenRAG deployment for environments with GPU support. GPU support requires an NVIDIA GPU with CUDA support and compatible NVIDIA drivers installed on the OpenRAG host machine.
@@ -60,7 +60,7 @@ They deploy the same applications and containers, but to different environments.
Both Docker deployments depend on `docling serve` to be running on port `5001` on the host machine. This enables [Mac MLX](https://opensource.apple.com/projects/mlx/) support for document processing. Installing OpenRAG with the TUI starts `docling serve` automatically, but for a Docker deployment you must manually start the `docling serve` process.
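Before bringing up the containers, it can help to confirm that something is actually answering on port `5001`. The sketch below is a pre-flight check only; the `/health` path is an assumption, so adjust it to whatever endpoint your `docling serve` version exposes.

```shell
# Pre-flight sketch: confirm something is answering on port 5001 before
# starting the Docker deployment. The /health path is an assumption;
# adjust it to the endpoint your docling serve version actually exposes.
if curl -fsS "http://localhost:5001/health" >/dev/null 2>&1; then
  echo "docling serve is reachable on port 5001"
else
  echo "docling serve is not reachable on port 5001" >&2
fi
```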
-To deploy OpenRAG with Docker:
+To install OpenRAG with Docker:
1. Clone the OpenRAG repository.
```bash
@@ -121,7 +121,7 @@ To deploy OpenRAG with Docker:
uv run python scripts/docling_ctl.py stop
```
-For more information, see [Deploy with Docker](https://docs.openr.ag/get-started/docker).
+For more information, see [Install with Docker](https://docs.openr.ag/get-started/docker).
## Troubleshooting


@@ -5,16 +5,12 @@ import TabItem from '@theme/TabItem';
The first time you start OpenRAG, whether using the TUI or a `.env` file, you must complete application onboarding.
-Most values from onboarding can be changed later in the OpenRAG **Settings** page, but there are important restrictions.
-The **language model provider** and **embeddings model provider** can only be selected at onboarding, and you must use the same provider for your language model and embedding model.
-To change your provider selection later, you must completely reinstall OpenRAG.
-The **language model** can be changed later in **Settings**, but the **embeddings model** cannot be changed later.
+Values from onboarding can be changed later in the OpenRAG **Settings** page.
<Tabs groupId="Provider">
<TabItem value="OpenAI" label="OpenAI" default>
1. Enable **Get API key from environment variable** to automatically enter your key from the TUI-generated `.env` file.
Alternatively, paste an OpenAI API key into the field.
2. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
3. To load 2 sample PDFs, enable **Sample dataset**.
This is recommended, but not required.


@@ -39,7 +39,7 @@ The files are loaded into your OpenSearch database, and appear in the Knowledge
### Ingest files through OAuth connectors {#oauth-ingestion}
-OpenRAG supports Google Drive, OneDrive, and AWS S3 as OAuth connectors for seamless document synchronization.
+OpenRAG supports Google Drive, OneDrive, and SharePoint as OAuth connectors for seamless document synchronization.
OAuth integration allows individual users to connect their personal cloud storage accounts to OpenRAG. Each user must separately authorize OpenRAG to access their own cloud storage files. When a user connects a cloud service, they are redirected to authenticate with that service provider and grant OpenRAG permission to sync documents from their personal cloud storage.


@@ -1,12 +1,12 @@
---
-title: Deploy with Docker
+title: Install with Docker
slug: /get-started/docker
---
import PartialOnboarding from '@site/docs/_partial-onboarding.mdx';
There are two different Docker Compose files.
-They deploy the same applications and containers, but to different environments.
+They deploy the same applications and containers locally, but to different environments.
- [`docker-compose.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose.yml) is an OpenRAG deployment with GPU support for accelerated AI processing.
@@ -23,7 +23,7 @@ Both Docker deployments depend on `docling serve` to be running on port `5001` o
- Create an [OpenAI API key](https://platform.openai.com/api-keys). This key is **required** to start OpenRAG, but you can choose a different model provider during [Application Onboarding](#application-onboarding).
- Optional: GPU support requires an NVIDIA GPU with CUDA support and compatible NVIDIA drivers installed on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
-## Deploy OpenRAG with Docker Compose
+## Install OpenRAG with Docker Compose
To install OpenRAG with Docker Compose, do the following:
@@ -82,7 +82,7 @@ The following values are **required** to be set:
PID: 27746
```
-7. Deploy OpenRAG with Docker Compose based on your deployment type.
+7. Deploy OpenRAG locally with Docker Compose based on your deployment type.
For GPU-enabled systems, run the following commands:
```bash


@@ -93,8 +93,9 @@ For OAuth setup, use **Advanced Setup**.
1. To install OpenRAG with **Advanced Setup**, click **Advanced Setup** or press <kbd>2</kbd>.
2. Click **Generate Passwords** to generate passwords for OpenSearch and Langflow.
3. Paste your OpenAI API key in the OpenAI API key field.
-4. Add your client and secret values for Google, Azure, or AWS OAuth.
-These values can be found in your OAuth provider.
+4. Add your client and secret values for Google or Microsoft OAuth.
+These values can be found with your OAuth provider.
For more information, see the [Google OAuth client](https://developers.google.com/identity/protocols/oauth2) or [Microsoft Graph OAuth client](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/graph-oauth) documentation.
5. The OpenRAG TUI presents redirect URIs for your OAuth app.
These are the URLs your OAuth provider will redirect back to after user sign-in.
Register these redirect values with your OAuth provider as they are presented in the TUI.
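The authoritative redirect URIs are the ones the TUI presents. Purely as an illustration of the shape such URIs take for a local deployment, a sketch like the following prints placeholder values; both the base URL and the callback paths are hypothetical, not OpenRAG's actual routes.

```shell
# Illustrative only: the real redirect URIs are shown in the OpenRAG TUI.
# Both the base URL and the callback paths below are hypothetical
# placeholders, not OpenRAG's actual routes.
BASE_URL="http://localhost:3000"
for provider in google microsoft; do
  echo "${BASE_URL}/auth/${provider}/callback"
done
```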
@@ -107,8 +108,8 @@ For OAuth setup, use **Advanced Setup**.
Command completed successfully
```
8. To open the OpenRAG application, click **Open App**, press <kbd>6</kbd>, or navigate to `http://localhost:3000`.
-You will be presented with your provider's OAuth sign-in screen, and be redirected to the redirect URI after sign-in.
-Continue with Application Onboarding.
+You are presented with your provider's OAuth sign-in screen.
+After sign-in, you are redirected to the redirect URI.
Two additional variables are available for Advanced Setup:
@@ -116,7 +117,10 @@ For OAuth setup, use **Advanced Setup**.
The `WEBHOOK_BASE_URL` controls where the endpoint for `/connectors/CONNECTOR_TYPE/webhook` will be available.
This connection enables real-time document synchronization with external services.
-For example, for Google Drive file synchronization the webhook URL is `/connectors/google_drive/webhook`.
+Supported webhook endpoints:
+- Google Drive: `/connectors/google_drive/webhook`
+- OneDrive: `/connectors/onedrive/webhook`
+- SharePoint: `/connectors/sharepoint/webhook`
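The endpoint pattern above can be sketched as a short loop that derives each full webhook URL from `WEBHOOK_BASE_URL`; the base URL below is an example value, while the `/connectors/CONNECTOR_TYPE/webhook` path pattern comes from the documentation itself.

```shell
# Derive the full webhook URL for each supported connector from
# WEBHOOK_BASE_URL. The base URL is an example value; the path pattern
# /connectors/CONNECTOR_TYPE/webhook is taken from the docs above.
WEBHOOK_BASE_URL="https://openrag.example.com"
for connector in google_drive onedrive sharepoint; do
  echo "${WEBHOOK_BASE_URL}/connectors/${connector}/webhook"
done
```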
9. Continue with [Application Onboarding](#application-onboarding).
</TabItem>


@@ -44,8 +44,6 @@ If you aren't getting the results you need, you can further tune the knowledge i
To modify the knowledge ingestion or Agent behavior, click <Icon name="Settings2" aria-hidden="true"/> **Settings**.
In this example, you'll try a different LLM to demonstrate how the Agent's response changes.
-You can only change the **Language model**, and not the **Model provider** that you started with in OpenRAG.
-If you're using Ollama, you can use any installed model.
1. To edit the Agent's behavior, click **Edit in Langflow**.
You can more quickly access the **Language Model** and **Agent Instructions** fields in this page, but for illustration purposes, navigate to the Langflow visual builder.


@@ -33,7 +33,7 @@ const sidebars = {
{
type: "doc",
id: "get-started/docker",
-        label: "Deploy with Docker"
+        label: "Install with Docker"
},
{
type: "doc",