Merge pull request #294 from langflow-ai/docs-review
docs: docs peer review
This commit is contained in: commit d8bea3bbb2
10 changed files with 156 additions and 373 deletions

@@ -47,7 +47,7 @@ To launch OpenRAG with the TUI, do the following:

The TUI opens and guides you through OpenRAG setup.

For the full TUI guide, see [TUI](https://docs.openr.ag/get-started/tui).
For the full TUI installation guide, see [TUI](https://docs.openr.ag/install).

## Docker installation

@@ -7,6 +7,8 @@ The first time you start OpenRAG, whether using the TUI or a `.env` file, you mu

Values from onboarding can be changed later on the OpenRAG **Settings** page.

Choose one LLM provider and complete only those steps:

<Tabs groupId="Provider">
<TabItem value="OpenAI" label="OpenAI" default>

1. Enable **Get API key from environment variable** to automatically enter your key from the TUI-generated `.env` file.

@@ -3,25 +3,24 @@ title: Install OpenRAG containers

slug: /get-started/docker
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialOnboarding from '@site/docs/_partial-onboarding.mdx';

There are two different Docker Compose files.
They deploy the same applications and containers locally, but to different environments.
OpenRAG has two Docker Compose files. Both files deploy the same applications and containers locally, but they are for different environments.

- [`docker-compose.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose.yml) is an OpenRAG deployment with GPU support for accelerated AI processing.
- [`docker-compose.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose.yml) is an OpenRAG deployment with GPU support for accelerated AI processing. This Docker Compose file requires an NVIDIA GPU with [CUDA](https://docs.nvidia.com/cuda/) support.

- [`docker-compose-cpu.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose-cpu.yml) is a CPU-only version of OpenRAG for systems without GPU support. Use this Docker Compose file for environments where GPU drivers aren't available.

Both Docker deployments depend on `docling serve` to be running on port `5001` on the host machine. This enables [Mac MLX](https://opensource.apple.com/projects/mlx/) support for document processing. Installing OpenRAG with the TUI starts `docling serve` automatically, but for a Docker deployment you must manually start the `docling serve` process.
- [`docker-compose-cpu.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose-cpu.yml) is a CPU-only version of OpenRAG for systems without NVIDIA GPU support. Use this Docker Compose file for environments where GPU drivers aren't available.

## Prerequisites

- [Python Version 3.10 to 3.13](https://www.python.org/downloads/release/python-3100/)
- [uv](https://docs.astral.sh/uv/getting-started/installation/)
- [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/) installed
- [Docker Compose](https://docs.docker.com/compose/install/) installed. If you're using Podman, use [podman-compose](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or alias Docker Compose commands to Podman commands.
- Install [Python Version 3.10 to 3.13](https://www.python.org/downloads/release/python-3100/)
- Install [uv](https://docs.astral.sh/uv/getting-started/installation/)
- Install [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/)
- Install [Docker Compose](https://docs.docker.com/compose/install/). If using Podman, use [podman-compose](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or alias Docker Compose commands to Podman commands.
- Create an [OpenAI API key](https://platform.openai.com/api-keys). This key is **required** to start OpenRAG, but you can choose a different model provider during [Application Onboarding](#application-onboarding).
- Optional: GPU support requires an NVIDIA GPU with CUDA support and compatible NVIDIA drivers installed on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
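
The Podman alias mentioned in the prerequisites can be as simple as the following sketch (assumes `podman` is installed; `docker compose` subcommands then also require a compose provider such as `podman-compose`):

```shell
# Add to your shell profile (for example, ~/.bashrc) so that
# `docker ...` invocations run Podman instead.
alias docker='podman'

# Print the alias definition to confirm it is set.
alias docker
```

In interactive shells this is enough for commands such as `docker compose up -d`; scripts that hard-code the Docker binary may still need `podman` invoked explicitly.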

## Install OpenRAG with Docker Compose

@@ -49,8 +48,7 @@ To install OpenRAG with Docker Compose, do the following:
touch .env
```

4. Set environment variables. The Docker Compose files will be populated with values from your `.env`.
The following values are **required** to be set:
4. The Docker Compose files are populated with the values from your `.env`. The following values must be set:

```bash
OPENSEARCH_PASSWORD=your_secure_password

@@ -63,7 +61,8 @@ The following values are **required** to be set:

For more information on configuring OpenRAG with environment variables, see [Environment variables](/reference/configuration).

5. Start `docling serve` on the host machine.
Both Docker deployments depend on `docling serve` to be running on port `5001` on the host machine. This enables [Mac MLX](https://opensource.apple.com/projects/mlx/) support for document processing.
OpenRAG Docker installations require that `docling serve` is running on port `5001` on the host machine.
This enables [Mac MLX](https://opensource.apple.com/projects/mlx/) support for document processing.

```bash
uv run python scripts/docling_ctl.py start --port 5001

@@ -74,7 +73,7 @@ The following values are **required** to be set:
uv run python scripts/docling_ctl.py status
```

Successful result:
Make sure the response shows that `docling serve` is running, for example:

```bash
Status: running
Endpoint: http://127.0.0.1:5001

@@ -84,16 +83,21 @@ The following values are **required** to be set:

7. Deploy OpenRAG locally with Docker Compose based on your deployment type.

For GPU-enabled systems, run the following commands:
<Tabs groupId="Compose file">
<TabItem value="docker-compose.yml" label="docker-compose.yml" default>

```bash
docker compose build
docker compose up -d
```

For environments without GPU support, run:
```
</TabItem>
<TabItem value="docker-compose-cpu.yml" label="docker-compose-cpu.yml">

```bash
docker compose -f docker-compose-cpu.yml up -d
```

</TabItem>
</Tabs>

The OpenRAG Docker Compose file starts five containers:
| Container Name | Default Address | Purpose |

@@ -110,7 +114,7 @@ The following values are **required** to be set:
docker compose ps
```

You can now access the application at:
You can now access OpenRAG at the following endpoints:

- **Frontend**: http://localhost:3000
- **Backend API**: http://localhost:8000
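
A quick way to confirm both endpoints respond is to probe the default ports (a sketch; adjust the ports if you changed them in your `.env`):

```shell
# Report whether each default OpenRAG endpoint answers HTTP requests.
for url in http://localhost:3000 http://localhost:8000; do
  if curl -fsS -o /dev/null --max-time 5 "$url"; then
    echo "$url is reachable"
  else
    echo "$url is not reachable"
  fi
done
```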

@@ -129,7 +133,7 @@ uv run python scripts/docling_ctl.py stop

## Container management commands

Manage your OpenRAG containers with the following commands.
These commands are also available in the TUI's [Status menu](/get-started/tui#status).
These commands are also available in the TUI's [Status menu](/install#status).

### Upgrade containers

@@ -9,16 +9,24 @@ import PartialOnboarding from '@site/docs/_partial-onboarding.mdx';

[Install the OpenRAG Python wheel](#install-python-wheel), and then run the [OpenRAG Terminal User Interface (TUI)](#setup) to start your OpenRAG deployment with a guided setup process.

If you prefer running Docker commands and manually editing `.env` files, see [Deploy with Docker](/get-started/docker).
The OpenRAG Terminal User Interface (TUI) allows you to set up, configure, and monitor your OpenRAG deployment directly from the terminal, on any operating system.

![The OpenRAG TUI]

Instead of starting OpenRAG using Docker commands and manually editing values in the `.env` file, the TUI walks you through the setup. It prompts for variables where required, creates a `.env` file for you, and then starts OpenRAG.

Once OpenRAG is running, use the TUI to monitor your application, control your containers, and retrieve logs.

If you prefer running Docker commands and manually editing `.env` files, see [Install with Docker](/get-started/docker).

## Prerequisites

- [Python Version 3.10 to 3.13](https://www.python.org/downloads/release/python-3100/)
- [uv](https://docs.astral.sh/uv/getting-started/installation/)
- [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/) installed
- [Docker Compose](https://docs.docker.com/compose/install/) installed. If using Podman, use [podman-compose](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or alias Docker Compose commands to Podman commands.
- Install [Python Version 3.10 to 3.13](https://www.python.org/downloads/release/python-3100/)
- Install [uv](https://docs.astral.sh/uv/getting-started/installation/)
- Install [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/)
- Install [Docker Compose](https://docs.docker.com/compose/install/). If using Podman, use [podman-compose](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or alias Docker Compose commands to Podman commands.
- Create an [OpenAI API key](https://platform.openai.com/api-keys). This key is **required** to start OpenRAG, but you can choose a different model provider during [Application Onboarding](#application-onboarding).
- Optional: GPU support requires an NVIDIA GPU with [CUDA](https://docs.nvidia.com/cuda/) support and compatible NVIDIA drivers installed on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
- Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.

## Install the OpenRAG Python wheel {#install-python-wheel}

@@ -57,7 +65,7 @@ The OpenRAG wheel installs the Terminal User Interface (TUI) for configuring and
uv run openrag
```

5. Continue with [Setup OpenRAG with the TUI](#setup).
5. Continue with [Set up OpenRAG with the TUI](#setup).

## Set up OpenRAG with the TUI {#setup}

@@ -65,19 +73,19 @@ The TUI creates a `.env` file in your OpenRAG directory root and starts OpenRAG.

If the TUI detects a `.env` file in the OpenRAG root directory, it sources any variables from the `.env` file.
If the TUI detects OAuth credentials, it enforces the **Advanced Setup** path.

**Basic Setup** generates all of the required values for OpenRAG except the OpenAI API key.
**Basic Setup** does not set up OAuth connections for ingestion from cloud providers.
For OAuth setup, use **Advanced Setup**.

**Basic Setup** and **Advanced Setup** enforce the same authentication settings for the Langflow server, but manage document access differently. For more information, see [Authentication and document access](/knowledge#auth).

<Tabs groupId="Setup method">
<TabItem value="Basic setup" label="Basic setup" default>

**Basic Setup** generates all of the required values for OpenRAG except the OpenAI API key.
**Basic Setup** does not set up OAuth connections for ingestion from cloud providers.
For OAuth setup, use **Advanced Setup**.
For information about the difference between basic (no auth) and OAuth in OpenRAG, see [Authentication and document access](/knowledge#auth).

1. To install OpenRAG with **Basic Setup**, click **Basic Setup** or press <kbd>1</kbd>.
2. Click **Generate Passwords** to generate passwords for OpenSearch and Langflow.
3. Paste your OpenAI API key in the OpenAI API key field.
4. Click **Save Configuration**.
Your passwords are saved in the `.env` file used to start OpenRAG.
5. To start OpenRAG, click **Start Container Services**.
Startup pulls container images and runs them, so it can take some time.
When startup is complete, the TUI displays the following:

@@ -126,4 +134,60 @@ For OAuth setup, use **Advanced Setup**.

</TabItem>
</Tabs>

<PartialOnboarding />
<PartialOnboarding />

## Manage OpenRAG containers with the TUI

After installation, the TUI can deploy, manage, and upgrade your OpenRAG containers.

### Start container services

Click **Start Container Services** to start the OpenRAG containers.
The TUI automatically detects your container runtime, and then checks whether your machine has compatible GPU support by looking for `CUDA`, `NVIDIA_SMI`, and Docker/Podman runtime support. This check determines which Docker Compose file OpenRAG uses.
The TUI then pulls the images and deploys the containers with the following command.

```bash
docker compose up -d
```

If images are missing, the TUI runs `docker compose pull`, then runs `docker compose up -d`.
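
You can approximate the TUI's GPU check manually before picking a Compose file (a sketch of the idea, not the TUI's exact detection logic):

```shell
# Treat a working nvidia-smi as evidence of CUDA-capable GPU support.
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
  echo "GPU detected: use docker-compose.yml"
else
  echo "No GPU detected: use docker-compose-cpu.yml"
fi
```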

### Start native services

A "native" service in OpenRAG refers to a service run natively on your machine, and not within a container.
The `docling serve` process is a native service in OpenRAG, because it's a document processing service that runs on your local machine and is controlled separately from the containers.

To start or stop `docling serve` or any other native services, in the TUI main menu, click **Start Native Services** or **Stop Native Services**.

To view the status, port, or PID of a native service, in the TUI main menu, click [Status](#status).

### Status

The **Status** menu displays information about your container deployment.
Here you can check container health, find your service ports, view logs, and upgrade your containers.

To view streaming logs, select the container you want to view, and press <kbd>l</kbd>.
To copy your logs, click **Copy to Clipboard**.

To **upgrade** your containers, click **Upgrade**.
**Upgrade** runs `docker compose pull` and then `docker compose up -d --force-recreate`.
The first command pulls the latest OpenRAG images.
The second command recreates the containers with your data persisted.

To **reset** your containers, click **Reset**.
Reset gives you a completely fresh start.
Reset deletes all of your data, including OpenSearch data, uploaded documents, and authentication settings.
**Reset** runs two commands.
It first stops and removes all containers, volumes, and local images.

```bash
docker compose down --volumes --remove-orphans --rmi local
```

When the first command is complete, OpenRAG removes any additional Docker objects with `prune`.

```bash
docker system prune -f
```

## Diagnostics

The **Diagnostics** menu provides health monitoring for your container runtimes and monitoring of your OpenSearch security.

@@ -11,33 +11,30 @@ Get started with OpenRAG by loading your knowledge, swapping out your language m

## Prerequisites

- [Install and start OpenRAG](/install)
- Install and start OpenRAG with the [TUI](/install) or [Docker](/get-started/docker)

## Find your way around
## Load and chat with your own documents

1. In OpenRAG, click <Icon name="MessageSquare" aria-hidden="true"/> **Chat**.
The chat is powered by the OpenRAG OpenSearch Agent.
For more information, see [Langflow Agents](/agents).
2. Ask `What documents are available to you?`
The agent responds with a message summarizing the documents that OpenRAG loads by default, which are PDFs about evaluating data quality when using LLMs in health care.
The agent responds with a message summarizing the documents that OpenRAG loads by default.
Knowledge is stored in OpenSearch.
For more information, see [Knowledge](/knowledge).
3. To confirm the agent is correct, click <Icon name="Library" aria-hidden="true"/> **Knowledge**.
3. To confirm the agent is correct about the default knowledge, click <Icon name="Library" aria-hidden="true"/> **Knowledge**.
The **Knowledge** page lists the documents OpenRAG has ingested into the OpenSearch vector database.
Click on a document to display the chunks created by splitting it for storage in the vector database.

## Add your own knowledge

1. To add documents to your knowledge base, click <Icon name="Plus" aria-hidden="true"/> **Add Knowledge**.
* Select **Add File** to add a single file from your local machine (mapped with the Docker volume mount).
* Select **Process Folder** to process an entire folder of documents from your local machine (mapped with the Docker volume mount).
4. To add documents to your knowledge base, click <Icon name="Plus" aria-hidden="true"/> **Add Knowledge**.
* Select **Add File** to add a single file from your local machine.
* Select **Process Folder** to process an entire folder of documents from your local machine.
* Select your cloud storage provider to add knowledge from an OAuth-connected storage provider. For more information, see [OAuth ingestion](/knowledge#oauth-ingestion).
2. Return to the Chat window and ask a question about your loaded data.
5. Return to the Chat window and ask a question about your loaded data.
For example, with a manual about a PC tablet loaded, ask `How do I connect this device to WiFi?`
The agent responds with a message indicating it now has your knowledge as context for answering questions.
3. Click the <Icon name="Gear" aria-hidden="true"/> **Function Call: search_documents (tool_call)** that is printed in the Playground.
These events log the agent's request to the tool and the tool's response, so you have direct visibility into your agent's functionality.
If you aren't getting the results you need, you can further tune the knowledge ingestion and agent behavior in the next section.
6. Click <Icon name="Gear" aria-hidden="true"/> **Function Call: search_documents (tool_call)**.
This log describes how the agent uses tools.
This is helpful for troubleshooting when the agent isn't responding as expected.

## Swap out the language model to modify agent behavior {#change-components}

@@ -48,58 +45,40 @@ In this example, you'll try a different LLM to demonstrate how the Agent's respo

1. To edit the Agent's behavior, click **Edit in Langflow**.
You can access the **Language Model** and **Agent Instructions** fields more quickly on this page, but for illustration purposes, navigate to the Langflow visual builder.
2. OpenRAG warns you that you're entering Langflow. Click **Proceed**.

3. The OpenRAG OpenSearch Agent flow appears.
The OpenRAG OpenSearch Agent flow appears in a new browser window.
![The OpenSearch Agent flow]

4. In the **Language Model** component, under **Model**, select a different OpenAI model.
5. Save your flow with <kbd>Command+S</kbd>.
6. In OpenRAG, start a new conversation by clicking the <Icon name="Plus" aria-hidden="true"/> in the **Conversations** tab.
7. Ask the same question as before to demonstrate how a different language model changes the results.
3. Find the **Language Model** component, and then change the **Model Name** field to a different OpenAI model.
4. Save your flow with <kbd>Command+S</kbd> (Mac) or <kbd>Ctrl+S</kbd> (Windows).
5. Return to the OpenRAG browser window, and start a new conversation by clicking <Icon name="Plus" aria-hidden="true"/> in the **Conversations** tab.
6. Ask the same question you asked before to see how the response differs between models.

## Integrate OpenRAG into your application

To integrate OpenRAG into your application, use the [Langflow API](https://docs.langflow.org/api-reference-api-examples).
Make requests with Python, TypeScript, or any HTTP client to run one of OpenRAG's default flows and get a response, and then modify the flow further to improve results. Langflow provides code snippets to help you get started.
Langflow in OpenRAG includes pre-built flows that you can integrate into your applications using the [Langflow API](https://docs.langflow.org/api-reference-api-examples).

1. Create a [Langflow API key](https://docs.langflow.org/api-keys-and-authentication).
<details>
<summary>Create a Langflow API key</summary>

A Langflow API key is a user-specific token you can use with Langflow.
It is **only** used for sending requests to the Langflow server.
It does **not** grant access to OpenRAG.

To create a Langflow API key, do the following:

1. In Langflow, click your user icon, and then select **Settings**.
2. Click **Langflow API Keys**, and then click <Icon name="Plus" aria-hidden="true"/> **Add New**.
3. Name your key, and then click **Create API Key**.
4. Copy the API key and store it securely.
5. To use your Langflow API key in a request, set a `LANGFLOW_API_KEY` environment variable in your terminal, and then include an `x-api-key` header or query parameter with your request.
For example:

```bash
# Set variable
export LANGFLOW_API_KEY="sk..."

# Send request
curl --request POST \
  --url "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID" \
  --header "Content-Type: application/json" \
  --header "x-api-key: $LANGFLOW_API_KEY" \
  --data '{
    "output_type": "chat",
    "input_type": "chat",
    "input_value": "Hello"
  }'
```

</details>
2. To navigate to the OpenRAG OpenSearch Agent flow, click <Icon name="Settings2" aria-hidden="true"/> **Settings**, and then click **Edit in Langflow** in the OpenRAG OpenSearch Agent flow.
3. Click **Share**, and then click **API access**.
The Langflow API accepts Python, TypeScript, or curl requests to run flows and get responses. You can use these flows as-is or modify them to better suit your needs.

The default code in the API access pane constructs a request with the Langflow server `url`, `headers`, and a `payload` of request data. The code snippets automatically include the `LANGFLOW_SERVER_ADDRESS` and `FLOW_ID` values for the flow. Replace these values if you're using the code for a different server or flow. The default Langflow server address is http://localhost:7860.
In this section, you'll run the OpenRAG OpenSearch Agent flow and get a response using the API.

1. To navigate to the OpenRAG OpenSearch Agent flow in Langflow, click <Icon name="Settings2" aria-hidden="true"/> **Settings**, and then click **Edit in Langflow** in the OpenRAG OpenSearch Agent flow.
2. Create a [Langflow API key](https://docs.langflow.org/api-keys-and-authentication).

A Langflow API key is a user-specific token you can use with Langflow.
It is **only** used for sending requests to the Langflow server.
It does **not** grant access to OpenRAG.

To create a Langflow API key, do the following:

1. Open Langflow, click your user icon, and then select **Settings**.
2. Click **Langflow API Keys**, and then click <Icon name="Plus" aria-hidden="true"/> **Add New**.
3. Name your key, and then click **Create API Key**.
4. Copy the API key and store it securely.

3. Langflow includes code snippets for the request to the Langflow API.
To retrieve the code snippet, click **Share**, and then click **API access**.

The default code in the API access pane constructs a request with the Langflow server `url`, `headers`, and a `payload` of request data. The code snippets automatically include the `LANGFLOW_SERVER_ADDRESS` and `FLOW_ID` values for the flow.

<Tabs>
<TabItem value="python" label="Python">

@@ -171,12 +150,12 @@ Make requests with Python, TypeScript, or any HTTP client to run one of OpenRAG'
curl --request POST \
  --url 'http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID?stream=false' \
  --header 'Content-Type: application/json' \
  --header "x-api-key: LANGFLOW_API_KEY" \
  --header "x-api-key: LANGFLOW_API_KEY" \
  --data '{
  "output_type": "chat",
  "input_type": "chat",
  "input_value": "hello world!",
  }'
  "output_type": "chat",
  "input_type": "chat",
  "input_value": "hello world!"
  }'
```

</TabItem>

@@ -185,176 +164,6 @@ Make requests with Python, TypeScript, or any HTTP client to run one of OpenRAG'

4. Copy the snippet, paste it in a script file, and then run the script to send the request. If you are using the curl snippet, you can run the command directly in your terminal.

If the request is successful, the response includes many details about the flow run, including the session ID, inputs, outputs, components, durations, and more.
The following is an example of a response from running the **Simple Agent** template flow:

<details>
<summary>Result</summary>

```json
{
  "session_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
  "outputs": [
    {
      "inputs": {
        "input_value": "hello world!"
      },
      "outputs": [
        {
          "results": {
            "message": {
              "text_key": "text",
              "data": {
                "timestamp": "2025-06-16 19:58:23 UTC",
                "sender": "Machine",
                "sender_name": "AI",
                "session_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
                "text": "Hello world! 🌍 How can I assist you today?",
                "files": [],
                "error": false,
                "edit": false,
                "properties": {
                  "text_color": "",
                  "background_color": "",
                  "edited": false,
                  "source": {
                    "id": "Agent-ZOknz",
                    "display_name": "Agent",
                    "source": "gpt-4o-mini"
                  },
                  "icon": "bot",
                  "allow_markdown": false,
                  "positive_feedback": null,
                  "state": "complete",
                  "targets": []
                },
                "category": "message",
                "content_blocks": [
                  {
                    "title": "Agent Steps",
                    "contents": [
                      {
                        "type": "text",
                        "duration": 2,
                        "header": {
                          "title": "Input",
                          "icon": "MessageSquare"
                        },
                        "text": "**Input**: hello world!"
                      },
                      {
                        "type": "text",
                        "duration": 226,
                        "header": {
                          "title": "Output",
                          "icon": "MessageSquare"
                        },
                        "text": "Hello world! 🌍 How can I assist you today?"
                      }
                    ],
                    "allow_markdown": true,
                    "media_url": null
                  }
                ],
                "id": "f3d85d9a-261c-4325-b004-95a1bf5de7ca",
                "flow_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
                "duration": null
              },
              "default_value": "",
              "text": "Hello world! 🌍 How can I assist you today?",
              "sender": "Machine",
              "sender_name": "AI",
              "files": [],
              "session_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
              "timestamp": "2025-06-16T19:58:23+00:00",
              "flow_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
              "error": false,
              "edit": false,
              "properties": {
                "text_color": "",
                "background_color": "",
                "edited": false,
                "source": {
                  "id": "Agent-ZOknz",
                  "display_name": "Agent",
                  "source": "gpt-4o-mini"
                },
                "icon": "bot",
                "allow_markdown": false,
                "positive_feedback": null,
                "state": "complete",
                "targets": []
              },
              "category": "message",
              "content_blocks": [
                {
                  "title": "Agent Steps",
                  "contents": [
                    {
                      "type": "text",
                      "duration": 2,
                      "header": {
                        "title": "Input",
                        "icon": "MessageSquare"
                      },
                      "text": "**Input**: hello world!"
                    },
                    {
                      "type": "text",
                      "duration": 226,
                      "header": {
                        "title": "Output",
                        "icon": "MessageSquare"
                      },
                      "text": "Hello world! 🌍 How can I assist you today?"
                    }
                  ],
                  "allow_markdown": true,
                  "media_url": null
                }
              ],
              "duration": null
            }
          },
          "artifacts": {
            "message": "Hello world! 🌍 How can I assist you today?",
            "sender": "Machine",
            "sender_name": "AI",
            "files": [],
            "type": "object"
          },
          "outputs": {
            "message": {
              "message": "Hello world! 🌍 How can I assist you today?",
              "type": "text"
            }
          },
          "logs": {
            "message": []
          },
          "messages": [
            {
              "message": "Hello world! 🌍 How can I assist you today?",
              "sender": "Machine",
              "sender_name": "AI",
              "session_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
              "stream_url": null,
              "component_id": "ChatOutput-aF5lw",
              "files": [],
              "type": "text"
            }
          ],
          "timedelta": null,
          "duration": null,
          "component_display_name": "Chat Output",
          "component_id": "ChatOutput-aF5lw",
          "used_frozen_result": false
        }
      ]
    }
  ]
}
```
</details>

To further explore the API, see:
@ -1,90 +0,0 @@
|
|||
---
|
||||
title: Terminal User Interface (TUI) commands
|
||||
slug: /get-started/tui
|
||||
---
|
||||
|
||||
The OpenRAG Terminal User Interface (TUI) allows you to set up, configure, and monitor your OpenRAG deployment directly from the terminal, on any operating system.
|
||||
|
||||

|
||||
|
||||
Instead of starting OpenRAG using Docker commands and manually editing values in the `.env` file, the TUI walks you through the setup. It prompts for variables where required, creates a `.env` file for you, and then starts OpenRAG.
|
||||
|
||||
Once OpenRAG is running, use the TUI to monitor your application, control your containers, and retrieve logs.
|
||||
|
||||
## Start the TUI

To start the TUI, run the following commands from the directory where you installed OpenRAG:

```bash
uv sync
uv run openrag
```

The TUI Welcome Screen offers basic and advanced setup options.
For more information on setup values during installation, see [Install OpenRAG](/install).

## Navigation

The TUI accepts mouse input or keyboard commands:

- <kbd>Arrow keys</kbd>: Move between options.
- <kbd>Tab</kbd>/<kbd>Shift+Tab</kbd>: Switch between fields and buttons.
- <kbd>Enter</kbd>: Select or confirm.
- <kbd>Escape</kbd>: Go back.
- <kbd>Q</kbd>: Quit the TUI.
- <kbd>Number keys (1-4)</kbd>: Quickly access the main screens.

## Container management

The TUI can deploy, manage, and upgrade your OpenRAG containers.

### Start container services

Click **Start Container Services** to start the OpenRAG containers.
The TUI automatically detects your container runtime, and then determines whether your machine has compatible GPU support by checking for CUDA, `nvidia-smi`, and Docker or Podman runtime support. This check determines which Docker Compose file OpenRAG uses.
The TUI then pulls the images and deploys the containers with the following command:

```bash
docker compose up -d
```

If images are missing, the TUI runs `docker compose pull` first, and then runs `docker compose up -d`.

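The runtime check described above can be sketched in shell. This is a hypothetical illustration, not the TUI's actual code, and the CPU compose file name is an assumption:

```shell
# Hypothetical sketch of the GPU check described above (not the TUI's code).
# Prefer the GPU-enabled compose file only when nvidia-smi reports a usable GPU.
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
  COMPOSE_FILE="docker-compose.yml"      # GPU-enabled deployment
else
  COMPOSE_FILE="docker-compose-cpu.yml"  # CPU-only file (name assumed)
fi
echo "Selected compose file: $COMPOSE_FILE"
```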
### Start native services

A "native" service in OpenRAG is a service that runs directly on your machine rather than in a container.
The `docling serve` process is a native service because it's a document processing service that runs on your local machine and is controlled separately from the containers.

To start or stop `docling serve` or any other native service, click **Start Native Services** or **Stop Native Services** in the TUI main menu.

To view the status, port, or PID of a native service, click [Status](#status) in the TUI main menu.

### Status

The **Status** menu displays information about your container deployment.
Here you can check container health, find your service ports, view logs, and upgrade your containers.

To view streaming logs, select the container you want to inspect, and then press <kbd>l</kbd>.
To copy your logs, click **Copy to Clipboard**.

To upgrade your containers, click **Upgrade**.
**Upgrade** runs `docker compose pull` and then `docker compose up -d --force-recreate`.
The first command pulls the latest OpenRAG images.
The second command recreates the containers while persisting your data.

To reset your containers, click **Reset**.
**Reset** gives you a completely fresh start: it deletes all of your data, including OpenSearch data, uploaded documents, and authentication.
**Reset** runs two commands.
First, it stops and removes all containers, volumes, and local images:

```bash
docker compose down --volumes --remove-orphans --rmi local
```

When the first command is complete, OpenRAG removes any remaining Docker objects with `prune`:

```bash
docker system prune -f
```

## Diagnostics

The **Diagnostics** menu provides health monitoring for your container runtimes and your OpenSearch security configuration.

@@ -3,18 +3,17 @@ title: What is OpenRAG?
slug: /
---

OpenRAG is an open-source package for building agentic RAG systems.
It supports integration with a wide range of orchestration tools, vector databases, and LLM providers.
OpenRAG is an open-source package for building agentic RAG systems that integrates with a wide range of orchestration tools, vector databases, and LLM providers.

OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:

* [Langflow](https://docs.langflow.org) - Langflow is a powerful tool to build and deploy AI agents and MCP servers. It supports all major LLMs, vector databases and a growing library of AI tools.
* [Langflow](https://docs.langflow.org): Langflow is a popular tool for building and deploying AI agents and MCP servers. It supports all major LLMs, vector databases, and a growing library of AI tools.

* [OpenSearch](https://docs.opensearch.org/latest/) - OpenSearch is a community-driven, Apache 2.0-licensed open source search and analytics suite that makes it easy to ingest, search, visualize, and analyze data.
* [OpenSearch](https://docs.opensearch.org/latest/): OpenSearch is a community-driven, Apache 2.0-licensed open source search and analytics suite that makes it easy to ingest, search, visualize, and analyze data.

* [Docling](https://docling-project.github.io/docling/) - Docling simplifies document processing, parsing diverse formats — including advanced PDF understanding — and providing seamless integrations with the gen AI ecosystem.
* [Docling](https://docling-project.github.io/docling/): Docling simplifies document processing, parsing diverse formats — including advanced PDF understanding — and providing seamless integrations with the gen AI ecosystem.

OpenRAG builds on Langflow's familiar interface while adding OpenSearch for vector storage and Docling for simplified document parsing, with opinionated flows that serve as ready-to-use recipes for ingestion, retrieval, and generation from popular sources like OneDrive, Google Drive, and AWS.
OpenRAG builds on Langflow's familiar interface while adding OpenSearch for vector storage and Docling for simplified document parsing, with opinionated flows that serve as ready-to-use recipes for ingestion, retrieval, and generation from popular sources like Google Drive, OneDrive, and SharePoint.

What's more, every part of the stack is swappable. Write your own custom components in Python, try different language models, and customize your flows to build an agentic RAG system.

@@ -7,9 +7,9 @@ import Icon from "@site/src/components/icon/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

OpenRAG recognizes [supported environment variables](#supported-environment-variables) from the following sources:
OpenRAG recognizes environment variables from the following sources:

* [Environment variables](#supported-environment-variables) - Values set in the `.env` file.
* [Environment variables](#configure-environment-variables) - Values set in the `.env` file.
* [Langflow runtime overrides](#langflow-runtime-overrides) - Langflow components may tweak environment variables at runtime.
* [Default or fallback values](#default-values-and-fallbacks) - Values that OpenRAG uses when it doesn't find a value from another source.

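The precedence implied by this list can be sketched with ordinary shell parameter expansion. The variable name `OPENRAG_EXAMPLE_SETTING` is made up for illustration and is not a documented OpenRAG variable:

```shell
# Illustrative only: a .env/environment value wins when set;
# otherwise the built-in default is used.
resolve_example_setting() {
  # ${VAR:-default} expands to the default only when VAR is unset or empty.
  printf '%s\n' "${OPENRAG_EXAMPLE_SETTING:-default-value}"
}

resolve_example_setting   # prints the default when the variable is unset
```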
@@ -36,11 +36,6 @@ const sidebars = {
      id: "get-started/quickstart",
      label: "Quickstart"
    },
    {
      type: "doc",
      id: "get-started/tui",
      label: "Terminal User Interface (TUI)"
    },
    {
      type: "doc",
      id: "core-components/agents",
BIN docs/static/img/opensearch-agent-flow.png (vendored): binary file not shown. Size before: 1,004 KiB; size after: 1 MiB.