environment variables

April M 2025-12-05 11:23:20 -08:00
parent bfb028808f
commit 4538433720
15 changed files with 111 additions and 99 deletions


@@ -4,8 +4,8 @@ import TabItem from '@theme/TabItem';
1. Open the **OpenRAG OpenSearch Agent** flow in the Langflow visual editor: From the **Chat** window, click <Icon name="Settings2" aria-hidden="true"/> **Settings**, click **Edit in Langflow**, and then click **Proceed**.
2. Optional: If you don't want to use the Langflow API key that is generated automatically when you install OpenRAG, you can create a [Langflow API key](https://docs.langflow.org/api-keys-and-authentication).
   This key doesn't grant access to OpenRAG; it is only for authenticating with the Langflow API.
   1. In the Langflow visual editor, click your user icon in the header, and then select **Settings**.
   2. Click **Langflow API Keys**, and then click <Icon name="Plus" aria-hidden="true"/> **Add New**.
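A Langflow API key authenticates Langflow API calls through the `x-api-key` header. A minimal sketch of building such a call, assuming a local Langflow server at its default address; the flow ID and key below are placeholders, not real values:

```python
import json
import urllib.request

# Minimal sketch: authenticate a Langflow API call with a Langflow API key.
# LANGFLOW_URL, FLOW_ID, and LANGFLOW_API_KEY are placeholder values.
LANGFLOW_URL = "http://localhost:7860"
FLOW_ID = "your-flow-id"
LANGFLOW_API_KEY = "your-langflow-api-key"

def build_run_request(message: str) -> urllib.request.Request:
    """Build an authenticated POST to Langflow's flow run endpoint."""
    payload = json.dumps({"input_value": message, "output_type": "chat"}).encode()
    return urllib.request.Request(
        url=f"{LANGFLOW_URL}/api/v1/run/{FLOW_ID}",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "x-api-key": LANGFLOW_API_KEY,  # the key created in the steps above
        },
        method="POST",
    )

request = build_run_request("Hello, OpenRAG")
# Send with urllib.request.urlopen(request) once the Langflow server is running.
```
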


@@ -31,7 +31,7 @@ Anthropic doesn't provide embedding models. If you select Anthropic for your lan
3. Click **Complete**.
4. Select a provider for embeddings, provide the required information, and then select the embedding model you want to use.
   For information about another provider's credentials and settings, see the instructions for that provider.
5. Continue through the overview slides for a brief introduction to OpenRAG, or click <Icon name="ArrowRight" aria-hidden="true"/> **Skip overview**.
   The overview demonstrates some basic functionality that is covered in the [quickstart](/quickstart#chat-with-documents) and in other parts of the OpenRAG documentation.
@@ -46,7 +46,7 @@ The overview demonstrates some basic functionality that is covered in the [quick
3. Click **Complete**.
4. Select a provider for embeddings, provide the required information, and then select the embedding model you want to use.
   For information about another provider's credentials and settings, see the instructions for that provider.
5. Continue through the overview slides for a brief introduction to OpenRAG, or click <Icon name="ArrowRight" aria-hidden="true"/> **Skip overview**.
   The overview demonstrates some basic functionality that is covered in the [quickstart](/quickstart#chat-with-documents) and in other parts of the OpenRAG documentation.
@@ -97,7 +97,7 @@ The overview demonstrates some basic functionality that is covered in the [quick
3. Click **Complete**.
4. Select a provider for embeddings, provide the required information, and then select the embedding model you want to use.
   For information about another provider's credentials and settings, see the instructions for that provider.
5. Continue through the overview slides for a brief introduction to OpenRAG, or click <Icon name="ArrowRight" aria-hidden="true"/> **Skip overview**.
   The overview demonstrates some basic functionality that is covered in the [quickstart](/quickstart#chat-with-documents) and in other parts of the OpenRAG documentation.


@@ -1,12 +1,12 @@
* Gather the credentials and connection details for your preferred model providers.
  You must have access to at least one language model and one embedding model.
  If a provider offers both types, you can use the same provider for both models.
  If a provider offers only one type, you must select two providers.
  * **OpenAI**: Create an [OpenAI API key](https://platform.openai.com/api-keys).
  * **Anthropic**: Create an [Anthropic API key](https://www.anthropic.com/docs/api/reference).
    Anthropic provides language models only; you must select an additional provider for embeddings.
  * **IBM watsonx.ai**: Get your watsonx.ai API endpoint, IBM project ID, and IBM API key from your watsonx deployment.
  * **Ollama**: Deploy an [Ollama instance and models](https://docs.ollama.com/) locally, in the cloud, or on a remote server, and then get your Ollama server's base URL and the names of the models that you want to use.
* Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.
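For Ollama specifically, you can confirm the base URL and discover model names with the server's `GET /api/tags` endpoint. A hedged sketch; the base URL below assumes Ollama's default local port, so adjust it for your deployment:

```python
import json
import urllib.request

OLLAMA_BASE_URL = "http://localhost:11434"  # assumption: Ollama's default local address

def parse_model_names(raw: bytes) -> list[str]:
    """Extract model names from an Ollama GET /api/tags response body."""
    return [model["name"] for model in json.loads(raw).get("models", [])]

def fetch_model_names(base_url: str = OLLAMA_BASE_URL) -> list[str]:
    """List the models available on an Ollama server."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as response:
        return parse_model_names(response.read())
```
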


@@ -1,6 +1,6 @@
* Install [uv](https://docs.astral.sh/uv/getting-started/installation/).
* Install [Podman](https://podman.io/docs/installation) (recommended) or [Docker](https://docs.docker.com/get-docker/).
* Install [`podman-compose`](https://docs.podman.io/en/latest/markdown/podman-compose.1.html) or [Docker Compose](https://docs.docker.com/compose/install/).
  To use Docker Compose with Podman, you must alias Docker Compose commands to Podman commands.
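One way to set up that aliasing, sketched as a POSIX shell profile fragment; this assumes `podman` and `podman-compose` are already on your `PATH`:

```shell
# Route Docker CLI and Docker Compose invocations to their Podman equivalents.
# Add these lines to your shell profile, for example ~/.bashrc or ~/.zshrc.
alias docker=podman
alias docker-compose=podman-compose
```
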


@@ -1 +1 @@
* Install [Python](https://www.python.org/downloads/release/python-3100/) version 3.13 or later.


@@ -1,2 +1,2 @@
* For Microsoft Windows, you must use the Windows Subsystem for Linux (WSL).
  See [Install OpenRAG on Windows](/install-windows) before proceeding.


@@ -19,8 +19,8 @@ If OpenRAG detects OAuth credentials during setup, it recommends **Advanced Setu
The OpenSearch password is required.
The Langflow password is recommended but optional.
If the Langflow password is empty, the Langflow server starts without authentication enabled. For more information, see [Langflow settings](/reference/configuration#langflow-settings).
3. Optional: Enter your OpenAI API key, or leave this field empty if you want to configure model provider credentials later during application onboarding.
@@ -57,8 +57,8 @@ If OpenRAG detects OAuth credentials during setup, it recommends **Advanced Setu
The OpenSearch password is required.
The Langflow password is recommended but optional.
If the Langflow password is empty, the Langflow server starts without authentication enabled. For more information, see [Langflow settings](/reference/configuration#langflow-settings).
3. Optional: Enter your OpenAI API key, or leave this field empty if you want to configure model provider credentials later during application onboarding.
@@ -76,7 +76,7 @@ If OpenRAG detects OAuth credentials during setup, it recommends **Advanced Setu
6. Click **Save Configuration**.
   Your passwords, API key, and OAuth credentials, if provided, are stored in the `.env` file in your OpenRAG installation directory.
   If you modified any credentials that were pulled from an existing `.env` file, those values are updated in the `.env` file.
7. Click **Start All Services** to start the OpenRAG services that run in containers.
@@ -105,10 +105,10 @@ If OpenRAG detects OAuth credentials during setup, it recommends **Advanced Setu
* `WEBHOOK_BASE_URL`: Sets the base address for the following OpenRAG OAuth connector endpoints:
  * Amazon S3: Not applicable.
  * Google Drive: `WEBHOOK_BASE_URL/connectors/google_drive/webhook`
  * OneDrive: `WEBHOOK_BASE_URL/connectors/onedrive/webhook`
  * SharePoint: `WEBHOOK_BASE_URL/connectors/sharepoint/webhook`
12. Continue with [application onboarding](#application-onboarding).
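The connector endpoints above are simple path compositions on `WEBHOOK_BASE_URL`. A sketch with a placeholder base address:

```python
# Sketch: how WEBHOOK_BASE_URL composes the OAuth connector webhook endpoints.
WEBHOOK_BASE_URL = "https://openrag.example.com"  # placeholder base address

# Amazon S3 has no webhook endpoint, so it is omitted here.
CONNECTORS = ["google_drive", "onedrive", "sharepoint"]

webhook_urls = {
    connector: f"{WEBHOOK_BASE_URL}/connectors/{connector}/webhook"
    for connector in CONNECTORS
}
```
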


@@ -34,12 +34,12 @@ When you [install OpenRAG](/install-options), you provide the initial configurat
This includes authentication credentials for OpenSearch and OAuth connectors.
This configuration determines how OpenRAG authenticates with OpenSearch and controls access to documents in your knowledge base:
* **No-auth mode (basic setup)**: If you select **Basic Setup** in the [TUI](/tui), or your `.env` file doesn't include OAuth credentials, then the OpenRAG OpenSearch instance runs in no-auth mode.
  This mode uses one anonymous JWT token for OpenSearch authentication.
  There is no differentiation between users; all users that access your OpenRAG instance can access all documents uploaded to your knowledge base.
* **OAuth mode (advanced setup)**: If you select **Advanced Setup** in the [TUI](/tui), or your `.env` file includes OAuth credentials, then the OpenRAG OpenSearch instance runs in OAuth mode.
  This mode uses a unique JWT token for each OpenRAG user, and each document is tagged with user ownership.
  Documents are filtered by user owner; users see only the documents that they uploaded or have access to through their cloud storage accounts.


@@ -26,59 +26,65 @@ Use this installation method if you don't want to [use the Terminal User Interfa
<PartialPrereqNoScript />

## Prepare your deployment

1. Clone the OpenRAG repository:

   ```bash
   git clone https://github.com/langflow-ai/openrag.git
   ```

2. Change to the root of the cloned repository:

   ```bash
   cd openrag
   ```

3. Install dependencies:

   ```bash
   uv sync
   ```

4. Create a `.env` file at the root of the cloned repository.
   You can create an empty file or copy the repository's [`.env.example`](https://github.com/langflow-ai/openrag/blob/main/.env.example) file.
   The example file contains some of the [OpenRAG environment variables](/reference/configuration) to get you started with configuring your deployment.

   ```bash
   cp .env.example .env
   ```

5. Edit the `.env` file to configure your deployment using [OpenRAG environment variables](/reference/configuration).
   The OpenRAG Docker Compose files pull values from your `.env` file to configure the OpenRAG containers.
   The following variables are required or recommended:

   * **`OPENSEARCH_PASSWORD` (Required)**: Sets the OpenSearch administrator password. It must adhere to the [OpenSearch password complexity requirements](https://docs.opensearch.org/latest/security/configuration/demo-configuration/#setting-up-a-custom-admin-password).
   * **`LANGFLOW_SUPERUSER`**: The username for the Langflow administrator user. Defaults to `admin` if not set.
   * **`LANGFLOW_SUPERUSER_PASSWORD` (Strongly recommended)**: Sets the Langflow administrator password, and determines the Langflow server's default authentication mode. If not set, the Langflow server starts without authentication enabled. For more information, see [Langflow settings](/reference/configuration#langflow-settings).
   * **`LANGFLOW_SECRET_KEY` (Strongly recommended)**: A secret encryption key for internal Langflow operations. It is recommended to [generate your own Langflow secret key](https://docs.langflow.org/api-keys-and-authentication#langflow-secret-key). If not set, Langflow generates a secret key automatically.
   * **Model provider credentials**: Provide credentials for your preferred model providers. If not set in the `.env` file, you must configure at least one provider during [application onboarding](#application-onboarding).
     * `OPENAI_API_KEY`
     * `ANTHROPIC_API_KEY`
     * `OLLAMA_ENDPOINT`
     * `WATSONX_API_KEY`
     * `WATSONX_ENDPOINT`
     * `WATSONX_PROJECT_ID`
   * **OAuth provider credentials**: To upload documents from external storage, such as Google Drive, set the required OAuth credentials for the connectors that you want to use. You can [manage OAuth credentials](/ingestion#oauth-ingestion) later, but it is recommended to configure them during initial setup so you don't have to rebuild the containers.
     * **Amazon**: Provide your AWS Access Key ID and AWS Secret Access Key with access to your S3 instance. For more information, see the AWS documentation on [Configuring access to AWS applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-applications.html).
     * **Google**: Provide your Google OAuth Client ID and Google OAuth Client Secret. You can generate these in the [Google Cloud Console](https://console.cloud.google.com/apis/credentials). For more information, see the [Google OAuth client documentation](https://developers.google.com/identity/protocols/oauth2).
     * **Microsoft**: For the Microsoft OAuth Client ID and Microsoft OAuth Client Secret, provide [Azure application registration credentials for SharePoint and OneDrive](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/app-registration?view=odsp-graph-online). For more information, see the [Microsoft Graph OAuth client documentation](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/graph-oauth).

   For more information and variables, see [Environment variables](/reference/configuration).
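As a reference point, a minimal `.env` sketch with placeholder values; replace every value with your own, and note that only `OPENSEARCH_PASSWORD` is strictly required:

```env
# Placeholder values; replace with your own.
OPENSEARCH_PASSWORD=YourStr0ng!Passw0rd
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=your_langflow_password
LANGFLOW_SECRET_KEY=your_generated_secret_key
# Set credentials for at least one model provider, for example:
OPENAI_API_KEY=your_openai_api_key
```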
6. Start `docling serve` on the host machine.
   OpenRAG Docker installations require that `docling serve` is running on port 5001 on the host machine.
   This enables [Mac MLX](https://opensource.apple.com/projects/mlx/) support for document processing.
@@ -86,7 +92,7 @@ To install OpenRAG with Docker Compose, do the following:
   uv run python scripts/docling_ctl.py start --port 5001
   ```
7. Confirm `docling serve` is running:
   ```
   uv run python scripts/docling_ctl.py status
   ```
@@ -99,7 +105,7 @@ To install OpenRAG with Docker Compose, do the following:
   PID: 27746
   ```
8. Deploy OpenRAG locally with the appropriate Docker Compose file for your environment.
   Both files deploy the same services.
   * [`docker-compose.yml`](https://github.com/langflow-ai/openrag/blob/main/docker-compose.yml) is an OpenRAG deployment with GPU support for accelerated AI processing. This Docker Compose file requires an NVIDIA GPU with [CUDA](https://docs.nvidia.com/cuda/) support.
@@ -142,7 +148,7 @@ Both files deploy the same services.
   | OpenSearch | http://localhost:9200 | Datastore for [knowledge](/knowledge). |
   | OpenSearch Dashboards | http://localhost:5601 | OpenSearch database administration interface. |
9. Wait while the containers start, and then confirm all containers are running:
   * Docker Compose:
@@ -158,7 +164,7 @@ Both files deploy the same services.
   If all containers are running, you can access your OpenRAG services at their addresses.
10. Access the OpenRAG frontend at `http://localhost:3000` to continue with [application onboarding](#application-onboarding).
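To verify reachability from a script instead of a browser, a small sketch that polls the default local service addresses listed above and reports which endpoints respond:

```python
import urllib.error
import urllib.request

# Default local addresses for the OpenRAG services started by Docker Compose.
SERVICES = {
    "OpenRAG frontend": "http://localhost:3000",
    "Langflow": "http://localhost:7860",
    "OpenSearch": "http://localhost:9200",
    "OpenSearch Dashboards": "http://localhost:5601",
}

def is_reachable(url: str, timeout: float = 3.0) -> bool:
    """Return True if the URL answers with any HTTP response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status < 500
    except urllib.error.HTTPError:
        return True  # the server answered, even if with an error status
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    for name, url in SERVICES.items():
        status = "up" if is_reachable(url) else "not reachable"
        print(f"{name} ({url}): {status}")
```
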
<PartialOnboarding />


@@ -1,12 +1,12 @@
---
title: Select an installation method
slug: /install-options
---

The [OpenRAG architecture](/#openrag-architecture) is lightweight and container-based with a central OpenRAG backend that orchestrates the various services and external connectors.
Depending on your use case, OpenRAG can assist with service management, or you can manage the services yourself.
Select the installation method that best fits your needs:
* **Use the [Terminal User Interface (TUI)](/tui) to manage services**: For guided configuration and simplified service management, install OpenRAG with TUI-managed services.
@@ -23,9 +23,9 @@ Choose the installation method that best fits your needs:
The first time you start OpenRAG, you must complete application onboarding.
This is required for all installation methods because it prepares the minimum required configuration for OpenRAG to run.
For TUI-managed services, you must also complete initial setup before you start the OpenRAG services.
For more information, see the instructions for your preferred installation method.
Your OpenRAG configuration is stored in a `.env` file in the OpenRAG installation directory.
When using TUI-managed services, the TUI prompts you for any missing values during setup and onboarding, and any values detected in a preexisting `.env` file are automatically populated.
When using self-managed services, you must predefine these values in a `.env` file, as you would for any Docker or Podman deployment.
For more information, see the instructions for your preferred installation method and [Environment variables](/reference/configuration).


@@ -17,7 +17,7 @@ For guided configuration and simplified service management, install OpenRAG with
You can use [`uv`](https://docs.astral.sh/uv/getting-started/installation/) to install OpenRAG as a managed or unmanaged dependency in a new or existing Python project.
For other installation methods, see [Select an installation method](/install-options).
## Prerequisites


@@ -22,7 +22,7 @@ The [automatic installer script](/install) also uses `uvx` to install OpenRAG.
:::
This installation method is best for testing OpenRAG by running it outside of a Python project.
For other installation methods, see [Select an installation method](/install-options).
## Prerequisites


@@ -21,7 +21,7 @@ For guided configuration and simplified service management, install OpenRAG with
The installer script installs `uv`, Docker or Podman, Docker Compose, and OpenRAG.
This installation method is best for testing OpenRAG by running it outside of a Python project.
For other installation methods, see [Select an installation method](/install-options).
## Prerequisites


@@ -16,7 +16,7 @@ Use this quickstart to install OpenRAG, and then try some of OpenRAG's core feat
<PartialPrereqWindows />
* Get an [OpenAI API key](https://platform.openai.com/api-keys).
  This quickstart uses OpenAI for simplicity.
  For other providers, see the other [installation methods](/install-options).


@ -90,36 +90,42 @@ Control how OpenRAG [processes and ingests documents](/ingestion) into your know
| `OPENRAG_DOCUMENTS_PATHS` | `./openrag-documents` | Document paths for ingestion. | | `OPENRAG_DOCUMENTS_PATHS` | `./openrag-documents` | Document paths for ingestion. |
| `PICTURE_DESCRIPTIONS_ENABLED` | `false` | Enable picture descriptions. | | `PICTURE_DESCRIPTIONS_ENABLED` | `false` | Enable picture descriptions. |
### Langflow settings ### Langflow settings {#langflow-settings}
Configure Langflow authentication. Configure the OpenRAG Langflow server's authentication, contact point, and built-in flow definitions.
:::info
The `LANGFLOW_SUPERUSER_PASSWORD` is set in your `.env` file, and this value determines the default values for several other Langflow authentication variables.
If the `LANGFLOW_SUPERUSER_PASSWORD` variable isn't set, then the Langflow server starts _without_ authentication enabled.
For better security, it is recommended to set `LANGFLOW_SUPERUSER_PASSWORD` so the [Langflow server starts with authentication enabled](https://docs.langflow.org/api-keys-and-authentication#start-a-langflow-server-with-authentication-enabled).
:::
| Variable | Default | Description |
|----------|---------|-------------|
| `LANGFLOW_AUTO_LOGIN` | Determined by `LANGFLOW_SUPERUSER_PASSWORD` | Whether to enable [auto-login mode](https://docs.langflow.org/api-keys-and-authentication#langflow-auto-login) for the Langflow visual editor and CLI. If `LANGFLOW_SUPERUSER_PASSWORD` isn't set, then `LANGFLOW_AUTO_LOGIN` is `True` and auto-login mode is enabled. If `LANGFLOW_SUPERUSER_PASSWORD` is set, then `LANGFLOW_AUTO_LOGIN` is `False` and auto-login mode is disabled. Langflow API calls always require authentication with a Langflow API key regardless of the auto-login setting. |
| `LANGFLOW_ENABLE_SUPERUSER_CLI` | Determined by `LANGFLOW_SUPERUSER_PASSWORD` | Whether to enable the [Langflow CLI `langflow superuser` command](https://docs.langflow.org/api-keys-and-authentication#langflow-enable-superuser-cli). If `LANGFLOW_SUPERUSER_PASSWORD` isn't set, then `LANGFLOW_ENABLE_SUPERUSER_CLI` is `True` and superuser accounts can be created with the Langflow CLI. If `LANGFLOW_SUPERUSER_PASSWORD` is set, then `LANGFLOW_ENABLE_SUPERUSER_CLI` is `False` and the `langflow superuser` command is disabled. |
| `LANGFLOW_NEW_USER_IS_ACTIVE` | Determined by `LANGFLOW_SUPERUSER_PASSWORD` | Whether new [Langflow user accounts are active by default](https://docs.langflow.org/api-keys-and-authentication#langflow-new-user-is-active). If `LANGFLOW_SUPERUSER_PASSWORD` isn't set, then `LANGFLOW_NEW_USER_IS_ACTIVE` is `True` and new user accounts are active by default. If `LANGFLOW_SUPERUSER_PASSWORD` is set, then `LANGFLOW_NEW_USER_IS_ACTIVE` is `False` and new user accounts are inactive by default. |
| `LANGFLOW_PUBLIC_URL` | `http://localhost:7860` | Public URL for the Langflow instance. Forms the base URL for Langflow API calls and other interfaces with your OpenRAG Langflow instance. |
| `LANGFLOW_KEY` | Automatically generated | A Langflow API key to run flows with Langflow API calls. Because Langflow API keys are server-specific, allow OpenRAG to generate this key initially. You can create additional Langflow API keys after deploying OpenRAG. |
| `LANGFLOW_SECRET_KEY` | Automatically generated | Secret encryption key for Langflow internal operations. It is recommended to [generate your own Langflow secret key](https://docs.langflow.org/api-keys-and-authentication#langflow-secret-key) for this variable. If not set, Langflow generates a secret key automatically. |
| `LANGFLOW_SUPERUSER` | `admin` | Username for the Langflow administrator user. |
| `LANGFLOW_SUPERUSER_PASSWORD` | Not set | Langflow administrator password. If not set, the Langflow server starts _without_ authentication enabled. It is recommended to set `LANGFLOW_SUPERUSER_PASSWORD` so the [Langflow server starts with authentication enabled](https://docs.langflow.org/api-keys-and-authentication#start-a-langflow-server-with-authentication-enabled). |
| `LANGFLOW_URL` | `http://localhost:7860` | URL for the Langflow instance. |
| `LANGFLOW_CHAT_FLOW_ID`, `LANGFLOW_INGEST_FLOW_ID`, `NUDGES_FLOW_ID` | Built-in flow IDs | These variables are set automatically to the IDs of the chat, ingestion, and nudges [flows](/agents). The default values are found in [`.env.example`](https://github.com/langflow-ai/openrag/blob/main/.env.example). Only change these values if you want to replace a built-in flow with your own custom flow. The flow JSON must be present in your version of the OpenRAG codebase. For example, if you [deploy self-managed services](/docker), you can add the flow JSON to your local clone of the OpenRAG repository before deploying OpenRAG. |
| `SYSTEM_PROMPT` | `You are a helpful AI assistant with access to a knowledge base. Answer questions based on the provided context.` | System prompt instructions for the agent driving the **Chat** flow. |
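As the table notes, several authentication defaults flip together based on whether `LANGFLOW_SUPERUSER_PASSWORD` is set. The following Python sketch is illustrative only (it mirrors the table, not OpenRAG's actual implementation):

```python
def langflow_auth_defaults(env: dict) -> dict:
    """Derive Langflow auth defaults from LANGFLOW_SUPERUSER_PASSWORD.

    Illustrative sketch of the table above, not OpenRAG source code.
    """
    password_set = bool(env.get("LANGFLOW_SUPERUSER_PASSWORD"))
    return {
        # With a superuser password set, auto-login and the superuser CLI
        # are disabled and new users start inactive; without one, the
        # opposite defaults apply.
        "LANGFLOW_AUTO_LOGIN": not password_set,
        "LANGFLOW_ENABLE_SUPERUSER_CLI": not password_set,
        "LANGFLOW_NEW_USER_IS_ACTIVE": not password_set,
    }
```

You can still override any of these variables explicitly in `.env`; the derivation only supplies the defaults.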
### OAuth provider settings

Configure [OAuth providers](/ingestion#oauth-ingestion) and external service integrations.

| Variable | Default | Description |
|----------|---------|-------------|
| `AWS_ACCESS_KEY_ID`<br/>`AWS_SECRET_ACCESS_KEY` | Not set | Enable access to AWS S3 with an [AWS OAuth app](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-applications.html) integration. |
| `GOOGLE_OAUTH_CLIENT_ID`<br/>`GOOGLE_OAUTH_CLIENT_SECRET` | Not set | Enable the [Google OAuth client](https://developers.google.com/identity/protocols/oauth2) integration. You can generate these values in the [Google Cloud Console](https://console.cloud.google.com/apis/credentials). |
| `MICROSOFT_GRAPH_OAUTH_CLIENT_ID`<br/>`MICROSOFT_GRAPH_OAUTH_CLIENT_SECRET` | Not set | Enable the [Microsoft Graph OAuth client](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/graph-oauth) integration by providing [Azure application registration credentials for SharePoint and OneDrive](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/app-registration?view=odsp-graph-online). |
| `WEBHOOK_BASE_URL` | Not set | Base URL for OAuth connector webhook endpoints. If not set, a default base URL is used. |
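To see where a value like `GOOGLE_OAUTH_CLIENT_ID` ends up, here is a minimal sketch of building a standard Google OAuth 2.0 authorization URL. The scope and redirect URI below are placeholders for illustration, not values OpenRAG uses:

```python
from urllib.parse import urlencode

def google_auth_url(client_id: str, redirect_uri: str) -> str:
    # Standard Google OAuth 2.0 authorization endpoint. The Drive scope
    # and redirect URI here are hypothetical examples.
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": "https://www.googleapis.com/auth/drive.readonly",
        "access_type": "offline",
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
```

The redirect URI you pass must match one registered for the OAuth client in the Google Cloud Console.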
### OpenSearch settings

Configure OpenSearch database authentication.

| Variable | Default | Description |
|----------|---------|-------------|
| `OPENSEARCH_HOST` | `localhost` | OpenSearch instance host. |
| `OPENSEARCH_PORT` | `9200` | OpenSearch instance port. |
| `OPENSEARCH_USERNAME` | `admin` | OpenSearch administrator username. |
| `OPENSEARCH_PASSWORD` | Must be set at startup | Required. OpenSearch administrator password. Must adhere to the [OpenSearch password complexity requirements](https://docs.opensearch.org/latest/security/configuration/demo-configuration/#setting-up-a-custom-admin-password). You must set this directly in the `.env` file or in the TUI's [**Basic/Advanced Setup**](/install#setup). |
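As a rough pre-check before startup, the sketch below tests a password against one common interpretation of the complexity rules (at least 8 characters with uppercase, lowercase, digit, and special characters). Treat this as an approximation; the linked OpenSearch documentation is the authoritative policy:

```python
import re

def looks_strong(password: str) -> bool:
    # Approximate complexity check, assuming the common
    # upper/lower/digit/special rule; not the official validator.
    return (
        len(password) >= 8
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )
```

If OpenSearch rejects the password at startup, the container logs report the failure even when a check like this passes.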
### System settings

Configure general system components, session management, and logging.

| Variable | Default | Description |
|----------|---------|-------------|
| `LANGFLOW_KEY_RETRIES` | `15` | Number of retries for Langflow key generation. |
| `LANGFLOW_KEY_RETRY_DELAY` | `2.0` | Delay between retries in seconds. |
| `LANGFLOW_VERSION` | `OPENRAG_VERSION` | Langflow Docker image version. By default, OpenRAG uses the `OPENRAG_VERSION` for the Langflow Docker image version. |
| `LOG_FORMAT` | Not set | Set to `json` to enable JSON-formatted log output. If not set, the default format is used. |
| `LOG_LEVEL` | `INFO` | Logging level. Can be one of `DEBUG`, `INFO`, `WARNING`, or `ERROR`. `DEBUG` provides the most detailed logs but can impact performance. |
| `MAX_WORKERS` | `1` | Maximum number of workers for document processing. |
| `OPENRAG_VERSION` | `latest` | The version of the OpenRAG Docker images to run. For more information, see [Upgrade OpenRAG](/upgrade). |
| `SERVICE_NAME` | `openrag` | Service name for logging. |
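To illustrate what `LOG_FORMAT=json` style output can look like, here is a minimal Python formatter. The field names are assumptions for illustration, not OpenRAG's actual log schema:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    # Hypothetical JSON log formatter; "service" mirrors SERVICE_NAME.
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "service": "openrag",
            "level": record.levelname,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("openrag")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("document processed")
```

JSON-formatted logs are easier to ship to log aggregators, at the cost of human readability in a terminal.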