diff --git a/docs/docs/_partial-factory-reset-warning.mdx b/docs/docs/_partial-factory-reset-warning.mdx
index 582125fb..97f2ca7e 100644
--- a/docs/docs/_partial-factory-reset-warning.mdx
+++ b/docs/docs/_partial-factory-reset-warning.mdx
@@ -2,7 +2,7 @@
This is a destructive action that does the following:
* Destroys all OpenRAG containers, volumes, and local images with `docker compose down --volumes --remove-orphans --rmi local`.
-* Prunes any additional Docker objects with `docker system prune -f`.
+* Prunes unused Docker objects, such as stopped containers, unused networks, dangling images, and build cache, with `docker system prune -f`.
* Deletes the contents of OpenRAG's `config` and `./opensearch-data` directories.
* Deletes the `conversations.json` file.
diff --git a/docs/docs/_partial-onboarding.mdx b/docs/docs/_partial-onboarding.mdx
index feca9955..2ef5c4ad 100644
--- a/docs/docs/_partial-onboarding.mdx
+++ b/docs/docs/_partial-onboarding.mdx
@@ -2,13 +2,13 @@ import Icon from "@site/src/components/icon/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-## Application onboarding
+## Complete the application onboarding process
-The first time you start the OpenRAG application, you must complete application onboarding to select language and embedding models that are essential for OpenRAG features like the [**Chat**](/chat).
+The first time you start the OpenRAG application, you must complete the application onboarding process to select language and embedding models that are essential for OpenRAG features like the [**Chat**](/chat).
Some of these variables, such as the embedding models, can be changed seamlessly after onboarding.
Others are immutable and require you to destroy and recreate the OpenRAG containers.
-For more information, see [Environment variables](/reference/configuration).
+For more information, see the [OpenRAG environment variables reference](/reference/configuration).
You can use different providers for your language model and embedding model, such as Anthropic for the language model and OpenAI for the embedding model.
Additionally, you can set multiple embedding models.
@@ -22,9 +22,9 @@ You only need to complete onboarding for your preferred providers.
Anthropic doesn't provide embedding models. If you select Anthropic for your language model, you must select a different provider for the embedding model.
:::
-1. Enter your Anthropic API key, or enable **Get API key from environment variable** to pull the key from your OpenRAG `.env` file.
+1. Enter your Anthropic API key, or enable **Get API key from environment variable** to pull the key from your [OpenRAG `.env` file](/reference/configuration).
- If you haven't set `ANTHROPIC_API_KEY` in your `.env` file, you must enter the key manually.
+ If you set `ANTHROPIC_API_KEY` in your OpenRAG `.env` file, this value can be populated automatically.
2. Under **Advanced settings**, select the language model that you want to use.
@@ -36,7 +36,7 @@ For information about another provider's credentials and settings, see the instr
5. Click **Complete**.
After you configure the embedding model, OpenRAG uses your credentials and models to ingest some [initial documents](/knowledge#default-documents). This tests the connection, and it allows you to ask OpenRAG about itself in the [**Chat**](/chat).
- If there is a problem with the model configuration, an error occurs and you are redirected back to application onboarding.
+ If there is a problem with the model configuration, an error occurs and you are redirected back to the application onboarding screen.
Verify that the credential is valid and has access to the selected model, and then click **Complete** to retry ingestion.
6. Continue through the overview slides for a brief introduction to OpenRAG, or click **Skip overview**.
@@ -47,6 +47,8 @@ The overview demonstrates some basic functionality that is covered in the [quick
1. Use the values from your IBM watsonx deployment for the **watsonx.ai API Endpoint**, **IBM Project ID**, and **IBM API key** fields.
+ If you set `WATSONX_API_KEY`, `WATSONX_API_URL`, or `WATSONX_PROJECT_ID` in your [OpenRAG `.env` file](/reference/configuration), these values can be populated automatically.
+
2. Under **Advanced settings**, select the language model that you want to use.
3. Click **Complete**.
@@ -57,7 +59,7 @@ For information about another provider's credentials and settings, see the instr
5. Click **Complete**.
After you configure the embedding model, OpenRAG uses your credentials and models to ingest some [initial documents](/knowledge#default-documents). This tests the connection, and it allows you to ask OpenRAG about itself in the [**Chat**](/chat).
- If there is a problem with the model configuration, an error occurs and you are redirected back to application onboarding.
+ If there is a problem with the model configuration, an error occurs and you are redirected back to the application onboarding screen.
Verify that the credentials are valid and have access to the selected model, and then click **Complete** to retry ingestion.
6. Continue through the overview slides for a brief introduction to OpenRAG, or click **Skip overview**.
@@ -76,7 +78,7 @@ The recommendations given here are a reasonable starting point for users with at
The OpenRAG team recommends the OpenAI `gpt-oss:20b` language model and the [`nomic-embed-text`](https://ollama.com/library/nomic-embed-text) embedding model.
However, `gpt-oss:20b` uses 16GB of RAM, so consider using Ollama Cloud or running Ollama on a remote machine.
-1. [Install Ollama locally or on a remote server](https://docs.ollama.com/) or [run models in Ollama Cloud](https://docs.ollama.com/cloud).
+1. [Install Ollama locally or on a remote server](https://docs.ollama.com/), or [run models in Ollama Cloud](https://docs.ollama.com/cloud).
If you are running a remote server, it must be accessible from your OpenRAG deployment.
@@ -98,7 +100,7 @@ However, `gpt-oss:20b` uses 16GB of RAM, so consider using Ollama Cloud or runni
4. Click **Complete**.
After you configure the embedding model, OpenRAG uses the address and models to ingest some [initial documents](/knowledge#default-documents). This tests the connection, and it allows you to ask OpenRAG about itself in the [**Chat**](/chat).
- If there is a problem with the model configuration, an error occurs and you are redirected back to application onboarding.
+ If there is a problem with the model configuration, an error occurs and you are redirected back to the application onboarding screen.
Verify that the server address is valid, and that the selected model is running on the server.
Then, click **Complete** to retry ingestion.
@@ -108,9 +110,9 @@ The overview demonstrates some basic functionality that is covered in the [quick
-1. Enter your OpenAI API key, or enable **Get API key from environment variable** to pull the key from your OpenRAG `.env` file.
+1. Enter your OpenAI API key, or enable **Get API key from environment variable** to pull the key from your [OpenRAG `.env` file](/reference/configuration).
- If you entered an OpenAI API key during setup, enable **Get API key from environment variable**.
+ If you set `OPENAI_API_KEY` in your OpenRAG `.env` file, this value can be populated automatically.
2. Under **Advanced settings**, select the language model that you want to use.
@@ -122,7 +124,7 @@ For information about another provider's credentials and settings, see the instr
5. Click **Complete**.
After you configure the embedding model, OpenRAG uses your credentials and models to ingest some [initial documents](/knowledge#default-documents). This tests the connection, and it allows you to ask OpenRAG about itself in the [**Chat**](/chat).
- If there is a problem with the model configuration, an error occurs and you are redirected back to application onboarding.
+ If there is a problem with the model configuration, an error occurs and you are redirected back to the application onboarding screen.
Verify that the credential is valid and has access to the selected model, and then click **Complete** to retry ingestion.
6. Continue through the overview slides for a brief introduction to OpenRAG, or click **Skip overview**.
diff --git a/docs/docs/_partial-opensearch-auth-mode.mdx b/docs/docs/_partial-opensearch-auth-mode.mdx
new file mode 100644
index 00000000..e7b2ae99
--- /dev/null
+++ b/docs/docs/_partial-opensearch-auth-mode.mdx
@@ -0,0 +1,11 @@
+* **No-auth mode**: If you select **Basic Setup** in the [TUI](/tui), or your [OpenRAG `.env` file](/reference/configuration) doesn't include OAuth credentials, then the OpenRAG OpenSearch instance runs in no-auth mode.
+
+ This mode uses one anonymous JWT token for OpenSearch authentication.
+ There is no differentiation between users; all users that access your OpenRAG instance can access all documents uploaded to your knowledge base.
+
+* **OAuth mode**: If you select **Advanced Setup** in the [TUI](/tui), or your [OpenRAG `.env` file](/reference/configuration) includes OAuth credentials, then the OpenRAG OpenSearch instance runs in OAuth mode.
+
+ This mode uses a unique JWT token for each OpenRAG user, and each document is tagged with user ownership.
+ Documents are filtered by user owner; users see only the documents that they uploaded or have access to through their cloud storage accounts.
+
+ To enable OAuth mode after initial setup, see [Ingest files with OAuth connectors](/ingestion#oauth-ingestion).
\ No newline at end of file
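The ownership model described for OAuth mode can be sketched conceptually. This is illustrative only; the document fields and function below are assumptions for the sake of the example, not OpenRAG's actual implementation:

```python
# Conceptual sketch of OAuth-mode document filtering (illustrative only;
# not OpenRAG's actual code). Each document is tagged with an owner, and
# queries return only documents visible to the requesting user.
documents = [
    {"id": 1, "owner": "alice", "title": "Q3 report"},
    {"id": 2, "owner": "bob", "title": "Design notes"},
    {"id": 3, "owner": "alice", "title": "Roadmap"},
]

def visible_documents(docs, user):
    """OAuth mode: return only the documents owned by `user`.

    In no-auth mode there is no per-user filtering, so every user
    would see the full list instead."""
    return [d for d in docs if d["owner"] == user]

print([d["id"] for d in visible_documents(documents, "alice")])  # [1, 3]
```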
diff --git a/docs/docs/_partial-setup.mdx b/docs/docs/_partial-setup.mdx
index c33833d6..9af2ecbf 100644
--- a/docs/docs/_partial-setup.mdx
+++ b/docs/docs/_partial-setup.mdx
@@ -1,8 +1,11 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
+import PartialOpenSearchAuthMode from '@site/docs/_partial-opensearch-auth-mode.mdx';
You can use either **Basic Setup** or **Advanced Setup** to configure OpenRAG.
-This choice determines [how OpenRAG authenticates with OpenSearch and controls access to documents](/knowledge#auth).
+This choice determines how OpenRAG authenticates with your deployment's [OpenSearch instance](/knowledge), and it controls user access to documents stored in your OpenSearch knowledge base:
+
+<PartialOpenSearchAuthMode />
:::info
You must use **Advanced Setup** if you want to [use OAuth connectors to upload documents from cloud storage](/ingestion#oauth-ingestion).
@@ -22,11 +25,19 @@ If OpenRAG detects OAuth credentials during setup, it recommends **Advanced Setu
The Langflow password is recommended but optional.
If the Langflow password is empty, the Langflow server starts without authentication enabled. For more information, see [Langflow settings](/reference/configuration#langflow-settings).
-3. Optional: Enter your OpenAI API key, or leave this field empty if you want to configure model provider credentials later during application onboarding.
+3. Optional: Enter your OpenAI API key, or leave this field empty to provide model provider credentials during the application onboarding process.
+
+   There is no material difference between providing the key now and providing it later during the [application onboarding process](#application-onboarding).
+   If you provide a key now, it can be populated automatically during onboarding when you select the OpenAI model provider and then enable **Get API key from environment variable**.
+
+ OpenRAG's core functionality requires access to language and embedding models.
+ By default, OpenRAG uses OpenAI models.
+   If you aren't sure which models or providers to use, provide an OpenAI API key so that OpenRAG can use its default model configuration.
+ If you want to use a different model provider, you can leave this field empty.
4. Click **Save Configuration**.
- Your passwords and API key, if provided, are stored in the `.env` file in your OpenRAG installation directory.
+ Your passwords and API key, if provided, are stored in the [OpenRAG `.env` file](/reference/configuration) in your OpenRAG installation directory.
If you modified any credentials that were pulled from an existing `.env` file, those values are updated in the `.env` file.
5. Click **Start All Services** to start the OpenRAG services that run in containers.
@@ -46,7 +57,7 @@ If OpenRAG detects OAuth credentials during setup, it recommends **Advanced Setu
* From the TUI main menu, click **Open App**.
* In your browser, navigate to `localhost:3000`.
-8. Continue with [application onboarding](#application-onboarding).
+8. Continue with the [application onboarding process](#application-onboarding).
@@ -60,9 +71,17 @@ If OpenRAG detects OAuth credentials during setup, it recommends **Advanced Setu
The Langflow password is recommended but optional.
If the Langflow password is empty, the Langflow server starts without authentication enabled. For more information, see [Langflow settings](/reference/configuration#langflow-settings).
-3. Optional: Enter your OpenAI API key, or leave this field empty if you want to configure model provider credentials later during application onboarding.
+3. Optional: Enter your OpenAI API key, or leave this field empty to provide model provider credentials during the application onboarding process.
-4. To upload documents from external storage, such as Google Drive, add the required OAuth credentials for the connectors that you want to use. These settings can be populated automatically if OpenRAG detects these credentials in a `.env` file in the OpenRAG installation directory.
+   There is no material difference between providing the key now and providing it later during the [application onboarding process](#application-onboarding).
+   If you provide a key now, it can be populated automatically during onboarding when you select the OpenAI model provider and then enable **Get API key from environment variable**.
+
+ OpenRAG's core functionality requires access to language and embedding models.
+ By default, OpenRAG uses OpenAI models.
+   If you aren't sure which models or providers to use, provide an OpenAI API key so that OpenRAG can use its default model configuration.
+ If you want to use a different model provider, you can leave this field empty.
+
+4. To upload documents from external storage, such as Google Drive, add the required OAuth credentials for the connectors that you want to use. These settings can be populated automatically if OpenRAG detects these credentials in an [OpenRAG `.env` file](/reference/configuration) in the OpenRAG installation directory.
* **Amazon**: Provide your AWS Access Key ID and AWS Secret Access Key with access to your S3 instance. For more information, see the AWS documentation on [Configuring access to AWS applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-applications.html).
* **Google**: Provide your Google OAuth Client ID and Google OAuth Client Secret. You can generate these in the [Google Cloud Console](https://console.cloud.google.com/apis/credentials). For more information, see the [Google OAuth client documentation](https://developers.google.com/identity/protocols/oauth2).
@@ -76,7 +95,7 @@ Register these redirect values with your OAuth provider as they are presented in
6. Click **Save Configuration**.
- Your passwords, API key, and OAuth credentials, if provided, are stored in the `.env` file in your OpenRAG installation directory.
+ Your passwords, API key, and OAuth credentials, if provided, are stored in the [OpenRAG `.env` file](/reference/configuration) in your OpenRAG installation directory.
If you modified any credentials that were pulled from an existing `.env` file, those values are updated in the `.env` file.
7. Click **Start All Services** to start the OpenRAG services that run in containers.
@@ -101,16 +120,16 @@ Register these redirect values with your OAuth provider as they are presented in
11. If required, you can edit the following additional environment variables.
Only change these variables if your OpenRAG deployment has a non-default network configuration, such as a reverse proxy or custom domain.
- * `LANGFLOW_PUBLIC_URL`: Sets the base address to access the Langflow web interface. This is where users interact with flows in a browser.
+ * `LANGFLOW_PUBLIC_URL`: Sets the base address to access the Langflow web interface. This is where users interact with flows in a browser.
+ * `WEBHOOK_BASE_URL`: Sets the base address for the following OpenRAG OAuth connector endpoints:
+ * Amazon S3: Not applicable.
+ * Google Drive: `WEBHOOK_BASE_URL/connectors/google_drive/webhook`
+ * OneDrive: `WEBHOOK_BASE_URL/connectors/onedrive/webhook`
+ * SharePoint: `WEBHOOK_BASE_URL/connectors/sharepoint/webhook`
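The webhook endpoints listed for `WEBHOOK_BASE_URL` follow a simple pattern, sketched here with an assumed custom domain (Amazon S3 has no webhook endpoint):

```python
# Illustrative construction of the OAuth connector webhook endpoints from
# WEBHOOK_BASE_URL. The base URL is an assumed custom domain, not a real
# deployment address.
WEBHOOK_BASE_URL = "https://openrag.example.com"

connectors = ["google_drive", "onedrive", "sharepoint"]
endpoints = {c: f"{WEBHOOK_BASE_URL}/connectors/{c}/webhook" for c in connectors}

print(endpoints["google_drive"])
# https://openrag.example.com/connectors/google_drive/webhook
```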
- * `WEBHOOK_BASE_URL`: Sets the base address for the following OpenRAG OAuth connector endpoints:
-
- * Amazon S3: Not applicable.
- * Google Drive: `WEBHOOK_BASE_URL/connectors/google_drive/webhook`
- * OneDrive: `WEBHOOK_BASE_URL/connectors/onedrive/webhook`
- * SharePoint: `WEBHOOK_BASE_URL/connectors/sharepoint/webhook`
-
-12. Continue with [application onboarding](#application-onboarding).
+12. Continue with the [application onboarding process](#application-onboarding).
-
\ No newline at end of file
+
+
+
\ No newline at end of file
diff --git a/docs/docs/core-components/agents.mdx b/docs/docs/core-components/agents.mdx
index b1124155..d242552d 100644
--- a/docs/docs/core-components/agents.mdx
+++ b/docs/docs/core-components/agents.mdx
@@ -32,7 +32,7 @@ For example, to view and edit the built-in **Chat** flow (the **OpenRAG OpenSear
If prompted to acknowledge that you are entering Langflow, click **Proceed**.
- If Langflow requests login information, enter the `LANGFLOW_SUPERUSER` and `LANGFLOW_SUPERUSER_PASSWORD` from the `.env` file in your OpenRAG installation directory.
+ If Langflow requests login information, enter the `LANGFLOW_SUPERUSER` and `LANGFLOW_SUPERUSER_PASSWORD` from your [OpenRAG `.env` file](/reference/configuration) in your OpenRAG installation directory.

@@ -63,7 +63,7 @@ Explore the [Langflow documentation](https://docs.langflow.org/) to learn more a
By default, OpenRAG is pinned to the latest Langflow Docker image for stability.
-If necessary, you can set a specific Langflow version with the [`LANGFLOW_VERSION`](/reference/configuration). However, there are risks to changing this setting:
+If necessary, you can set a specific Langflow version with the `LANGFLOW_VERSION` [environment variable](/reference/configuration). However, there are risks to changing this setting:
* The [Langflow documentation](https://docs.langflow.org/) describes the functionality present in the latest release of the Langflow OSS Python package. If your `LANGFLOW_VERSION` is different, the Langflow documentation might not align with the features and default settings in your OpenRAG installation.
diff --git a/docs/docs/core-components/ingestion.mdx b/docs/docs/core-components/ingestion.mdx
index 9fe95857..7a69e3dd 100644
--- a/docs/docs/core-components/ingestion.mdx
+++ b/docs/docs/core-components/ingestion.mdx
@@ -80,7 +80,7 @@ You can do this during [installation](/install#setup), or you can add the creden
3. The TUI presents redirect URIs for your OAuth app that you must register with your OAuth provider.
These are the URLs your OAuth provider will redirect back to after users authenticate and grant access to their cloud storage.
-4. Click **Save Configuration** to add the OAuth credentials to your OpenRAG [`.env`](/reference/configuration) file.
+4. Click **Save Configuration** to add the OAuth credentials to your [OpenRAG `.env` file](/reference/configuration).
5. Click **Start All Services** to restart the OpenRAG containers with OAuth enabled.
@@ -90,7 +90,7 @@ You should be prompted to sign in to your OAuth provider before being redirected
-If you [installed OpenRAG with self-managed services](/docker), set OAuth credentials in the `.env` file for Docker Compose.
+If you [installed OpenRAG with self-managed services](/docker), set OAuth credentials in your [OpenRAG `.env` file](/reference/configuration).
You can do this during [initial set up](/docker#setup), or you can add the credentials afterwards:
@@ -98,7 +98,7 @@ You can do this during [initial set up](/docker#setup), or you can add the crede
-2. Edit the `.env` file for Docker Compose to add the OAuth credentials for the cloud storage providers that you want to use:
+2. Edit your OpenRAG `.env` file to add the OAuth credentials for the cloud storage providers that you want to use:
* **Amazon**: Provide your AWS Access Key ID and AWS Secret Access Key with access to your S3 instance. For more information, see the AWS documentation on [Configuring access to AWS applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-applications.html).
diff --git a/docs/docs/core-components/knowledge.mdx b/docs/docs/core-components/knowledge.mdx
index 09ce6d7d..31c6d8a1 100644
--- a/docs/docs/core-components/knowledge.mdx
+++ b/docs/docs/core-components/knowledge.mdx
@@ -4,6 +4,7 @@ slug: /knowledge
---
import Icon from "@site/src/components/icon/icon";
+import PartialOpenSearchAuthMode from '@site/docs/_partial-opensearch-auth-mode.mdx';
OpenRAG includes a built-in [OpenSearch](https://docs.opensearch.org/latest/) instance that serves as the underlying datastore for your _knowledge_ (documents).
This specialized database is used to store and retrieve your documents and the associated vector data (embeddings).
@@ -27,7 +28,7 @@ Click a document to view the chunks produced from splitting the document during
### Default documents {#default-documents}
By default, OpenRAG includes some initial documents about OpenRAG.
-These documents are ingested automatically during [application onboarding](/install#application-onboarding).
+These documents are ingested automatically during the [application onboarding process](/install#application-onboarding).
You can use these documents to ask OpenRAG about itself, and to test the [**Chat**](/chat) feature before uploading your own documents.
@@ -36,21 +37,10 @@ It is recommended that you keep these documents, and use [filters](/knowledge-fi
## OpenSearch authentication and document access {#auth}
-When you [install OpenRAG](/install-options), you provide the initial configuration values for your OpenRAG services.
-This includes authentication credentials for OpenSearch and OAuth connectors.
-This configuration determines how OpenRAG authenticates with OpenSearch and controls access to documents in your knowledge base:
+When you [install OpenRAG](/install-options), you provide the initial configuration values for your OpenRAG services, including authentication credentials for OpenSearch and OAuth connectors.
+This configuration determines how OpenRAG authenticates with your deployment's OpenSearch instance, and it controls user access to documents in your knowledge base:
-* **No-auth mode (basic setup)**: If you select **Basic Setup** in the [TUI](/tui), or your `.env` file doesn't include OAuth credentials, then the OpenRAG OpenSearch instance runs in no-auth mode.
-
- This mode uses one anonymous JWT token for OpenSearch authentication.
- There is no differentiation between users; all users that access your OpenRAG instance can access all documents uploaded to your knowledge base.
-
-* **OAuth mode (advanced setup)**: If you select **Advanced Setup** in the [TUI](/tui), or your `.env` file includes OAuth credentials, then the OpenRAG OpenSearch instance runs in OAuth mode.
-
- This mode uses a unique JWT token for each OpenRAG user, and each document is tagged with user ownership.
- Documents are filtered by user owner; users see only the documents that they uploaded or have access to through their cloud storage accounts.
-
- You can enable OAuth mode after installation, as explained in [Ingest files with OAuth connectors](/ingestion#oauth-ingestion).
+<PartialOpenSearchAuthMode />
## OpenSearch indexes
@@ -80,18 +70,18 @@ If needed, you can use [filters](/knowledge-filters) to separate documents that
### Set the embedding model and dimensions {#set-the-embedding-model-and-dimensions}
-When you [install OpenRAG](/install-options), you select at least one embedding model during [application onboarding](/install#application-onboarding).
+When you [install OpenRAG](/install-options), you select at least one embedding model during the [application onboarding process](/install#application-onboarding).
OpenRAG automatically detects and configures the appropriate vector dimensions for your selected embedding model, ensuring optimal search performance and compatibility.
In the OpenRAG repository, you can find the complete list of supported models in [`models_service.py`](https://github.com/langflow-ai/openrag/blob/main/src/services/models_service.py) and the corresponding vector dimensions in [`settings.py`](https://github.com/langflow-ai/openrag/blob/main/src/config/settings.py).
-During application onboarding, you can select from the supported models.
+During the application onboarding process, you can select from the supported models.
The default embedding dimension is `1536`, and the default model is the OpenAI `text-embedding-3-small`.
-If you want to use an unsupported model, you must manually set the model in your [OpenRAG configuration](/reference/configuration).
+If you want to use an unsupported model, you must manually set the model in your [OpenRAG `.env` file](/reference/configuration).
If you use an unsupported embedding model that doesn't have defined dimensions in `settings.py`, then OpenRAG falls back to the default dimensions (1536) and logs a warning. OpenRAG's OpenSearch instance and flows continue to work, but [similarity search](https://www.ibm.com/think/topics/vector-search) quality can be affected if the actual model dimensions aren't 1536.
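The dimension fallback described above can be sketched as follows. The lookup table here is illustrative, a tiny stand-in for the real mapping in `settings.py`, though the dimensions shown do match the named models:

```python
# Conceptual sketch of embedding-dimension resolution with fallback.
# This is not OpenRAG's actual settings.py; the table below is a small
# illustrative subset of supported models.
import logging

DEFAULT_DIMENSIONS = 1536

KNOWN_DIMENSIONS = {
    "text-embedding-3-small": 1536,  # OpenRAG's default embedding model
    "text-embedding-3-large": 3072,
    "nomic-embed-text": 768,
}

def resolve_dimensions(model: str) -> int:
    """Return the vector dimensions for a model, falling back to the
    default (1536) with a warning when the model is not recognized."""
    if model not in KNOWN_DIMENSIONS:
        logging.warning(
            "Unknown embedding model %r; falling back to %d dimensions",
            model, DEFAULT_DIMENSIONS,
        )
        return DEFAULT_DIMENSIONS
    return KNOWN_DIMENSIONS[model]
```

With an unsupported model, similarity search still runs against the fallback index size, but result quality can degrade if the model's true dimensions differ from 1536.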
-To change the embedding model after onboarding, it is recommended that you modify the embedding model setting in the OpenRAG **Settings** page or in your [OpenRAG configuration](/reference/configuration).
+To change the embedding model after onboarding, it is recommended that you modify the embedding model setting in the OpenRAG **Settings** page or in your [OpenRAG `.env` file](/reference/configuration).
This will automatically update all relevant [OpenRAG flows](/agents) to use the new embedding model configuration.
### Set Docling parameters
@@ -121,7 +111,7 @@ For information about starting and stopping OpenRAG native services, like Doclin
* **Embedding model**: Select the model to use to generate vector embeddings for your documents.
This is initially set during installation.
- The recommended way to change this setting is in the OpenRAG **Settings** or your [OpenRAG configuration](/reference/configuration).
+ The recommended way to change this setting is in the OpenRAG **Settings** or your [OpenRAG `.env` file](/reference/configuration).
This will automatically update all relevant [OpenRAG flows](/agents) to use the new embedding model configuration.
If you uploaded documents prior to changing the embedding model, you can [create filters](/knowledge-filters) to separate documents embedded with different models, or you can reupload all documents to regenerate embeddings with the new model.
@@ -149,7 +139,7 @@ The default value is 200 characters, which represents an overlap of 20 percent i
The default path for local uploads is the `./openrag-documents` subdirectory in your OpenRAG installation directory. This is mounted to the `/app/openrag-documents/` directory inside the OpenRAG container. Files added to the host or container directory are visible in both locations.
-To change this location, modify the **Documents Paths** variable in either the [**Advanced Setup** menu](/install#setup) or in the `.env` used by Docker Compose.
+To change this location, modify the **Documents Paths** variable in either the [**Advanced Setup** menu](/install#setup) or in your [OpenRAG `.env` file](/reference/configuration).
## Delete knowledge {#delete-knowledge}
diff --git a/docs/docs/get-started/docker.mdx b/docs/docs/get-started/docker.mdx
index 8121e4da..9eca74f1 100644
--- a/docs/docs/get-started/docker.mdx
+++ b/docs/docs/get-started/docker.mdx
@@ -20,12 +20,12 @@ Use this installation method if you don't want to [use the Terminal User Interfa
-
-
+
+
## Prepare your deployment {#setup}
1. Clone the OpenRAG repository:
@@ -61,13 +61,13 @@ The following variables are required or recommended:
* **`OPENSEARCH_PASSWORD` (Required)**: Sets the OpenSearch administrator password. It must adhere to the [OpenSearch password complexity requirements](https://docs.opensearch.org/latest/security/configuration/demo-configuration/#setting-up-a-custom-admin-password).
- * **`LANGFLOW_SUPERUSER`**: The username for the Langflow administrator user. Defaults to `admin` if not set.
+ * **`LANGFLOW_SUPERUSER`**: The username for the Langflow administrator user. If `LANGFLOW_SUPERUSER` isn't set, then the default value is `admin`.
- * **`LANGFLOW_SUPERUSER_PASSWORD` (Strongly recommended)**: Sets the Langflow administrator password, and determines the Langflow server's default authentication mode. If not set, the Langflow server starts without authentication enabled. For more information, see [Langflow settings](/reference/configuration#langflow-settings).
+ * **`LANGFLOW_SUPERUSER_PASSWORD` (Strongly recommended)**: Sets the Langflow administrator password, and determines the Langflow server's default authentication mode. If `LANGFLOW_SUPERUSER_PASSWORD` isn't set, then the Langflow server starts without authentication enabled. For more information, see [Langflow settings](/reference/configuration#langflow-settings).
- * **`LANGFLOW_SECRET_KEY` (Strongly recommended)**: A secret encryption key for internal Langflow operations. It is recommended to [generate your own Langflow secret key](https://docs.langflow.org/api-keys-and-authentication#langflow-secret-key). If not set, Langflow generates a secret key automatically.
+ * **`LANGFLOW_SECRET_KEY` (Strongly recommended)**: A secret encryption key for internal Langflow operations. It is recommended to [generate your own Langflow secret key](https://docs.langflow.org/api-keys-and-authentication#langflow-secret-key). If `LANGFLOW_SECRET_KEY` isn't set, then Langflow generates a secret key automatically.
- * **Model provider credentials**: Provide credentials for your preferred model providers. If not set in the `.env` file, you must configure at least one provider during [application onboarding](#application-onboarding).
+ * **Model provider credentials**: Provide credentials for your preferred model providers. If none of these are set in the `.env` file, you must configure at least one provider during the [application onboarding process](#application-onboarding).
* `OPENAI_API_KEY`
* `ANTHROPIC_API_KEY`
@@ -160,7 +160,7 @@ Both files deploy the same services.
When the containers are running, you can access your OpenRAG services at their addresses.
-5. Access the OpenRAG frontend at `http://localhost:3000`, and then continue with [application onboarding](#application-onboarding).
+5. Access the OpenRAG frontend at `http://localhost:3000`, and then continue with the [application onboarding process](#application-onboarding).
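Taken together, the required and recommended variables above suggest a minimal `.env` sketch for a self-managed deployment. Every value below is a placeholder, and the authoritative variable list lives in the configuration reference:

```shell
# Minimal illustrative .env for a self-managed OpenRAG deployment.
# All values are placeholders, not working credentials.

# Required. Must meet OpenSearch password complexity requirements.
OPENSEARCH_PASSWORD='ExampleStr0ng!Pass'

# Strongly recommended. Without a password, Langflow starts with
# authentication disabled. LANGFLOW_SUPERUSER defaults to admin.
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD='example-langflow-pass'
LANGFLOW_SECRET_KEY='replace-with-a-generated-secret-key'

# At least one model provider credential, unless you plan to configure
# a provider during application onboarding instead.
OPENAI_API_KEY=sk-example-placeholder
```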
diff --git a/docs/docs/get-started/install-options.mdx b/docs/docs/get-started/install-options.mdx
index feca2b70..90f800a5 100644
--- a/docs/docs/get-started/install-options.mdx
+++ b/docs/docs/get-started/install-options.mdx
@@ -8,7 +8,7 @@ Depending on your use case, OpenRAG can assist with service management, or you c
Select the installation method that best fits your needs:
-* **Use the [Terminal User Interface (TUI)](/tui) to manage services**: For guided configuration and simplified service management, install OpenRAG with TUI-managed services.
+* **Use the [Terminal User Interface (TUI)](/tui) to manage services**: For guided configuration and simplified service management, install OpenRAG with TUI-managed services. Use one of the following options:
* [**Automatic installer script**](/install): Run one script to install the required dependencies and OpenRAG.
* [**`uv`**](/install-uv): Install OpenRAG as a dependency of a new or existing Python project.
@@ -16,16 +16,19 @@ Select the installation method that best fits your needs:
* [**Install OpenRAG on Microsoft Windows**](/install-windows): On Windows machines, you must install OpenRAG within the Windows Subsystem for Linux (WSL).
+ :::warning
OpenRAG doesn't support nested virtualization; don't run OpenRAG on a WSL distribution that is inside a Windows VM.
+ :::
* [**Manage your own services**](/docker): You can use Docker or Podman to deploy self-managed OpenRAG services.
-The first time you start OpenRAG, you must complete application onboarding.
+The first time you start OpenRAG, you must complete the application onboarding process.
This is required for all installation methods because it prepares the minimum required configuration for OpenRAG to run.
For TUI-managed services, you must also complete initial setup before you start the OpenRAG services.
For more information, see the instructions for your preferred installation method.
Your OpenRAG configuration is stored in a `.env` file in the OpenRAG installation directory.
-When using TUI-managed services, the TUI prompts you for any missing values during setup and onboarding, and any values detected in a preexisting `.env` file are automatically populated.
-When using self-managed services, you must predefine these values in a `.env` file, as you would for any Docker or Podman deployment.
-For more information, see the instructions for your preferred installation method and [Environment variables](/reference/configuration).
\ No newline at end of file
+When using TUI-managed services, this file is created automatically, or you can provide a pre-populated `.env` file before starting the TUI.
+The TUI prompts you for the required values during setup and onboarding, and any values detected in a preexisting `.env` file are populated automatically.
+When using self-managed services, you must provide a pre-populated `.env` file, as you would for any Docker or Podman deployment.
+For more information, see the instructions for your preferred installation method and the [OpenRAG environment variables reference](/reference/configuration).
\ No newline at end of file
diff --git a/docs/docs/get-started/install-uv.mdx b/docs/docs/get-started/install-uv.mdx
index 1b6c8dfc..9a73546e 100644
--- a/docs/docs/get-started/install-uv.mdx
+++ b/docs/docs/get-started/install-uv.mdx
@@ -12,10 +12,11 @@ import PartialPrereqNoScript from '@site/docs/_partial-prereq-no-script.mdx';
import PartialPrereqWindows from '@site/docs/_partial-prereq-windows.mdx';
import PartialPrereqPython from '@site/docs/_partial-prereq-python.mdx';
import PartialInstallNextSteps from '@site/docs/_partial-install-next-steps.mdx';
+import PartialOpenSearchAuthMode from '@site/docs/_partial-opensearch-auth-mode.mdx';
-For guided configuration and simplified service management, install OpenRAG with services managed by the [Terminal User Interface (TUI)](/tui).
+Use [`uv`](https://docs.astral.sh/uv/getting-started/installation/) to install OpenRAG as a managed or unmanaged dependency in a new or existing Python project.
-You can use [`uv`](https://docs.astral.sh/uv/getting-started/installation/) to install OpenRAG as a managed or unmanaged dependency in a new or existing Python project.
+When you install OpenRAG with `uv`, you use the [Terminal User Interface (TUI)](/tui) to configure and manage your OpenRAG deployment.
For other installation methods, see [Select an installation method](/install-options).
@@ -23,12 +24,12 @@ For other installation methods, see [Select an installation method](/install-opt
-
-
+
+
## Install and start OpenRAG with uv
There are two ways to install OpenRAG with `uv`:
@@ -40,7 +41,7 @@ This is recommended because it adds OpenRAG to your `pyproject.toml` and lockfil
If you encounter errors during installation, see [Troubleshoot OpenRAG](/support/troubleshoot).
-### uv add {#uv-add}
+### Use uv add {#uv-add}
1. Create a new `uv`-managed Python project:
@@ -57,7 +58,7 @@ If you encounter errors during installation, see [Troubleshoot OpenRAG](/support
Because `uv` manages the virtual environment for you, you won't see a `(venv)` prompt.
`uv` commands automatically use the project's virtual environment.
-2. Add OpenRAG to your project:
+3. Add OpenRAG to your project:
* Add the latest version:
@@ -79,13 +80,15 @@ If you encounter errors during installation, see [Troubleshoot OpenRAG](/support
For more options, see [Managing dependencies with `uv`](https://docs.astral.sh/uv/concepts/projects/dependencies/).
-3. Start the OpenRAG TUI:
+4. Optional: If you want to use a pre-populated [OpenRAG `.env` file](/reference/configuration), copy it to this directory before starting OpenRAG.
+
+5. Start the OpenRAG TUI:
```bash
uv run openrag
```
-### uv pip install {#uv-pip-install}
+### Use uv pip install {#uv-pip-install}
1. Activate your virtual environment.
@@ -95,7 +98,9 @@ If you encounter errors during installation, see [Troubleshoot OpenRAG](/support
uv pip install openrag
```
-3. Start the OpenRAG TUI:
+3. Optional: If you want to use a pre-populated [OpenRAG `.env` file](/reference/configuration), copy it to this directory before starting OpenRAG.
+
+4. Start the OpenRAG TUI:
```bash
uv run openrag
@@ -103,12 +108,13 @@ If you encounter errors during installation, see [Troubleshoot OpenRAG](/support
## Set up OpenRAG with the TUI {#setup}
-When you install OpenRAG with `uv`, you manage the OpenRAG services with the Terminal User Interface (TUI).
+When you install OpenRAG with `uv`, you manage the OpenRAG services with the TUI.
The TUI guides you through the initial configuration process before you start the OpenRAG services.
-Your [OpenRAG configuration](/reference/configuration) is stored in a `.env` file that is created automatically in the Python project where you installed OpenRAG.
-If OpenRAG detects an existing `.env` file, the TUI automatically populates those values during setup and onboarding.
-Container definitions are stored in the `docker-compose` files in the same directory.
+Your configuration values are stored in an [OpenRAG `.env` file](/reference/configuration) that is created automatically in the Python project where you installed OpenRAG.
+If OpenRAG detects an existing `.env` file in this directory, then the TUI can populate those values automatically during setup and onboarding.
+
+Container definitions are stored in the `docker-compose` files in the same directory as the OpenRAG `.env` file.
diff --git a/docs/docs/get-started/install-uvx.mdx b/docs/docs/get-started/install-uvx.mdx
index 87af4f2f..a165191f 100644
--- a/docs/docs/get-started/install-uvx.mdx
+++ b/docs/docs/get-started/install-uvx.mdx
@@ -12,15 +12,16 @@ import PartialPrereqNoScript from '@site/docs/_partial-prereq-no-script.mdx';
import PartialPrereqWindows from '@site/docs/_partial-prereq-windows.mdx';
import PartialPrereqPython from '@site/docs/_partial-prereq-python.mdx';
import PartialInstallNextSteps from '@site/docs/_partial-install-next-steps.mdx';
+import PartialOpenSearchAuthMode from '@site/docs/_partial-opensearch-auth-mode.mdx';
-For guided configuration and simplified service management, install OpenRAG with services managed by the [Terminal User Interface (TUI)](/tui).
-
-You can use [`uvx`](https://docs.astral.sh/uv/guides/tools/#running-tools) to invoke OpenRAG outside of a Python project or without modifying your project's dependencies.
+Use [`uvx`](https://docs.astral.sh/uv/guides/tools/#running-tools) to invoke OpenRAG outside of a Python project or without modifying your project's dependencies.
:::tip
The [automatic installer script](/install) also uses `uvx` to install OpenRAG.
:::
+When you install OpenRAG with `uvx`, you use the [Terminal User Interface (TUI)](/tui) to configure and manage your OpenRAG deployment.
+
This installation method is best for testing OpenRAG by running it outside of a Python project.
For other installation methods, see [Select an installation method](/install-options).
@@ -28,12 +29,12 @@ For other installation methods, see [Select an installation method](/install-opt
-
-
+
+
## Install and run OpenRAG with uvx
1. Create a directory to store your OpenRAG configuration files and data, and then change to that directory:
@@ -43,7 +44,7 @@ For other installation methods, see [Select an installation method](/install-opt
cd openrag-workspace
```
-2. Optional: If you want to use a pre-populated [`.env`](/reference/configuration) file for OpenRAG, copy it to this directory before invoking OpenRAG.
+2. Optional: If you want to use a pre-populated [OpenRAG `.env` file](/reference/configuration), copy it to this directory before invoking OpenRAG.
3. Invoke OpenRAG:
@@ -66,15 +67,13 @@ If you encounter errors during installation, see [Troubleshoot OpenRAG](/support
## Set up OpenRAG with the TUI {#setup}
-When you install OpenRAG with `uvx`, you manage the OpenRAG services with the Terminal User Interface (TUI).
+When you install OpenRAG with `uvx`, you manage the OpenRAG services with the TUI.
The TUI guides you through the initial configuration process before you start the OpenRAG services.
-Your [OpenRAG configuration](/reference/configuration) is stored in a `.env` file that is created automatically in the OpenRAG installation directory.
-If OpenRAG detects an existing `.env` file, the TUI automatically populates those values during setup and onboarding.
+Your configuration values are stored in an [OpenRAG `.env` file](/reference/configuration) that is created automatically in the OpenRAG installation directory, which is the directory where you invoked OpenRAG.
+If OpenRAG detects an existing `.env` file in this directory, then the TUI can populate those values automatically during setup and onboarding.
-Container definitions are stored in the `docker-compose` files in the OpenRAG installation directory.
-
-With `uvx`, the OpenRAG `.env` and `docker-compose` files are stored in the directory where you invoked OpenRAG.
+Container definitions are stored in the `docker-compose` files in the same directory as the OpenRAG `.env` file.
diff --git a/docs/docs/get-started/install-windows.mdx b/docs/docs/get-started/install-windows.mdx
index a8d411b7..664f8702 100644
--- a/docs/docs/get-started/install-windows.mdx
+++ b/docs/docs/get-started/install-windows.mdx
@@ -5,11 +5,13 @@ slug: /install-windows
If you're using Windows, you must install OpenRAG within the Windows Subsystem for Linux (WSL).
-## Nested virtualization isn't supported
+:::warning
+Nested virtualization isn't supported.
OpenRAG isn't compatible with nested virtualization, which can cause networking issues.
Don't install OpenRAG on a WSL distribution that is installed inside a Windows VM.
Instead, install OpenRAG on your base OS or a non-nested Linux VM.
+:::
## Install OpenRAG in the WSL
diff --git a/docs/docs/get-started/install.mdx b/docs/docs/get-started/install.mdx
index 1c09e1ba..d044e48e 100644
--- a/docs/docs/get-started/install.mdx
+++ b/docs/docs/get-started/install.mdx
@@ -11,14 +11,16 @@ import PartialPrereqCommon from '@site/docs/_partial-prereq-common.mdx';
import PartialPrereqWindows from '@site/docs/_partial-prereq-windows.mdx';
import PartialPrereqPython from '@site/docs/_partial-prereq-python.mdx';
import PartialInstallNextSteps from '@site/docs/_partial-install-next-steps.mdx';
+import PartialOpenSearchAuthMode from '@site/docs/_partial-opensearch-auth-mode.mdx';
:::tip
-For a fully guided installation and preview of OpenRAG's core features, try the [quickstart](/quickstart).
+To quickly install and test OpenRAG's core features, try the [quickstart](/quickstart).
:::
-For guided configuration and simplified service management, install OpenRAG with services managed by the [Terminal User Interface (TUI)](/tui).
-
The installer script installs `uv`, Docker or Podman, Docker Compose, and OpenRAG.
+Then, it uses `uvx` to install and run OpenRAG.
+
+When you install OpenRAG with the installer script, you use the [Terminal User Interface (TUI)](/tui) to configure and manage your OpenRAG deployment.
This installation method is best for testing OpenRAG by running it outside of a Python project.
For other installation methods, see [Select an installation method](/install-options).
@@ -27,10 +29,10 @@ For other installation methods, see [Select an installation method](/install-opt
-
-
+
+
## Run the installer script {#install}
1. Create a directory to store your OpenRAG configuration files and data, and then change to that directory:
@@ -46,21 +48,13 @@ For other installation methods, see [Select an installation method](/install-opt
curl -fsSL https://docs.openr.ag/files/run_openrag_with_prereqs.sh | bash
```
- :::tip
- You can also manually [download the OpenRAG install script](https://docs.openr.ag/files/run_openrag_with_prereqs.sh), move it to your OpenRAG directory, and then run it:
-
- ```bash
- bash run_openrag_with_prereqs.sh
- ```
- :::
-
The installer script installs OpenRAG with [`uvx`](https://docs.astral.sh/uv/guides/tools/#running-tools) in the directory where you run the script.
3. Wait while the installer script prepares your environment and installs OpenRAG.
You might be prompted to install certain dependencies if they aren't already present in your environment.
The entire process can take a few minutes.
-Once the environment is ready, the OpenRAG Terminal User Interface (TUI) starts.
+Once the environment is ready, the OpenRAG TUI starts.

@@ -72,15 +66,13 @@ If you encounter errors during installation, see [Troubleshoot OpenRAG](/support
## Set up OpenRAG with the TUI {#setup}
-When you install OpenRAG with the installer script, you manage the OpenRAG services with the Terminal User Interface (TUI).
+When you install OpenRAG with the installer script, you manage the OpenRAG services with the TUI.
The TUI guides you through the initial configuration process before you start the OpenRAG services.
-Your [OpenRAG configuration](/reference/configuration) is stored in a `.env` file that is created automatically in the OpenRAG installation directory.
-If OpenRAG detects an existing `.env` file, the TUI automatically populates those values during setup and onboarding.
+Your configuration values are stored in an [OpenRAG `.env` file](/reference/configuration) that is created automatically in the OpenRAG installation directory, which is the directory where you ran the installer script.
+If OpenRAG detects an existing `.env` file in this directory, then the TUI can populate those values automatically during setup and onboarding.
-Container definitions are stored in the `docker-compose` files in the OpenRAG installation directory.
-
-Because the installer script uses `uvx`, the OpenRAG `.env` and `docker-compose` files are stored in the directory where you ran the installer script.
+Container definitions are stored in the `docker-compose` files in the same directory as the OpenRAG `.env` file.
diff --git a/docs/docs/get-started/manage-services.mdx b/docs/docs/get-started/manage-services.mdx
index 09cb8675..4e851a3a 100644
--- a/docs/docs/get-started/manage-services.mdx
+++ b/docs/docs/get-started/manage-services.mdx
@@ -18,70 +18,99 @@ For [self-managed deployments](/docker), run Docker or Podman commands to manage
## Monitor services
+
+
+
* **TUI Status menu**: In the **Status** menu (3), you can access streaming logs for all OpenRAG services.
Select the service you want to view, and then press l.
To copy the logs, click **Copy to Clipboard**.
* **TUI Diagnostics menu**: The TUI's **Diagnostics** menu (4) provides health monitoring for your container runtimes and monitoring of your OpenSearch instance.
-* **Self-managed containers**: Get container logs with [`docker compose logs`](https://docs.docker.com/reference/cli/docker/compose/logs/) or [`podman logs`](https://docs.podman.io/en/latest/markdown/podman-logs.1.html).
+* **Docling**: See [Stop, start, and inspect native services](#start-native-services).
+
+
+
+
+* **Containers**: Get container logs with [`docker compose logs`](https://docs.docker.com/reference/cli/docker/compose/logs/) or [`podman logs`](https://docs.podman.io/en/latest/markdown/podman-logs.1.html).
* **Docling**: See [Stop, start, and inspect native services](#start-native-services).
+
+
+
## Stop and start containers
-* **TUI**: In the TUI's **Status** menu (3), click **Stop Services** to stop all OpenRAG container-based services.
+
+
- Click **Start All Services** to restart the OpenRAG containers.
- This function triggers the following processes:
+In the TUI's **Status** menu (3), click **Stop Services** to stop all OpenRAG container-based services.
+Then, click **Start All Services** to restart the OpenRAG containers.
- 1. OpenRAG automatically detects your container runtime, and then checks if your machine has compatible GPU support by checking for `CUDA`, `NVIDIA_SMI`, and Docker/Podman runtime support. This check determines which Docker Compose file OpenRAG uses because there are separate Docker Compose files for GPU and CPU deployments.
+When you click **Start All Services**, OpenRAG does the following:
- 2. OpenRAG pulls the OpenRAG container images with `docker compose pull` if any images are missing.
+1. OpenRAG automatically detects your container runtime, and then checks whether your machine has compatible GPU support by looking for `CUDA`, `NVIDIA_SMI`, and Docker/Podman runtime support. This check determines which Docker Compose file OpenRAG uses because there are separate Docker Compose files for GPU and CPU deployments.
- 3. OpenRAG deploys the containers with `docker compose up -d`.
+2. OpenRAG pulls the OpenRAG container images with `docker compose pull` if any images are missing.
-* **Self-managed containers**: Use [`docker compose down`](https://docs.docker.com/reference/cli/docker/compose/down/) and [`docker compose up -d`](https://docs.docker.com/reference/cli/docker/compose/up/).
+3. OpenRAG deploys the containers with `docker compose up -d`.
- To stop or start individual containers, use targeted commands like `docker stop CONTAINER_ID` and `docker start CONTAINER_ID`.
+
+
+
+Use [`docker compose down`](https://docs.docker.com/reference/cli/docker/compose/down/) and [`docker compose up -d`](https://docs.docker.com/reference/cli/docker/compose/up/).
+
+To stop or start individual containers, use targeted commands like `docker stop CONTAINER_ID` and `docker start CONTAINER_ID`.
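+
+For example, assuming your OpenRAG compose file is in the current directory, a full stop-and-start cycle looks like this:
+
+```bash
+# Stop and remove the OpenRAG containers (volumes are preserved).
+docker compose down
+
+# Recreate and start the containers in the background.
+docker compose up -d
+```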
+
+
+
## Stop, start, and inspect native services (Docling) {#start-native-services}
A _native service_ in OpenRAG is a service that runs locally on your machine, not within a container. For example, the `docling serve` process is an OpenRAG native service because this document processing service runs on your local machine, separate from the OpenRAG containers.
-* **TUI**: From the TUI's **Status** menu (3), click **Native Services** to do the following:
+
+
- * View the service's status, port, and process ID (PID).
- * Stop, start, and restart native services.
+From the TUI's **Status** menu (3), click **Native Services** to do the following:
-* **Self-managed services**: Because the Docling service doesn't run in a container, you must start and stop it manually on the host machine:
+* View the service's status, port, and process ID (PID).
+* Stop, start, and restart native services.
- * Stop `docling serve`:
+
+
- ```bash
- uv run python scripts/docling_ctl.py stop
- ```
+Because the Docling service doesn't run in a container, you must start and stop it manually on the host machine:
- * Start `docling serve`:
+* Stop `docling serve`:
- ```bash
- uv run python scripts/docling_ctl.py start --port 5001
- ```
+ ```bash
+ uv run python scripts/docling_ctl.py stop
+ ```
- * Check that `docling serve` is running:
+* Start `docling serve`:
- ```bash
- uv run python scripts/docling_ctl.py status
- ```
+ ```bash
+ uv run python scripts/docling_ctl.py start --port 5001
+ ```
- If `docling serve` is running, the output includes the status, address, and process ID (PID):
+* Check that `docling serve` is running:
- ```text
- Status: running
- Endpoint: http://127.0.0.1:5001
- Docs: http://127.0.0.1:5001/docs
- PID: 27746
- ```
+ ```bash
+ uv run python scripts/docling_ctl.py status
+ ```
+
+ If `docling serve` is running, the output includes the status, address, and process ID (PID):
+
+ ```text
+ Status: running
+ Endpoint: http://127.0.0.1:5001
+ Docs: http://127.0.0.1:5001/docs
+ PID: 27746
+ ```
+
+
+
## Upgrade services
@@ -130,7 +159,7 @@ These are destructive operations that reset your OpenRAG deployment to an initia
Destroyed containers and deleted data are lost and cannot be recovered after running this operation.
:::
-1. Destroy the containers, volumes, and local images, and then remove (prune) any additional Docker objects:
+1. Destroy the containers, volumes, and local images, and then remove (prune) any additional container objects:
@@ -149,7 +178,7 @@ For more information, see [Deploy OpenRAG with self-managed services](/docker).
-5. Launch the OpenRAG app, and then repeat [application onboarding](/docker#application-onboarding).
+5. Launch the OpenRAG app, and then repeat the [application onboarding process](/docker#application-onboarding).
## See also
diff --git a/docs/docs/get-started/quickstart.mdx b/docs/docs/get-started/quickstart.mdx
index 0e427fcb..fee479a2 100644
--- a/docs/docs/get-started/quickstart.mdx
+++ b/docs/docs/get-started/quickstart.mdx
@@ -14,13 +14,13 @@ Use this quickstart to install OpenRAG, and then try some of OpenRAG's core feat
## Prerequisites
-
+
* Get an [OpenAI API key](https://platform.openai.com/api-keys).
This quickstart uses OpenAI for simplicity.
For other providers, see the other [installation methods](/install-options).
-
+
## Install OpenRAG
@@ -54,9 +54,6 @@ The script installs OpenRAG dependencies, including Docker or Podman, and then i
5. Leave the **OpenAI API key** field empty.
- Your passwords are saved in the `.env` file that is used to start OpenRAG.
- You can find this file in your OpenRAG installation directory.
-
6. Click **Save Configuration**, and then click **Start All Services**.
This process can take some time while OpenRAG pulls and runs the container images.
@@ -67,7 +64,7 @@ The script installs OpenRAG dependencies, including Docker or Podman, and then i
Command completed successfully
```
- Your [OpenRAG configuration](/reference/configuration) is stored in a `.env` file that is created automatically in the directory where you ran the installer script.
+ Your OpenRAG configuration and passwords are stored in an [OpenRAG `.env` file](/reference/configuration) that is created automatically in your OpenRAG installation directory, which is the directory where you ran the installer script.
Container definitions are stored in the `docker-compose` files in the same directory.
7. Under [**Native Services**](/manage-services), click **Start** to start the Docling service.
diff --git a/docs/docs/get-started/reinstall.mdx b/docs/docs/get-started/reinstall.mdx
index c953fbef..d120f7cd 100644
--- a/docs/docs/get-started/reinstall.mdx
+++ b/docs/docs/get-started/reinstall.mdx
@@ -9,7 +9,7 @@ import PartialDockerStopAll from '@site/docs/_partial-docker-stop-all.mdx';
import PartialDockerRemoveAndCleanupSteps from '@site/docs/_partial-docker-remove-and-cleanup-steps.mdx';
import PartialFactorResetWarning from '@site/docs/_partial-factory-reset-warning.mdx';
-You can reset your OpenRAG deployment to its initial state by recreating the containers and deleting accessory data like the `.env` file and ingested documents.
+You can reset your OpenRAG deployment to its initial state by recreating the containers and deleting accessory data, such as the `.env` file and ingested documents.
:::warning
These are destructive operations that reset your OpenRAG deployment to an initial state.
@@ -37,13 +37,15 @@ For a completely fresh installation, delete all of this data.
4. Restart the TUI with `uv run openrag` or `uvx openrag`.
5. Repeat the [setup process](/install#setup) to configure OpenRAG and restart all services.
-Then, launch the OpenRAG app and repeat [application onboarding](/install#application-onboarding).
+Then, launch the OpenRAG app and repeat the [application onboarding process](/install#application-onboarding).
If OpenRAG detects a `.env` file during setup and onboarding, it automatically populates any OpenRAG passwords, OAuth credentials, and onboarding configuration set in that file.
-## Reinstall with Docker Compose or Podman Compose
+## Reinstall self-managed containers with `docker compose` or `podman compose`
-1. Destroy the containers, volumes, and local images, and then remove (prune) any additional Podman objects:
+Use these steps to reinstall OpenRAG containers with streamlined `docker compose` or `podman compose` commands:
+
+1. Destroy the containers, volumes, and local images, and then remove (prune) any additional container objects:
@@ -62,11 +64,13 @@ For more information, see [Deploy OpenRAG with self-managed services](/docker).
-5. Launch the OpenRAG app, and then repeat [application onboarding](/docker#application-onboarding).
+5. Launch the OpenRAG app, and then repeat the [application onboarding process](/docker#application-onboarding).
-## Step-by-step reinstallation with Docker or Podman
+## Reinstall self-managed containers with discrete `docker` or `podman` commands
-Use these commands for step-by-step container removal and cleanup:
+Use these steps to remove and clean up OpenRAG containers with discrete `docker` or `podman` commands.
+
+If you want to reinstall only one container, specify that container's name in the commands instead of running them against all containers.
1. Stop all running containers:
@@ -82,4 +86,5 @@ Use these commands for step-by-step container removal and cleanup:
* The contents of the `./opensearch-data` directory
* The `conversations.json` file
-8. [Redeploy OpenRAG](/docker).
\ No newline at end of file
+8. If you removed all OpenRAG containers, [redeploy OpenRAG](/docker).
+If you removed only one container, redeploy that container with the appropriate `docker run` or `podman run` command.
\ No newline at end of file
diff --git a/docs/docs/get-started/uninstall.mdx b/docs/docs/get-started/uninstall.mdx
index 7c3ecedf..3ffd3670 100644
--- a/docs/docs/get-started/uninstall.mdx
+++ b/docs/docs/get-started/uninstall.mdx
@@ -19,11 +19,13 @@ If you used [`uv`](/install-uv) to install OpenRAG, run `uv remove openrag` in y
## Uninstall self-managed deployments
-For self-managed services, destroy the containers, prune any additional Docker objects, shut down the Docling service, and delete any remaining OpenRAG files.
+For self-managed services, destroy the containers, prune any additional container objects, delete any remaining OpenRAG files, and then shut down the Docling service.
-### Uninstall with Docker Compose or Podman Compose
+### Uninstall with `docker compose` or `podman compose`
-1. Destroy the containers, volumes, and local images, and then remove (prune) any additional Docker objects:
+Use these steps to uninstall a self-managed OpenRAG deployment with streamlined `docker compose` or `podman compose` commands:
+
+1. Destroy the containers, volumes, and local images, and then remove (prune) any additional container objects:
@@ -41,9 +43,9 @@ For self-managed services, destroy the containers, prune any additional Docker o
uv run python scripts/docling_ctl.py stop
```
-### Step-by-step removal and cleanup with Docker or Podman
+### Uninstall with discrete `docker` or `podman` commands
-Use these commands for step-by-step container removal and cleanup:
+Use these steps to uninstall a self-managed OpenRAG deployment with discrete `docker` or `podman` commands:
1. Stop all running containers:
diff --git a/docs/docs/get-started/upgrade.mdx b/docs/docs/get-started/upgrade.mdx
index dd409e65..22808116 100644
--- a/docs/docs/get-started/upgrade.mdx
+++ b/docs/docs/get-started/upgrade.mdx
@@ -19,7 +19,7 @@ To upgrade OpenRAG, you need to upgrade the OpenRAG Python package, and then upg
Upgrading the Python package also upgrades Docling by bumping the dependency in `pyproject.toml`.
-This is a two part process because upgrading the OpenRAG Python package updates the Terminal User Interface (TUI) and Python code, but the container versions are controlled by environment variables in your `.env` file.
+This is a two-part process because upgrading the OpenRAG Python package updates the Terminal User Interface (TUI) and Python code, but the container versions are controlled by environment variables in your [OpenRAG `.env` file](/reference/configuration).
1. To check for updates, open the TUI's **Status** menu (3), and then click **Upgrade**.
@@ -114,7 +114,7 @@ The commands to upgrade the package depend on how you installed OpenRAG.
When you start services after upgrading the Python package, OpenRAG runs `docker compose pull` to get the appropriate container images matching the version specified in your OpenRAG `.env` file. Then, it recreates the containers with the new images using `docker compose up -d --force-recreate`.
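+
+   For self-managed services, you can run the equivalent commands yourself:
+
+   ```bash
+   # Pull the container images that match the version in your .env file.
+   docker compose pull
+
+   # Recreate the containers with the new images.
+   docker compose up -d --force-recreate
+   ```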
:::tip Pin container versions
- In the `.env` file, the `OPENRAG_VERSION` [environment variable](/reference/configuration#system-settings) is set to `latest` by default, which it pulls the `latest` available container images.
+ In the `.env` file, the `OPENRAG_VERSION` [environment variable](/reference/configuration#system-settings) is set to `latest` by default, which pulls the `latest` available container images.
To pin a specific container image version, you can set `OPENRAG_VERSION` to the desired container image version, such as `OPENRAG_VERSION=0.1.33`.
However, when you upgrade the Python package, OpenRAG automatically attempts to keep the `OPENRAG_VERSION` synchronized with the Python package version.
diff --git a/docs/docs/reference/configuration.mdx b/docs/docs/reference/configuration.mdx
index 752e2677..9a36ebaa 100644
--- a/docs/docs/reference/configuration.mdx
+++ b/docs/docs/reference/configuration.mdx
@@ -7,7 +7,7 @@ import PartialDockerComposeUp from '@site/docs/_partial-docker-compose-up.mdx';
OpenRAG recognizes environment variables from the following sources:
-* [Environment variables](#configure-environment-variables): Values set in the `.env` file.
+* [Environment variables](#configure-environment-variables): Values set in the `.env` file in the OpenRAG installation directory.
* [Langflow runtime overrides](#langflow-runtime-overrides): Langflow components can set environment variables at runtime.
* [Default or fallback values](#default-values-and-fallbacks): These values are default or fallback values if OpenRAG doesn't find a value.
@@ -54,7 +54,7 @@ For example, with self-managed services, do the following:
4. Restart the Docling service.
-5. Launch the OpenRAG app, and then repeat [application onboarding](/install#application-onboarding). The values in your `.env` file are automatically populated.
+5. Launch the OpenRAG app, and then repeat the [application onboarding process](/install#application-onboarding). The values in your `.env` file are automatically populated.
## Supported environment variables
@@ -65,12 +65,12 @@ All OpenRAG configuration can be controlled through environment variables.
Configure which models and providers OpenRAG uses to generate text and embeddings.
You only need to provide credentials for the providers you are using in OpenRAG.
-These variables are initially set during [application onboarding](/install#application-onboarding).
+These variables are initially set during the [application onboarding process](/install#application-onboarding).
Some of these variables are immutable and can only be changed by redeploying OpenRAG, as explained in [Set environment variables](#set-environment-variables).
| Variable | Default | Description |
|----------|---------|-------------|
-| `EMBEDDING_MODEL` | `text-embedding-3-small` | Embedding model for generating vector embeddings for documents in the knowledge base and similarity search queries. Can be changed after application onboarding. Accepts one or more models. |
+| `EMBEDDING_MODEL` | `text-embedding-3-small` | Embedding model for generating vector embeddings for documents in the knowledge base and similarity search queries. Can be changed after the application onboarding process. Accepts one or more models. |
| `LLM_MODEL` | `gpt-4o-mini` | Language model for language processing and text generation in the **Chat** feature. |
| `MODEL_PROVIDER` | `openai` | Model provider, as one of `openai`, `watsonx`, `ollama`, or `anthropic`. |
| `ANTHROPIC_API_KEY` | Not set | API key for the Anthropic language model provider. |
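For example, the documented defaults in this table correspond to the following `.env` entries (a sketch only; add the API key variable for whichever provider you actually use):

```shell
# Example .env model settings using the documented default values.
MODEL_PROVIDER=openai
LLM_MODEL=gpt-4o-mini
EMBEDDING_MODEL=text-embedding-3-small
```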
@@ -113,9 +113,9 @@ For better security, it is recommended to set `LANGFLOW_SUPERUSER_PASSWORD` so t
| `LANGFLOW_NEW_USER_IS_ACTIVE` | Determined by `LANGFLOW_SUPERUSER_PASSWORD` | Whether new [Langflow user accounts are active by default](https://docs.langflow.org/api-keys-and-authentication#langflow-new-user-is-active). If `LANGFLOW_SUPERUSER_PASSWORD` isn't set, then `LANGFLOW_NEW_USER_IS_ACTIVE` is `True` and new user accounts are active by default. If `LANGFLOW_SUPERUSER_PASSWORD` is set, then `LANGFLOW_NEW_USER_IS_ACTIVE` is `False` and new user accounts are inactive by default. |
| `LANGFLOW_PUBLIC_URL` | `http://localhost:7860` | Public URL for the Langflow instance. Forms the base URL for Langflow API calls and other interfaces with your OpenRAG Langflow instance. |
| `LANGFLOW_KEY` | Automatically generated | A Langflow API key to run flows with Langflow API calls. Because Langflow API keys are server-specific, allow OpenRAG to generate this key initially. You can create additional Langflow API keys after deploying OpenRAG. |
-| `LANGFLOW_SECRET_KEY` | Automatically generated | Secret encryption key for Langflow internal operations. It is recommended to [generate your own Langflow secret key](https://docs.langflow.org/api-keys-and-authentication#langflow-secret-key) for this variable. If not set, Langflow generates a secret key automatically. |
+| `LANGFLOW_SECRET_KEY` | Automatically generated | Secret encryption key for Langflow internal operations. It is recommended to [generate your own Langflow secret key](https://docs.langflow.org/api-keys-and-authentication#langflow-secret-key) for this variable. If this variable isn't set, then Langflow generates a secret key automatically. |
| `LANGFLOW_SUPERUSER` | `admin` | Username for the Langflow administrator user. |
-| `LANGFLOW_SUPERUSER_PASSWORD` | Not set | Langflow administrator password. If not set, the Langflow server starts _without_ authentication enabled. It is recommended to set `LANGFLOW_SUPERUSER_PASSWORD` so the [Langflow server starts with authentication enabled](https://docs.langflow.org/api-keys-and-authentication#start-a-langflow-server-with-authentication-enabled). |
+| `LANGFLOW_SUPERUSER_PASSWORD` | Not set | Langflow administrator password. If this variable isn't set, then the Langflow server starts _without_ authentication enabled. It is recommended to set `LANGFLOW_SUPERUSER_PASSWORD` so the [Langflow server starts with authentication enabled](https://docs.langflow.org/api-keys-and-authentication#start-a-langflow-server-with-authentication-enabled). |
| `LANGFLOW_URL` | `http://localhost:7860` | URL for the Langflow instance. |
| `LANGFLOW_CHAT_FLOW_ID`, `LANGFLOW_INGEST_FLOW_ID`, `NUDGES_FLOW_ID` | Built-in flow IDs | These variables are set automatically to the IDs of the chat, ingestion, and nudges [flows](/agents). The default values are found in [`.env.example`](https://github.com/langflow-ai/openrag/blob/main/.env.example). Only change these values if you want to replace a built-in flow with your own custom flow. The flow JSON must be present in your version of the OpenRAG codebase. For example, if you [deploy self-managed services](/docker), you can add the flow JSON to your local clone of the OpenRAG repository before deploying OpenRAG. |
| `SYSTEM_PROMPT` | `You are a helpful AI assistant with access to a knowledge base. Answer questions based on the provided context.` | System prompt instructions for the agent driving the **Chat** flow. |
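For example, one way to produce a value for `LANGFLOW_SECRET_KEY` (a sketch; any sufficiently long random string works) is with Python's `secrets` module:

```shell
# Generate a random URL-safe string suitable for LANGFLOW_SECRET_KEY
python3 -c "import secrets; print(secrets.token_urlsafe(32))"
```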
@@ -129,7 +129,7 @@ Configure [OAuth providers](/ingestion#oauth-ingestion) and external service int
| `AWS_ACCESS_KEY_ID`
`AWS_SECRET_ACCESS_KEY` | Not set | Enable access to AWS S3 with an [AWS OAuth app](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-applications.html) integration. |
| `GOOGLE_OAUTH_CLIENT_ID`
`GOOGLE_OAUTH_CLIENT_SECRET` | Not set | Enable the [Google OAuth client](https://developers.google.com/identity/protocols/oauth2) integration. You can generate these values in the [Google Cloud Console](https://console.cloud.google.com/apis/credentials). |
| `MICROSOFT_GRAPH_OAUTH_CLIENT_ID`
`MICROSOFT_GRAPH_OAUTH_CLIENT_SECRET` | Not set | Enable the [Microsoft Graph OAuth client](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/graph-oauth) integration by providing [Azure application registration credentials for SharePoint and OneDrive](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/app-registration?view=odsp-graph-online). |
-| `WEBHOOK_BASE_URL` | Not set | Base URL for OAuth connector webhook endpoints. If not set, a default base URL is used. |
+| `WEBHOOK_BASE_URL` | Not set | Base URL for OAuth connector webhook endpoints. If this variable isn't set, then a default base URL is used. |
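For example, enabling the Google OAuth integration means setting both of its variables from the table above in your `.env` file (the values below are placeholders; generate real credentials in the Google Cloud Console):

```shell
# Example .env entries for the Google OAuth integration (placeholder values only)
GOOGLE_OAUTH_CLIENT_ID=example-client-id.apps.googleusercontent.com
GOOGLE_OAUTH_CLIENT_SECRET=example-client-secret
```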
### OpenSearch settings
@@ -151,7 +151,7 @@ Configure general system components, session management, and logging.
| `LANGFLOW_KEY_RETRIES` | `15` | Number of retries for Langflow key generation. |
| `LANGFLOW_KEY_RETRY_DELAY` | `2.0` | Delay between retries in seconds. |
| `LANGFLOW_VERSION` | `OPENRAG_VERSION` | Langflow Docker image version. By default, OpenRAG uses the `OPENRAG_VERSION` for the Langflow Docker image version. |
-| `LOG_FORMAT` | Not set | Set to `json` to enabled JSON-formatted log output. If not set, the default format is used. |
+| `LOG_FORMAT` | Not set | Set to `json` to enable JSON-formatted log output. If this variable isn't set, then the default logging format is used. |
| `LOG_LEVEL` | `INFO` | Logging level. Can be one of `DEBUG`, `INFO`, `WARNING`, or `ERROR`. `DEBUG` provides the most detailed logs but can impact performance. |
| `MAX_WORKERS` | `1` | Maximum number of workers for document processing. |
| `OPENRAG_VERSION` | `latest` | The version of the OpenRAG Docker images to run. For more information, see [Upgrade OpenRAG](/upgrade) |
diff --git a/docs/docs/support/troubleshoot.mdx b/docs/docs/support/troubleshoot.mdx
index 4870b6eb..958fa102 100644
--- a/docs/docs/support/troubleshoot.mdx
+++ b/docs/docs/support/troubleshoot.mdx
@@ -7,8 +7,9 @@ This page provides troubleshooting advice for issues you might encounter when us
## OpenSearch fails to start
-Check that `OPENSEARCH_PASSWORD` set in [Environment variables](/reference/configuration) meets requirements.
-The password must contain at least 8 characters, and must contain at least one uppercase letter, one lowercase letter, one digit, and one special character that is strong.
+Check that the value of the `OPENSEARCH_PASSWORD` [environment variable](/reference/configuration) meets the [OpenSearch password complexity requirements](https://docs.opensearch.org/latest/security/configuration/demo-configuration/#setting-up-a-custom-admin-password).
+
+If you need to change the password, you must [reset the OpenRAG services](/manage-services).
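As a quick local sanity check before setting the variable, you can test a candidate password against the demo-configuration rules (at least 8 characters, with an uppercase letter, a lowercase letter, a digit, and a special character). The `check_password` function below is a hypothetical helper, not an OpenRAG or OpenSearch command:

```shell
# Hypothetical local check for OPENSEARCH_PASSWORD complexity;
# not an OpenRAG or OpenSearch command.
check_password() {
  p="$1"
  # At least 8 characters
  [ "${#p}" -ge 8 ] || { echo "too short"; return 1; }
  # One character from each required class
  case "$p" in *[A-Z]*) ;; *) echo "missing uppercase"; return 1;; esac
  case "$p" in *[a-z]*) ;; *) echo "missing lowercase"; return 1;; esac
  case "$p" in *[0-9]*) ;; *) echo "missing digit"; return 1;; esac
  case "$p" in *[!a-zA-Z0-9]*) ;; *) echo "missing special character"; return 1;; esac
  echo "ok"
}
```

For example, `check_password 'MyS3cure!Pass'` prints `ok`, while `check_password 'short'` prints `too short`.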
## OpenRAG fails to start from the TUI with operation not supported
@@ -30,7 +31,8 @@ Replace `VERSION` with your installed Python version, such as `3.13`.
## Langflow connection issues
-Verify the `LANGFLOW_SUPERUSER` credentials set in [Environment variables](/reference/configuration) are correct.
+Verify that the value of the `LANGFLOW_SUPERUSER` environment variable is correct.
+For more information about this variable and how this variable controls Langflow access, see [Langflow settings](/reference/configuration#langflow-settings).
## Container out of memory errors
@@ -50,7 +52,7 @@ podman machine start
## Port conflicts
-With the default [configuration](/reference/configuration), OpenRAG requires the following ports to be available on the host machine:
+With the default [environment variable](/reference/configuration) values, OpenRAG requires the following ports to be available on the host machine:
* 3000: Langflow application
* 5001: Docling local ingestion service