peer review pt 2

This commit is contained in:
April M 2025-12-02 07:36:52 -08:00
parent 3323492573
commit dc7588bb7e
6 changed files with 100 additions and 77 deletions

@ -1,28 +1,28 @@
import Icon from "@site/src/components/icon/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialOllama from '@site/docs/_partial-ollama.mdx';
## Application onboarding

The first time you start OpenRAG, regardless of how you installed it, you must complete application onboarding.

Onboarding sets several configuration values for OpenRAG.
Some of these variables, such as the embedding models, can be changed seamlessly after onboarding.
Others are immutable and require you to destroy and recreate the OpenRAG containers.
For more information, see [Environment variables](/reference/configuration).

You can use different providers for your language model and embedding model, such as Anthropic for the language model and OpenAI for the embeddings model.
Additionally, you can set multiple embedding models.

You only need to complete onboarding for your preferred providers.
<Tabs groupId="Provider">
<TabItem value="Anthropic" label="Anthropic" default>

:::info
Anthropic doesn't provide embedding models. If you select Anthropic for your language model, you must select a different provider for embeddings.
:::

1. Enable **Use environment Anthropic API key** to automatically use your key from the `.env` file.
Alternatively, paste an Anthropic API key into the field.
2. Under **Advanced settings**, select your **Language Model**.
@ -34,6 +34,7 @@ Choose one LLM provider and complete these steps:
</TabItem>
<TabItem value="OpenAI" label="OpenAI">

1. Enable **Get API key from environment variable** to automatically enter your key from the TUI-generated `.env` file.
Alternatively, paste an OpenAI API key into the field.
2. Under **Advanced settings**, select your **Language Model**.
@ -45,6 +46,7 @@ Choose one LLM provider and complete these steps:
</TabItem>
<TabItem value="IBM watsonx.ai" label="IBM watsonx.ai">

1. Complete the fields for **watsonx.ai API Endpoint**, **IBM Project ID**, and **IBM API key**.
These values are found in your IBM watsonx deployment.
2. Under **Advanced settings**, select your **Language Model**.
@ -56,9 +58,11 @@ Choose one LLM provider and complete these steps:
</TabItem>
<TabItem value="Ollama" label="Ollama">

:::info
Ollama isn't installed with OpenRAG. To install Ollama, see the [Ollama documentation](https://docs.ollama.com/).
:::

1. To connect to an Ollama server running on your local machine, enter your Ollama server's base URL address.
The default Ollama server address is `http://localhost:11434`.
OpenRAG connects to the Ollama server and populates the model lists with the server's available models.
@ -70,5 +74,6 @@ Choose one LLM provider and complete these steps:
3. Click **Complete**.
4. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
5. Continue with the [Quickstart](/quickstart).

</TabItem>
</Tabs>


@ -75,19 +75,19 @@ If needed, you can use [filters](/knowledge-filters) to separate documents that
### Set the embedding model and dimensions {#set-the-embedding-model-and-dimensions}

When you [install OpenRAG](/install), you select at least one embedding model during [application onboarding](/install#application-onboarding).
OpenRAG automatically detects and configures the appropriate vector dimensions for your selected embedding model, ensuring optimal search performance and compatibility.

In the OpenRAG repository, you can find the complete list of supported models in [`models_service.py`](https://github.com/langflow-ai/openrag/blob/main/src/services/models_service.py) and the corresponding vector dimensions in [`settings.py`](https://github.com/langflow-ai/openrag/blob/main/src/config/settings.py).

During application onboarding, you can select from the supported models.
The default embedding dimension is `1536`, and the default model is the OpenAI `text-embedding-3-small`.
If you want to use an unsupported model, you must manually set the model in your [OpenRAG configuration](/reference/configuration).

If you use an unsupported embedding model that doesn't have defined dimensions in `settings.py`, then OpenRAG falls back to the default dimensions (1536) and logs a warning. OpenRAG's OpenSearch instance and flows continue to work, but [similarity search](https://www.ibm.com/think/topics/vector-search) quality can be affected if the actual model dimensions aren't 1536.

To change the embedding model after onboarding, modify the embedding model setting in the OpenRAG **Settings** page or in your [OpenRAG configuration](/reference/configuration).
This automatically updates all relevant [OpenRAG flows](/agents) to use the new embedding model configuration.
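The fallback behavior described above can be sketched as follows. This is a minimal illustration, not OpenRAG's actual code: the `KNOWN_DIMENSIONS` table here is a hypothetical subset, and the real model-to-dimension mapping lives in `settings.py`.

```python
import logging

logger = logging.getLogger("openrag.sketch")

# Hypothetical subset of the model-to-dimension mapping kept in settings.py.
KNOWN_DIMENSIONS = {
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
}

DEFAULT_DIMENSION = 1536  # Used when a model has no defined dimensions.

def resolve_dimension(model: str) -> int:
    """Return the vector dimension for a model, falling back to the default."""
    if model in KNOWN_DIMENSIONS:
        return KNOWN_DIMENSIONS[model]
    # Unsupported model: fall back to the default and log a warning,
    # as described above. Search still works, but quality can suffer
    # if the model's true dimension isn't 1536.
    logger.warning("No dimensions defined for %s; using %d", model, DEFAULT_DIMENSION)
    return DEFAULT_DIMENSION
```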
### Set Docling parameters
@ -122,10 +122,14 @@ OpenRAG warns you if `docling serve` isn't running.
You can [start and stop OpenRAG services](/install#tui-container-management) from the TUI main menu with **Start Native Services** or **Stop Native Services**.
:::

* **Embedding model**: Select the model to use to generate vector embeddings for your documents.
This is initially set during installation.
The recommended way to change this setting is in the OpenRAG **Settings** page or your [OpenRAG configuration](/reference/configuration).
This automatically updates all relevant [OpenRAG flows](/agents) to use the new embedding model configuration.
If you uploaded documents prior to changing the embedding model, you can [create filters](/knowledge-filters) to separate documents embedded with different models, or you can reupload all documents to regenerate embeddings with the new model.
If you use multiple embedding models, similarity search in the **Chat** can take longer because it searches each model's embeddings separately.
* **Chunk size**: Set the number of characters for each text chunk when breaking down a file.
Larger chunks yield more context per chunk, but can include irrelevant information. Smaller chunks yield more precise semantic search, but can lack context.
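The interaction between chunk size and overlap can be illustrated with a simplified character-based sketch. This is not OpenRAG's actual ingestion code; it only shows how overlapping windows are produced from the two settings.

```python
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200) -> list[str]:
    """Split text into fixed-size character chunks with overlapping edges."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    # Each new chunk starts (chunk_size - chunk_overlap) characters after the last,
    # so the tail of one chunk is repeated at the head of the next.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# 2500 characters with a step of 800 (1000 - 200) produce 4 chunks.
chunks = chunk_text("a" * 2500, chunk_size=1000, chunk_overlap=200)
```

The overlap keeps sentences that straddle a chunk boundary searchable in at least one complete chunk, at the cost of some duplicated text.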


@ -34,7 +34,7 @@ OpenRAG has two Docker Compose files. Both files deploy the same applications an
- Prepare model providers and credentials.
During [application onboarding](#application-onboarding), you must select language model and embedding model providers.
If your chosen provider offers both types, you can use the same provider for both selections.
If your provider offers only one type, such as Anthropic, you must select two providers.
@ -84,7 +84,7 @@ To install OpenRAG with Docker Compose, do the following:
LANGFLOW_SECRET_KEY=your_secret_key
```

`OPENAI_API_KEY` is optional. You can provide it during [application onboarding](#application-onboarding) or choose a different model provider. If you want to set it in your `.env` file, you can find your OpenAI API key in your [OpenAI account](https://platform.openai.com/api-keys).

`LANGFLOW_SECRET_KEY` is optional. Langflow will auto-generate it if not set. For more information, see the [Langflow documentation](https://docs.langflow.org/api-keys-and-authentication#langflow-secret-key).
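If you do want to set the secret key yourself, any sufficiently random string works. One common way to generate one (an example approach, not an OpenRAG requirement) is:

```shell
# Generate a 64-character random hex value and append it to your .env file.
echo "LANGFLOW_SECRET_KEY=$(openssl rand -hex 32)" >> .env
```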
@ -159,7 +159,7 @@ To install OpenRAG with Docker Compose, do the following:
- **Backend API**: http://localhost:8000
- **Langflow**: http://localhost:7860

9. Continue with [application onboarding](#application-onboarding).

To stop `docling serve` when you're done with your OpenRAG deployment, run:


@ -41,7 +41,7 @@ If you prefer running Podman or Docker containers and manually editing `.env` fi
- Prepare model providers and credentials.
During [application onboarding](#application-onboarding), you must select language model and embedding model providers.
If your chosen provider offers both types, you can use the same provider for both selections.
If your provider offers only one type, such as Anthropic, you must select two providers.
@ -208,7 +208,7 @@ If OpenRAG detects OAuth credentials, it recommends **Advanced Setup**.
6. To start the Docling service, under **Native Services**, click **Start**.
7. To open the OpenRAG application, navigate to the TUI main menu, and then click **Open App**.
Alternatively, in your browser, navigate to `localhost:3000`.
8. Continue with [application onboarding](#application-onboarding).
</TabItem>
<TabItem value="Advanced setup" label="Advanced setup">
@ -257,7 +257,7 @@ If OpenRAG detects OAuth credentials, it recommends **Advanced Setup**.
- OneDrive: `/connectors/onedrive/webhook`
- SharePoint: `/connectors/sharepoint/webhook`

12. Continue with [application onboarding](#application-onboarding).

</TabItem>
</Tabs>
@ -436,11 +436,11 @@ To reinstall OpenRAG with a completely fresh setup:
This removes all containers, volumes, and data.
2. Optional: Delete your project's `.env` file.
The Reset operation doesn't remove your project's `.env` file, so your passwords, API keys, and OAuth settings can be preserved.
If you delete the `.env` file, run the [Set up OpenRAG with the TUI](#setup) process again to create a new configuration.
3. In the TUI Setup menu, follow these steps from [Basic Setup](#setup):
1. Click **Start All Services** to pull container images and start them.
2. Under **Native Services**, click **Start** to start the Docling service.
3. Click **Open App** to open the OpenRAG application.
4. Continue with [application onboarding](#application-onboarding).


@ -23,32 +23,47 @@ The Docker Compose files are populated with values from your `.env`, so you don'
Environment variables always take precedence over other variables.
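The precedence order can be sketched as a lookup chain. This is a simplified illustration of the rule above, not OpenRAG's actual configuration loader.

```python
import os

def resolve_setting(name: str, file_values: dict, defaults: dict):
    """Environment variables win, then .env/config-file values, then defaults."""
    if name in os.environ:
        return os.environ[name]
    if name in file_values:
        return file_values[name]
    return defaults.get(name)

# The environment variable overrides the file value for LOG_LEVEL;
# SERVICE_NAME falls through to the file value.
os.environ["LOG_LEVEL"] = "DEBUG"
file_values = {"LOG_LEVEL": "INFO", "SERVICE_NAME": "openrag-dev"}
defaults = {"LOG_LEVEL": "INFO", "SERVICE_NAME": "openrag"}

print(resolve_setting("LOG_LEVEL", file_values, defaults))     # DEBUG
```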
### Set environment variables {#set-environment-variables}

After you start OpenRAG, you must [stop and restart OpenRAG containers](/install#tui-container-management) to apply any changes you make to the `.env` file.

To set mutable environment variables, do the following:

1. Stop OpenRAG with the TUI or Docker Compose.
2. Set the values in the `.env` file:

```bash
LOG_LEVEL=DEBUG
LOG_FORMAT=json
SERVICE_NAME=openrag-dev
```

3. Start OpenRAG with the TUI or Docker Compose.

Certain environment variables that you set during [application onboarding](/install#application-onboarding), such as provider API keys and provider endpoints, require resetting the containers after modifying the `.env` file.

To change immutable variables with TUI-managed containers, you must [reinstall OpenRAG](/install#reinstall) and either delete or modify the `.env` file before you repeat the setup and onboarding process in the TUI.

To change immutable variables with self-managed containers, do the following:

1. Stop OpenRAG with Docker Compose.
2. Remove the containers:

```bash
docker-compose down
```

3. Update the values in your `.env` file.
4. Start OpenRAG with Docker Compose:

```bash
docker-compose up -d
```

5. Repeat [application onboarding](/install#application-onboarding). The values in your `.env` file are automatically populated.
## Supported environment variables
@ -56,18 +71,19 @@ All OpenRAG configuration can be controlled through environment variables.
### AI provider settings

Configure which models and providers OpenRAG uses to generate text and embeddings.
These are initially set during [application onboarding](/install#application-onboarding).
Some values are immutable and can only be changed by recreating the OpenRAG containers, as explained in [Set environment variables](#set-environment-variables).

| Variable | Default | Description |
|----------|---------|-------------|
| `EMBEDDING_MODEL` | `text-embedding-3-small` | Embedding model for generating vector embeddings for documents in the knowledge base and similarity search queries. Can be changed after application onboarding. Accepts one or more models. |
| `LLM_MODEL` | `gpt-4o-mini` | Language model for language processing and text generation in the **Chat** feature. |
| `MODEL_PROVIDER` | `openai` | Model provider, such as OpenAI or IBM watsonx.ai. |
| `OPENAI_API_KEY` | Not set | Optional OpenAI API key for the default model provider. For other providers, use `PROVIDER_API_KEY`. |
| `PROVIDER_API_KEY` | Not set | API key for the model provider. |
| `PROVIDER_ENDPOINT` | Not set | Custom provider endpoint for the IBM and Ollama model providers. Leave unset for other model providers. |
| `PROVIDER_PROJECT_ID` | Not set | Project ID for the IBM watsonx.ai model provider only. Leave unset for other model providers. |
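For example, a `.env` pointing OpenRAG at a local Ollama server might combine these variables as follows. All values here are illustrative placeholders; the exact provider identifier and model names depend on your onboarding choices, so check the `.env` file generated by the TUI for the authoritative values.

```bash
MODEL_PROVIDER=ollama
PROVIDER_ENDPOINT=http://localhost:11434
LLM_MODEL=llama3.1
EMBEDDING_MODEL=nomic-embed-text
```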
### Document processing
@ -78,7 +94,7 @@ Control how OpenRAG [processes and ingests documents](/ingestion) into your know
| `CHUNK_OVERLAP` | `200` | Overlap between chunks. |
| `CHUNK_SIZE` | `1000` | Text chunk size for document processing. |
| `DISABLE_INGEST_WITH_LANGFLOW` | `false` | Disable Langflow ingestion pipeline. |
| `DOCLING_OCR_ENGINE` | Set by OS | OCR engine for document processing. For macOS, `ocrmac`. For any other OS, `easyocr`. |
| `OCR_ENABLED` | `false` | Enable OCR for image processing. |
| `OPENRAG_DOCUMENTS_PATHS` | `./openrag-documents` | Document paths for ingestion. |
| `PICTURE_DESCRIPTIONS_ENABLED` | `false` | Enable picture descriptions. |
@ -90,18 +106,18 @@ Configure Langflow authentication.
| Variable | Default | Description |
|----------|---------|-------------|
| `LANGFLOW_AUTO_LOGIN` | `False` | Enable auto-login for Langflow. |
| `LANGFLOW_CHAT_FLOW_ID` | Built-in flow ID | This value is automatically set to the ID of the chat [flow](/agents). The default value is found in [`.env.example`](https://github.com/langflow-ai/openrag/blob/main/.env.example). Only change this value if you explicitly don't want to use this built-in flow. |
| `LANGFLOW_ENABLE_SUPERUSER_CLI` | `False` | Enable superuser privileges for Langflow CLI commands. |
| `LANGFLOW_INGEST_FLOW_ID` | Built-in flow ID | This value is automatically set to the ID of the ingestion [flow](/agents). The default value is found in [`.env.example`](https://github.com/langflow-ai/openrag/blob/main/.env.example). Only change this value if you explicitly don't want to use this built-in flow. |
| `LANGFLOW_KEY` | Automatically generated | Explicit Langflow API key. |
| `LANGFLOW_NEW_USER_IS_ACTIVE` | `False` | Whether new Langflow users are active by default. |
| `LANGFLOW_PUBLIC_URL` | `http://localhost:7860` | Public URL for the Langflow instance. |
| `LANGFLOW_SECRET_KEY` | Not set | Secret key for Langflow internal operations. |
| `LANGFLOW_SUPERUSER` | None, must be explicitly set | Langflow admin username. Required. |
| `LANGFLOW_SUPERUSER_PASSWORD` | None, must be explicitly set | Langflow admin password. Required. |
| `LANGFLOW_URL` | `http://localhost:7860` | URL for the Langflow instance. |
| `NUDGES_FLOW_ID` | Built-in flow ID | This value is automatically set to the ID of the nudges [flow](/agents). The default value is found in [`.env.example`](https://github.com/langflow-ai/openrag/blob/main/.env.example). Only change this value if you explicitly don't want to use this built-in flow. |
| `SYSTEM_PROMPT` | `You are a helpful AI assistant with access to a knowledge base. Answer questions based on the provided context.` | System prompt instructions for the agent driving the **Chat** flow. |
### OAuth provider settings
@ -134,30 +150,28 @@ Configure general system components, session management, and logging.
| `LANGFLOW_KEY_RETRIES` | `15` | Number of retries for Langflow key generation. |
| `LANGFLOW_KEY_RETRY_DELAY` | `2.0` | Delay between retries in seconds. |
| `LANGFLOW_VERSION` | `latest` | Langflow Docker image version. |
| `LOG_FORMAT` | Disabled | Set to `json` to enable JSON-formatted log output. |
| `LOG_LEVEL` | `INFO` | Logging level (`DEBUG`, `INFO`, `WARNING`, or `ERROR`). |
| `MAX_WORKERS` | `1` | Maximum number of workers for document processing. |
| `OPENRAG_VERSION` | `latest` | OpenRAG Docker image version. |
| `SERVICE_NAME` | `openrag` | Service name for logging. |
| `SESSION_SECRET` | Automatically generated | Secret used for session management. |
## Langflow runtime overrides

You can modify [flow](/agents) settings at runtime without permanently changing the flow's configuration.
Runtime overrides are implemented through _tweaks_, which are one-time parameter modifications that are passed to specific Langflow components during flow execution.
For more information on tweaks, see the Langflow documentation on [Input schema (tweaks)](https://docs.langflow.org/concepts-publish#input-schema).
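For example, a flow run request body might carry a tweaks object like the following. The component ID (`OpenAIModel-abc123`) and the `temperature` parameter are hypothetical placeholders; see the linked Langflow documentation for the exact request shape your flow expects.

```json
{
  "input_value": "What is OpenRAG?",
  "tweaks": {
    "OpenAIModel-abc123": {
      "temperature": 0.2
    }
  }
}
```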
## Default values and fallbacks

If a variable isn't set by environment variables or a configuration file, OpenRAG can use a default value if one is defined in the codebase.
Default values can be found in the OpenRAG repository:

* OpenRAG configuration: [`config_manager.py`](https://github.com/langflow-ai/openrag/blob/main/src/config/config_manager.py)
* System configuration: [`settings.py`](https://github.com/langflow-ai/openrag/blob/main/src/config/settings.py)
* Logging configuration: [`logging_config.py`](https://github.com/langflow-ai/openrag/blob/main/src/utils/logging_config.py)


@ -77,7 +77,7 @@ On macOS, this cache directory is typically a user cache directory such as `/Use
uvx openrag
```

If you don't need OCR, you can disable OCR-based processing in your ingestion settings to avoid requiring `easyocr`.

## Upgrade fails because the Langflow container already exists {#langflow-container-already-exists-during-upgrade}