From c8740437073f44b4c3b3482f8726efef34c41a0a Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Tue, 28 Oct 2025 15:26:13 -0400
Subject: [PATCH 1/9] install-and-onboarding
---
docs/docs/get-started/install.mdx | 32 ++++++++++++-----------
docs/docs/get-started/what-is-openrag.mdx | 2 +-
2 files changed, 18 insertions(+), 16 deletions(-)
diff --git a/docs/docs/get-started/install.mdx b/docs/docs/get-started/install.mdx
index d8fa252f..d7faaabb 100644
--- a/docs/docs/get-started/install.mdx
+++ b/docs/docs/get-started/install.mdx
@@ -9,7 +9,7 @@ import PartialOnboarding from '@site/docs/_partial-onboarding.mdx';
[Install the OpenRAG Python wheel](#install-python-wheel), and then run the [OpenRAG Terminal User Interface (TUI)](#setup) to start your OpenRAG deployment with a guided setup process.
-The OpenRAG Terminal User Interface (TUI) allows you to set up, configure, and monitor your OpenRAG deployment directly from the terminal, on any operating system.
+The OpenRAG Terminal User Interface (TUI) allows you to set up, configure, and monitor your OpenRAG deployment directly from the terminal.

@@ -102,10 +102,12 @@ If the TUI detects OAuth credentials, it enforces the **Advanced Setup** path.
1. To install OpenRAG with **Basic Setup**, click **Basic Setup** or press 1.
2. Click **Generate Passwords** to generate passwords for OpenSearch and Langflow.
+ Only the **OpenSearch Admin Password** and **OpenAI API key** are required.
+ To generate the optional **Langflow Admin Password**, click **Generate Password**.
3. Paste your OpenAI API key in the OpenAI API key field.
4. Click **Save Configuration**.
Your passwords are saved in the `.env` file used to start OpenRAG.
- 5. To start OpenRAG, click **Start Container Services**.
+ 5. To start OpenRAG, click **Start All Services**.
Startup pulls container images and runs them, so it can take some time.
When startup is complete, the TUI displays the following:
```bash
@@ -127,14 +129,14 @@ If the TUI detects OAuth credentials, it enforces the **Advanced Setup** path.
These are the URLs your OAuth provider will redirect back to after user sign-in.
Register these redirect values with your OAuth provider as they are presented in the TUI.
6. Click **Save Configuration**.
- 7. To start OpenRAG, click **Start Container Services**.
+ 7. To start OpenRAG, click **Start All Services**.
Startup pulls container images and runs them, so it can take some time.
When startup is complete, the TUI displays the following:
```bash
Services started successfully
Command completed successfully
```
- 8. To open the OpenRAG application, click **Open App**, press 6, or navigate to `http://localhost:3000`.
+ 8. To open the OpenRAG application, click **Open App**.
You are presented with your provider's OAuth sign-in screen.
After sign-in, you are redirected to the redirect URI.
@@ -159,9 +161,9 @@ If the TUI detects OAuth credentials, it enforces the **Advanced Setup** path.
After installation, the TUI can deploy, manage, and upgrade your OpenRAG containers.
-### Start container services
+### Start all services
-Click **Start Container Services** to start the OpenRAG containers.
+Click **Start All Services** to start the OpenRAG containers.
The TUI automatically detects your container runtime, and then checks whether your machine has compatible GPU support by looking for `CUDA`, `NVIDIA_SMI`, and Docker/Podman runtime support. The result determines which Docker Compose file OpenRAG uses.
The TUI then pulls the images and deploys the containers with the following command.
```bash
@@ -170,15 +172,6 @@ docker compose up -d
If images are missing, the TUI runs `docker compose pull`, then runs `docker compose up -d`.
-### Start native services
-
-A "native" service in OpenRAG refers to a service run natively on your machine, and not within a container.
-The `docling serve` process is a native service in OpenRAG, because it's a document processing service that is run on your local machine, and controlled separately from the containers.
-
-To start or stop `docling serve` or any other native services, in the TUI main menu, click **Start Native Services** or **Stop Native Services**.
-
-To view the status, port, or PID of a native service, in the TUI main menu, click [Status](#status).
-
### Status
The **Status** menu displays information on your container deployment.
@@ -207,6 +200,15 @@ When the first command is complete, OpenRAG removes any additional Docker object
docker system prune -f
```
+### Native services status
+
+A "native" service in OpenRAG refers to a service run locally on your machine, and not within a container.
+The `docling serve` process is a native service in OpenRAG, because it's a document-processing service that runs on your local machine and is controlled separately from the containers.
+
+To start or stop `docling serve` or any other native services, in the TUI Status menu, click **Stop** or **Restart**.
+
+To view the status, port, or PID of a native service, in the TUI main menu, click [Status](#status).
+
## Diagnostics
The **Diagnostics** menu provides health monitoring for your container runtimes and your OpenSearch security.
\ No newline at end of file
diff --git a/docs/docs/get-started/what-is-openrag.mdx b/docs/docs/get-started/what-is-openrag.mdx
index 129d2df9..ee227db9 100644
--- a/docs/docs/get-started/what-is-openrag.mdx
+++ b/docs/docs/get-started/what-is-openrag.mdx
@@ -7,7 +7,7 @@ OpenRAG is an open-source package for building agentic RAG systems that integrat
OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:
-* [Langflow](https://docs.langflow.org): Langflow is a popular tool for building and deploying AI agents and MCP servers. It supports all major LLMs, vector databases, and a growing library of AI tools.
+* [Langflow](https://docs.langflow.org): Langflow is a versatile tool for building and deploying AI agents and MCP servers. It supports all major LLMs, vector databases, and a growing library of AI tools.
* [OpenSearch](https://docs.opensearch.org/latest/): OpenSearch is a community-driven, Apache 2.0-licensed open source search and analytics suite that makes it easy to ingest, search, visualize, and analyze data.
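
The container-startup behavior this patch documents (runtime detection, GPU check, pull-then-up) can be sketched as follows. This is an illustrative approximation, not OpenRAG's implementation: the Compose file names and the `nvidia-smi` probe are assumptions.

```python
# Sketch of the TUI's startup logic (assumed; file names are hypothetical).
import shutil
import subprocess

def has_gpu_support() -> bool:
    """Approximate the GPU check: is `nvidia-smi` available on the PATH?"""
    return shutil.which("nvidia-smi") is not None

def select_compose_file(gpu: bool) -> str:
    """Pick a Compose file based on GPU support (hypothetical names)."""
    return "docker-compose.gpu.yml" if gpu else "docker-compose.yml"

def start_services(gpu: bool, dry_run: bool = True) -> list[list[str]]:
    """Build the `docker compose` commands: pull images, then start detached."""
    compose_file = select_compose_file(gpu)
    commands = [
        ["docker", "compose", "-f", compose_file, "pull"],
        ["docker", "compose", "-f", compose_file, "up", "-d"],
    ]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands

# Usage: start_services(has_gpu_support(), dry_run=False)
```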
From 37b1fa47ddb4fc45b8c4dbd263933d817a103bdc Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Tue, 28 Oct 2025 15:55:44 -0400
Subject: [PATCH 2/9] new-onboarding
---
docs/docs/_partial-onboarding.mdx | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/docs/docs/_partial-onboarding.mdx b/docs/docs/_partial-onboarding.mdx
index 6fc5c87e..c956eb53 100644
--- a/docs/docs/_partial-onboarding.mdx
+++ b/docs/docs/_partial-onboarding.mdx
@@ -17,17 +17,19 @@ Choose one LLM provider and complete only those steps:
3. To load 2 sample PDFs, enable **Sample dataset**.
This is recommended, but not required.
4. Click **Complete**.
- 5. Continue with the [Quickstart](/quickstart).
+ 5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
+ 6. Continue with the [Quickstart](/quickstart).
- 1. Complete the fields for **watsonx.ai API Endpoint**, **IBM API key**, and **IBM Project ID**.
+ 1. Complete the fields for **watsonx.ai API Endpoint**, **IBM Project ID**, and **IBM API key**.
These values are found in your IBM watsonx deployment.
2. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
3. To load 2 sample PDFs, enable **Sample dataset**.
This is recommended, but not required.
4. Click **Complete**.
- 5. Continue with the [Quickstart](/quickstart).
+ 5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
+ 6. Continue with the [Quickstart](/quickstart).
@@ -42,6 +44,7 @@ Choose one LLM provider and complete only those steps:
3. To load 2 sample PDFs, enable **Sample dataset**.
This is recommended, but not required.
4. Click **Complete**.
- 5. Continue with the [Quickstart](/quickstart).
+ 5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
+ 6. Continue with the [Quickstart](/quickstart).
\ No newline at end of file
From 3e618271c0f383ab2bf013cb9782460c83a9bedb Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Tue, 28 Oct 2025 17:08:12 -0400
Subject: [PATCH 3/9] quickstart
---
docs/docs/_partial-onboarding.mdx | 4 +++-
docs/docs/get-started/quickstart.mdx | 22 +++++++++++++---------
2 files changed, 16 insertions(+), 10 deletions(-)
diff --git a/docs/docs/_partial-onboarding.mdx b/docs/docs/_partial-onboarding.mdx
index c956eb53..3f2de8fb 100644
--- a/docs/docs/_partial-onboarding.mdx
+++ b/docs/docs/_partial-onboarding.mdx
@@ -3,7 +3,9 @@ import TabItem from '@theme/TabItem';
## Application onboarding
-The first time you start OpenRAG, whether using the TUI or a `.env` file, you must complete application onboarding.
+The first time you start OpenRAG, whether using the TUI or a `.env` file, it's recommended that you complete application onboarding.
+
+To skip onboarding, click **Skip onboarding**.
Values from onboarding can be changed later in the OpenRAG **Settings** page.
diff --git a/docs/docs/get-started/quickstart.mdx b/docs/docs/get-started/quickstart.mdx
index 80259617..92ed71c8 100644
--- a/docs/docs/get-started/quickstart.mdx
+++ b/docs/docs/get-started/quickstart.mdx
@@ -7,7 +7,7 @@ import Icon from "@site/src/components/icon/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-Get started with OpenRAG by loading your knowledge, swapping out your language model, and then chatting with the OpenRAG API.
+Get started with OpenRAG by loading your knowledge, swapping out your language model, and then chatting with the Langflow API.
## Prerequisites
@@ -17,20 +17,20 @@ Get started with OpenRAG by loading your knowledge, swapping out your language m
1. In OpenRAG, click **Chat**.
The chat is powered by the OpenRAG OpenSearch Agent.
- For more information, see [Langflow Agents](/agents).
+ For more information, see [Langflow in OpenRAG](/agents).
2. Ask `What documents are available to you?`
The agent responds with a message summarizing the documents that OpenRAG loads by default.
Knowledge is stored in OpenSearch.
- For more information, see [Knowledge](/knowledge).
+ For more information, see [OpenSearch in OpenRAG](/knowledge).
3. To confirm the agent is correct about the default knowledge, click **Knowledge**.
The **Knowledge** page lists the documents OpenRAG has ingested into the OpenSearch vector database.
- Click on a document to display the chunks derived from splitting the default documents into the vector database.
-4. To add documents to your knowledge base, click **Add Knowledge**.
- * Select **Add File** to add a single file from your local machine.
- * Select **Process Folder** to process an entire folder of documents from your local machine.
+ Click on a document to display the chunks derived from splitting the default documents into the OpenSearch vector database.
+4. To add documents to your knowledge base, click **Add Knowledge**.
+ * Select **File** to add a single file from your local machine.
+ * Select **Folder** to process an entire folder of documents from your local machine. The default directory is `/documents` in your OpenRAG directory.
* Select your cloud storage provider to add knowledge from an OAuth-connected storage provider. For more information, see [OAuth ingestion](/knowledge#oauth-ingestion).
5. Return to the Chat window and ask a question about your loaded data.
- For example, with a manual about a PC tablet loaded, ask `How do I connect this device to WiFI?`
+ For example, with a manual about a PC tablet loaded, ask `How do I connect this device to WiFi?`
The agent responds with a message indicating it now has your knowledge as context for answering questions.
6. Click **Function Call: search_documents (tool_call)**.
This log describes how the agent uses tools.
@@ -44,8 +44,12 @@ In this example, you'll try a different LLM to demonstrate how the Agent's respo
1. To edit the Agent's behavior, click **Edit in Langflow**.
You can access the **Language Model** and **Agent Instructions** fields more quickly on this page, but for illustration purposes, navigate to the Langflow visual builder.
+To revert the flow to its initial state, click **Restore flow**.
2. OpenRAG warns you that you're entering Langflow. Click **Proceed**.
-The OpenRAG OpenSearch Agent flow appears in a new browser window.
+
+ If Langflow requests login information, enter the `LANGFLOW_SUPERUSER` and `LANGFLOW_SUPERUSER_PASSWORD` from the `.env` file in your OpenRAG directory.
+
+ The OpenRAG OpenSearch Agent flow appears in a new browser window.

3. Find the **Language Model** component, and then change the **Model Name** field to a different OpenAI model.
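
The quickstart above mentions chatting with the Langflow API; a minimal sketch of such a call is below. The `/api/v1/run/{flow_id}` endpoint shape follows Langflow's documented API, but the base URL, flow ID, and API key here are placeholders for your deployment's actual values.

```python
# Hedged sketch of a Langflow API chat request (placeholder URL, flow ID, key).
import json
import urllib.request

def build_run_request(base_url: str, flow_id: str, message: str, api_key: str):
    """Build a POST request for Langflow's /api/v1/run/{flow_id} endpoint."""
    url = f"{base_url}/api/v1/run/{flow_id}"
    payload = {"input_value": message, "input_type": "chat", "output_type": "chat"}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )

req = build_run_request(
    "http://localhost:7860",          # default Langflow port; yours may differ
    "openrag-agent",                  # hypothetical flow ID
    "What documents are available?",
    "YOUR_API_KEY",
)
# urllib.request.urlopen(req)  # uncomment against a running deployment
```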
From cc0eaf8317832ec56a9cfc6ff9893bcf75251f29 Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Tue, 28 Oct 2025 17:30:35 -0400
Subject: [PATCH 4/9] clarify-ingestion
---
docs/docs/core-components/ingestion.mdx | 26 ++++++++++++++-----------
1 file changed, 15 insertions(+), 11 deletions(-)
diff --git a/docs/docs/core-components/ingestion.mdx b/docs/docs/core-components/ingestion.mdx
index 2c1bfa1a..4b6787df 100644
--- a/docs/docs/core-components/ingestion.mdx
+++ b/docs/docs/core-components/ingestion.mdx
@@ -15,14 +15,16 @@ Docling ingests documents from your local machine or OAuth connectors, splits th
OpenRAG chose Docling for its support for a wide variety of file formats, high performance, and advanced understanding of tables and images.
-## Docling ingestion settings
+To modify OpenRAG's Knowledge Ingest settings or flows, click **Settings**.
+
+## Knowledge ingestion settings
These settings configure the Docling ingestion parameters.
OpenRAG will warn you if `docling serve` is not running.
To start or stop `docling serve` or any other native services, in the TUI main menu, click **Start Native Services** or **Stop Native Services**.
-**Embedding model** determines which AI model is used to create vector embeddings. The default is `text-embedding-3-small`.
+**Embedding model** determines which AI model is used to create vector embeddings. The default is the OpenAI `text-embedding-3-small` model.
**Chunk size** determines how large each text chunk is in number of characters.
Larger chunks yield more context per chunk, but may include irrelevant information. Smaller chunks yield more precise semantic search, but may lack context.
@@ -32,6 +34,8 @@ The default value of `1000` characters provides a good starting point that balan
Use larger overlap values for documents where context is most important, and use smaller overlap values for simpler documents, or when optimization is most important.
The default value of 200 characters of overlap with a chunk size of 1000 (20% overlap) is suitable for general use cases. Decrease the overlap to 10% for a more efficient pipeline, or increase to 40% for more complex documents.
+**Table Structure** enables Docling's [`DocumentConverter`](https://docling-project.github.io/docling/reference/document_converter/) tool for parsing tables. Instead of treating tables as plain text, tables are output as structured table data with preserved relationships and metadata. **Table Structure** is enabled by default.
+
**OCR** enables or disables OCR processing when extracting text from images and scanned documents.
OCR is disabled by default. This setting is best suited for processing text-based documents as quickly as possible with Docling's [`DocumentConverter`](https://docling-project.github.io/docling/reference/document_converter/). Images are ignored and not processed.
@@ -41,14 +45,6 @@ If OpenRAG detects that the local machine is running on macOS, OpenRAG uses the
**Picture descriptions** adds image descriptions generated by the [SmolVLM-256M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct) model to OCR processing. Enabling picture descriptions can slow ingestion performance.
-## Use OpenRAG default ingestion instead of Docling serve
-
-If you want to use OpenRAG's built-in pipeline instead of Docling serve, set `DISABLE_INGEST_WITH_LANGFLOW=true` in [Environment variables](/reference/configuration#document-processing).
-
-The built-in pipeline still uses the Docling processor, but uses it directly without the Docling Serve API.
-
-For more information, see [`processors.py` in the OpenRAG repository](https://github.com/langflow-ai/openrag/blob/main/src/models/processors.py#L58).
-
## Knowledge ingestion flows
[Flows](https://docs.langflow.org/concepts-overview) in Langflow are functional representations of application workflows, with multiple [component](https://docs.langflow.org/concepts-components) nodes connected as single steps in a workflow.
@@ -74,4 +70,12 @@ An additional knowledge ingestion flow is included in OpenRAG, where it is used
The agent calls this component to fetch web content, and the results are ingested into OpenSearch.
For more on using MCP clients in Langflow, see [MCP clients](https://docs.langflow.org/mcp-client).\
-To connect additional MCP servers to the MCP client, see [Connect to MCP servers from your application](https://docs.langflow.org/mcp-tutorial).
\ No newline at end of file
+To connect additional MCP servers to the MCP client, see [Connect to MCP servers from your application](https://docs.langflow.org/mcp-tutorial).
+
+## Use OpenRAG default ingestion instead of Docling serve
+
+If you want to use OpenRAG's built-in pipeline instead of Docling serve, set `DISABLE_INGEST_WITH_LANGFLOW=true` in [Environment variables](/reference/configuration#document-processing).
+
+The built-in pipeline still uses the Docling processor, but uses it directly without the Docling Serve API.
+
+For more information, see [`processors.py` in the OpenRAG repository](https://github.com/langflow-ai/openrag/blob/main/src/models/processors.py#L58).
\ No newline at end of file
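
The chunk-size and chunk-overlap settings this patch documents interact as sketched below. This is an illustrative character-based splitter only; Docling's actual chunking is structure-aware, not a raw character window.

```python
# Illustrative chunker for the size/overlap settings (not Docling's algorithm).
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into chunks of `chunk_size` chars, each sharing `overlap`
    chars with the previous chunk (defaults mirror OpenRAG's 1000/200)."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # advance 800 chars per chunk by default
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```

With the defaults, a 2,000-character document yields three chunks, and the last 200 characters of each chunk repeat at the start of the next, preserving context across boundaries.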
From cd2c948e688cfdede13dbbc58014b51175097d78 Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Tue, 28 Oct 2025 17:33:14 -0400
Subject: [PATCH 5/9] more
---
docs/docs/core-components/ingestion.mdx | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/docs/core-components/ingestion.mdx b/docs/docs/core-components/ingestion.mdx
index 4b6787df..585ea6e3 100644
--- a/docs/docs/core-components/ingestion.mdx
+++ b/docs/docs/core-components/ingestion.mdx
@@ -8,14 +8,14 @@ import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialModifyFlows from '@site/docs/_partial-modify-flows.mdx';
-OpenRAG uses [Docling](https://docling-project.github.io/docling/) for its document ingestion pipeline.
+OpenRAG uses [Docling](https://docling-project.github.io/docling/) for document ingestion.
More specifically, OpenRAG uses [Docling Serve](https://github.com/docling-project/docling-serve), which starts a `docling serve` process on your local machine and runs Docling ingestion through an API service.
Docling ingests documents from your local machine or OAuth connectors, splits them into chunks, and stores them as separate, structured documents in the OpenSearch `documents` index.
OpenRAG chose Docling for its support for a wide variety of file formats, high performance, and advanced understanding of tables and images.
-To modify OpenRAG's Knowledge Ingest settings or flows, click **Settings**.
+To modify OpenRAG's ingestion settings, including the Docling settings and ingestion flows, click **Settings**.
## Knowledge ingestion settings
From 362721595161059e446fcbed8832098bc69789ee Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Wed, 29 Oct 2025 10:03:14 -0400
Subject: [PATCH 6/9] knowledge-filters
---
docs/docs/core-components/agents.mdx | 2 +-
docs/docs/core-components/ingestion.mdx | 2 +-
docs/docs/core-components/knowledge.mdx | 44 +++++++++++++------------
3 files changed, 25 insertions(+), 23 deletions(-)
diff --git a/docs/docs/core-components/agents.mdx b/docs/docs/core-components/agents.mdx
index a7c5ef24..88102a60 100644
--- a/docs/docs/core-components/agents.mdx
+++ b/docs/docs/core-components/agents.mdx
@@ -52,7 +52,7 @@ This filter is the [Knowledge filter](/knowledge#create-knowledge-filters), and
For an example of changing out the agent's language model in OpenRAG, see the [Quickstart](/quickstart#change-components).
-To restore the flow to its initial state, in OpenRAG, click **Settings**, and then click **Restore Flow**.
+To restore the flow to its initial state, in OpenRAG, click **Settings**, and then click **Restore Flow**.
OpenRAG warns you that this discards all custom settings. Click **Restore** to restore the flow.
## Additional Langflow functionality
diff --git a/docs/docs/core-components/ingestion.mdx b/docs/docs/core-components/ingestion.mdx
index 585ea6e3..f72e9ab0 100644
--- a/docs/docs/core-components/ingestion.mdx
+++ b/docs/docs/core-components/ingestion.mdx
@@ -15,7 +15,7 @@ Docling ingests documents from your local machine or OAuth connectors, splits th
OpenRAG chose Docling for its support for a wide variety of file formats, high performance, and advanced understanding of tables and images.
-To modify OpenRAG's ingestion settings, including the Docling settings and ingestion flows, click **Settings**.
+To modify OpenRAG's ingestion settings, including the Docling settings and ingestion flows, click <Icon name="Settings" aria-hidden="true"/> **Settings**.
## Knowledge ingestion settings
diff --git a/docs/docs/core-components/knowledge.mdx b/docs/docs/core-components/knowledge.mdx
index cae39659..045918f1 100644
--- a/docs/docs/core-components/knowledge.mdx
+++ b/docs/docs/core-components/knowledge.mdx
@@ -31,10 +31,10 @@ The **Knowledge Ingest** flow uses Langflow's [**File** component](https://docs.
The default path to your local folder is mounted from the `./documents` folder in your OpenRAG project directory to the `/app/documents/` directory inside the Docker container. Files added to the host or the container will be visible in both locations. To configure this location, modify the **Documents Paths** variable in either the TUI's [Advanced Setup](/install#setup) menu or in the `.env` used by Docker Compose.
-To load and process a single file from the mapped location, click **Add Knowledge**, and then click **Add File**.
+To load and process a single file from the mapped location, click **Add Knowledge**, and then click **File**.
The file is loaded into your OpenSearch database, and appears in the Knowledge page.
-To load and process a directory from the mapped location, click **Add Knowledge**, and then click **Process Folder**.
+To load and process a directory from the mapped location, click **Add Knowledge**, and then click **Folder**.
The files are loaded into your OpenSearch database, and appear in the Knowledge page.
### Ingest files through OAuth connectors {#oauth-ingestion}
@@ -61,11 +61,11 @@ If you wish to use another provider, add the secrets to another provider.
1. Stop the Docker deployment.
2. Add the OAuth provider's client and secret key in the `.env` file for Docker Compose.
- ```bash
- GOOGLE_OAUTH_CLIENT_ID='YOUR_OAUTH_CLIENT_ID'
- GOOGLE_OAUTH_CLIENT_SECRET='YOUR_OAUTH_CLIENT_SECRET'
- ```
- 3. Save your `.env`. file.
+ ```bash
+ GOOGLE_OAUTH_CLIENT_ID='YOUR_OAUTH_CLIENT_ID'
+ GOOGLE_OAUTH_CLIENT_SECRET='YOUR_OAUTH_CLIENT_SECRET'
+ ```
+ 3. Save your `.env` file.
4. Start the Docker deployment.
@@ -75,11 +75,11 @@ A successful authentication opens OpenRAG with the required scopes for your conn
To add knowledge from an OAuth-connected storage provider, do the following:
-1. Click **Add Knowledge**, and then select the storage provider, for example, **Google Drive**.
+1. Click **Add Knowledge**, and then select the storage provider, for example, **Google Drive**.
The **Add Cloud Knowledge** page opens.
-2. To add files or folders from the connected storage, click **Add Files**.
+2. To add files or folders from the connected storage, click **Add Files**.
Select the files or folders you want and click **Select**.
-You can select multiples.
+You can select multiple files.
3. When your files are selected, click **Ingest Files**.
The ingestion process may take some time, depending on the size of your documents.
4. When ingestion is complete, your documents are available in the Knowledge screen.
@@ -104,11 +104,11 @@ Knowledge filters help agents work more efficiently with large document collecti
To create a knowledge filter, do the following:
-1. Click **All Knowledge**, and then click **Create New Filter**.
- The **Create New Knowledge Filter** pane appears.
-2. Enter a **Name** and **Description**, and then click **Create Filter**.
-A new filter is created with default settings that match everything.
-3. To modify the default filter, click **All Knowledge**, and then click your new filter to edit it in the **Knowledge Filter** pane.
+1. Click **Knowledge**, and then click **Knowledge Filters**.
+ The **Knowledge Filter** pane appears.
+2. Enter a **Name** and **Description**, and then click **Create Filter**.
+A new filter is created with default settings that match all documents.
+3. To modify the filter, click **Knowledge**, and then click your new filter to edit it in the **Knowledge Filter** pane.
The following filter options are configurable.
@@ -116,15 +116,17 @@ A new filter is created with default settings that match everything.
* **Data Sources**: Select specific data sources or folders to include.
* **Document Types**: Filter by file type.
* **Owners**: Filter by who uploaded the documents.
- * **Sources**: Filter by connector types, such as local upload or Google Drive.
- * **Result Limit**: Set maximum number of results. The default is `10`.
+ * **Connectors**: Filter by connector types, such as local upload or Google Drive.
+ * **Response Limit**: Set the maximum number of results. The default is `10`.
 * **Score Threshold**: Set the minimum relevance score. The default score is `0`.
-4. When you're done editing the filter, click **Save Configuration**.
+4. When you're done editing the filter, click **Update Filter**.
-5. To apply the filter to OpenRAG globally, click **All Knowledge**, and then select the filter to apply.
+5. To apply the filter to OpenRAG globally, click **Knowledge**, and then select the filter to apply. Only one filter can be enabled at a time.
- To apply the filter to a single chat session, in the **Chat** window, click **@**, and then select the filter to apply.
+ To apply the filter to a single chat session, in the **Chat** window, click **@**, and then select the filter to apply.
+
+ To delete the filter, in the **Knowledge Filter** pane, click **Delete Filter**.
## OpenRAG default configuration
@@ -132,7 +134,7 @@ OpenRAG automatically detects and configures the correct vector dimensions for e
The complete list of supported models is available at [`models_service.py` in the OpenRAG repository](https://github.com/langflow-ai/openrag/blob/main/src/services/models_service.py).
-You can use custom embedding models by specifying them in your configuration.
+You can use custom embe*dding models by specifying them in your configuration.
If you use an unknown embedding model, OpenRAG will automatically fall back to `1536` dimensions and log a warning. The system will continue to work, but search quality may be affected if the actual model dimensions differ from `1536`.
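
The dimension-fallback behavior described above can be sketched as follows. The lookup table is an assumed stand-in for the real mapping in `models_service.py`; the two OpenAI dimensions shown are published values.

```python
# Sketch of embedding-dimension lookup with the documented 1536 fallback.
import logging

KNOWN_DIMENSIONS = {
    "text-embedding-3-small": 1536,  # OpenAI published dimension
    "text-embedding-3-large": 3072,  # OpenAI published dimension
}
DEFAULT_DIMENSIONS = 1536

def embedding_dimensions(model: str) -> int:
    """Return the vector dimensions for a model, falling back to 1536
    with a logged warning for unknown models."""
    if model not in KNOWN_DIMENSIONS:
        logging.warning(
            "Unknown embedding model %r; falling back to %d dimensions",
            model, DEFAULT_DIMENSIONS,
        )
        return DEFAULT_DIMENSIONS
    return KNOWN_DIMENSIONS[model]
```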
From 91a0101c3625e8fd62fad0bec2b68313a85f16a8 Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Wed, 29 Oct 2025 10:37:49 -0400
Subject: [PATCH 7/9] add-file-thru-chat
---
docs/docs/core-components/knowledge.mdx | 2 ++
1 file changed, 2 insertions(+)
diff --git a/docs/docs/core-components/knowledge.mdx b/docs/docs/core-components/knowledge.mdx
index 045918f1..d5235b91 100644
--- a/docs/docs/core-components/knowledge.mdx
+++ b/docs/docs/core-components/knowledge.mdx
@@ -37,6 +37,8 @@ The file is loaded into your OpenSearch database, and appears in the Knowledge p
To load and process a directory from the mapped location, click **Add Knowledge**, and then click **Folder**.
The files are loaded into your OpenSearch database, and appear in the Knowledge page.
+To add files directly to a chat session, click in the chat input and select the files you want to include. Files added this way are processed and made available to the agent for the current conversation, and are not permanently added to the knowledge base.
+
### Ingest files through OAuth connectors {#oauth-ingestion}
OpenRAG supports Google Drive, OneDrive, and Sharepoint as OAuth connectors for seamless document synchronization.
From d935b233eee59bfac86c3c01baa3b7e65c1a0a67 Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Wed, 29 Oct 2025 13:15:26 -0400
Subject: [PATCH 8/9] Apply suggestion from @aimurphy
Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>
---
docs/docs/core-components/knowledge.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/docs/core-components/knowledge.mdx b/docs/docs/core-components/knowledge.mdx
index d5235b91..588f1f45 100644
--- a/docs/docs/core-components/knowledge.mdx
+++ b/docs/docs/core-components/knowledge.mdx
@@ -136,7 +136,7 @@ OpenRAG automatically detects and configures the correct vector dimensions for e
The complete list of supported models is available at [`models_service.py` in the OpenRAG repository](https://github.com/langflow-ai/openrag/blob/main/src/services/models_service.py).
-You can use custom embe*dding models by specifying them in your configuration.
+You can use custom embedding models by specifying them in your configuration.
If you use an unknown embedding model, OpenRAG will automatically fall back to `1536` dimensions and log a warning. The system will continue to work, but search quality may be affected if the actual model dimensions differ from `1536`.
From abb1f1e4bb72e92020e228f84a3702ffc05f5b4e Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Wed, 29 Oct 2025 13:15:37 -0400
Subject: [PATCH 9/9] Apply suggestion from @aimurphy
Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>
---
docs/docs/get-started/install.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/docs/get-started/install.mdx b/docs/docs/get-started/install.mdx
index d7faaabb..d598476e 100644
--- a/docs/docs/get-started/install.mdx
+++ b/docs/docs/get-started/install.mdx
@@ -202,7 +202,7 @@ docker system prune -f
### Native services status
-A "native" service in OpenRAG refers to a service run locally on your machine, and not within a container.
+A _native service_ in OpenRAG refers to a service run locally on your machine, and not within a container.
The `docling serve` process is a native service in OpenRAG, because it's a document-processing service that runs on your local machine and is controlled separately from the containers.
To start or stop `docling serve` or any other native services, in the TUI Status menu, click **Stop** or **Restart**.
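
The native-service status that the TUI reports (running state, port, PID) can be approximated with a simple TCP probe. The host and port below are whatever your deployment uses; this is a generic sketch, not the TUI's implementation.

```python
# Hedged sketch: is a native service such as `docling serve` accepting
# connections on its port?
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage: is_listening("127.0.0.1", 5001)  # substitute your service's port
```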