diff --git a/docs/docs/_partial-modify-flows.mdx b/docs/docs/_partial-modify-flows.mdx
new file mode 100644
index 00000000..852777e5
--- /dev/null
+++ b/docs/docs/_partial-modify-flows.mdx
@@ -0,0 +1,5 @@
+import Icon from "@site/src/components/icon/icon";
+
+All flows included with OpenRAG are designed to be modular, performant, and provider-agnostic.
+To modify a flow, click **Settings**, and click **Edit in Langflow**.
+Flows are edited in the same way as in the [Langflow visual editor](https://docs.langflow.org/concepts-overview).
\ No newline at end of file
diff --git a/docs/docs/core-components/agents.mdx b/docs/docs/core-components/agents.mdx
index 121ca3d5..8388bd60 100644
--- a/docs/docs/core-components/agents.mdx
+++ b/docs/docs/core-components/agents.mdx
@@ -3,6 +3,11 @@ title: Agents powered by Langflow
slug: /agents
---
+import Icon from "@site/src/components/icon/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialModifyFlows from '@site/docs/_partial-modify-flows.mdx';
+
OpenRAG leverages Langflow's Agent component to power the OpenRAG Open Search Agent flow.
This flow intelligently chats with your knowledge by embedding your query, comparing it to the vector database embeddings, and generating a response with the LLM.
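+
+The flow itself is built visually in Langflow, but the underlying retrieval pattern can be sketched in a few lines of Python. This is not OpenRAG's implementation, only a minimal illustration of the embed, search, and generate steps; the connection settings, the `embedding` and `text` field names, and the `gpt-4o-mini` model are assumptions.
+
+```python
+from openai import OpenAI
+from opensearchpy import OpenSearch
+
+llm = OpenAI()  # reads OPENAI_API_KEY from the environment
+search = OpenSearch(hosts=[{"host": "localhost", "port": 9200}],
+                    http_auth=("admin", "admin"), use_ssl=True, verify_certs=False)
+
+question = "What does OpenRAG use for its knowledge store?"
+
+# 1. Embed the question with the same model used at ingestion time.
+vector = llm.embeddings.create(model="text-embedding-3-small",
+                               input=question).data[0].embedding
+
+# 2. Compare the query embedding with the stored chunk embeddings.
+hits = search.search(index="documents", body={
+    "size": 4,
+    "query": {"knn": {"embedding": {"vector": vector, "k": 4}}},
+})["hits"]["hits"]
+context = "\n\n".join(hit["_source"].get("text", "") for hit in hits)
+
+# 3. Generate a response with the LLM, grounded in the retrieved chunks.
+answer = llm.chat.completions.create(
+    model="gpt-4o-mini",
+    messages=[
+        {"role": "system", "content": "Answer using only the provided context."},
+        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
+    ],
+)
+print(answer.choices[0].message.content)
+```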
@@ -25,7 +30,7 @@ In an agentic context, tools are functions that the agent can run to perform tas
## Use the OpenRAG Open Search Agent flow
If you've chatted with your knowledge in OpenRAG, you've already experienced the OpenRAG Open Search Agent chat flow.
-To view the flow, click **Settings**, and then click **Edit in Langflow**.
+To view the flow, click **Settings**, and then click **Edit in Langflow**.
This flow contains seven components:
* The Agent component orchestrates the entire flow by deciding when to search the knowledge base, how to formulate search queries, and how to combine retrieved information with the user's question to generate a comprehensive response.
@@ -38,9 +43,7 @@ The Agent behaves according to the prompt in the **Agent Instructions** field.
This filter is the Knowledge filter, and filters which knowledge sources to search through.
* The Agent component's Output port is connected to the Chat Output component, which returns the final response to the user or application.
-All flows included with OpenRAG are designed to be modular, performant, and provider-agnostic.
-To modify a flow, click **Settings**, and click **Edit in Langflow**.
-Flows are edited in the same way as in the [Langflow visual editor](https://docs.langflow.org/concepts-overview).
+<PartialModifyFlows />
For an example of changing out the agent's LLM in OpenRAG, see the [Quickstart](/quickstart#change-components).
diff --git a/docs/docs/core-components/knowledge.mdx b/docs/docs/core-components/knowledge.mdx
index 67f1ef24..5e003880 100644
--- a/docs/docs/core-components/knowledge.mdx
+++ b/docs/docs/core-components/knowledge.mdx
@@ -1,4 +1,101 @@
---
title: Knowledge stored with OpenSearch
slug: /knowledge
----
\ No newline at end of file
+---
+
+import Icon from "@site/src/components/icon/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialModifyFlows from '@site/docs/_partial-modify-flows.mdx';
+
+OpenRAG uses [OpenSearch](https://docs.opensearch.org/latest/) for its vector-backed knowledge store.
+OpenSearch provides powerful hybrid search capabilities with enterprise-grade security and multi-tenancy support.
+
+## OpenRAG default configuration
+
+OpenRAG creates a specialized OpenSearch index called `documents` with the values defined in `src/config/settings.py`:
+- **Vector Dimensions**: 1536-dimensional embeddings using OpenAI's `text-embedding-3-small` model.
+- **KNN Vector Type**: Uses `knn_vector` field type with `disk_ann` method and `jvector` engine.
+- **Distance Metric**: L2 (Euclidean) distance for vector similarity.
+- **Performance Optimization**: Configured with `ef_construction: 100` and `m: 16` parameters.
+
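+The snippet below is a minimal sketch of how such an index could be created with the `opensearch-py` client, using the values listed above. The host, credentials, and the `text` and `embedding` field names are illustrative; OpenRAG's actual index settings live in `src/config/settings.py`.
+
+```python
+from opensearchpy import OpenSearch
+
+client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}],
+                    http_auth=("admin", "admin"), use_ssl=True, verify_certs=False)
+
+client.indices.create(
+    index="documents",
+    body={
+        "settings": {"index": {"knn": True}},  # enable k-NN search on this index
+        "mappings": {
+            "properties": {
+                "text": {"type": "text"},  # chunk text
+                "embedding": {
+                    "type": "knn_vector",
+                    "dimension": 1536,               # text-embedding-3-small
+                    "method": {
+                        "name": "disk_ann",
+                        "engine": "jvector",
+                        "space_type": "l2",          # Euclidean distance
+                        "parameters": {"ef_construction": 100, "m": 16},
+                    },
+                },
+            }
+        },
+    },
+)
+```
+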
+OpenRAG supports hybrid search, which combines semantic and keyword search.
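+
+One way to express this combination directly against the index is a `bool` query that pairs a keyword `match` clause with a `knn` clause, as in the sketch below. It reuses the `client` and field names assumed in the previous snippet; OpenSearch also offers a dedicated `hybrid` query type with score normalization, which OpenRAG's flows may use instead.
+
+```python
+from openai import OpenAI
+
+query_text = "How does OpenRAG store knowledge?"
+query_vector = OpenAI().embeddings.create(model="text-embedding-3-small",
+                                          input=query_text).data[0].embedding
+
+response = client.search(index="documents", body={
+    "size": 5,
+    "query": {
+        "bool": {
+            "should": [
+                {"match": {"text": query_text}},                           # keyword relevance
+                {"knn": {"embedding": {"vector": query_vector, "k": 5}}},  # semantic similarity
+            ]
+        }
+    },
+})
+
+for hit in response["hits"]["hits"]:
+    print(hit["_score"], hit["_source"].get("text", "")[:80])
+```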
+
+## Explore knowledge
+
+To explore your current knowledge, click **Knowledge**.
+The Knowledge page lists the documents OpenRAG has ingested into the OpenSearch vector database's `documents` index.
+
+Click a document to display the chunks that were created when the document was split and stored in the vector database.
+Documents are processed by the **Knowledge Ingest** flow, so to split your documents differently, edit that flow.
+
+<PartialModifyFlows />
+
+## Ingest knowledge
+
+OpenRAG supports knowledge ingestion through direct file uploads and OAuth connectors.
+
+### Upload files
+
+Files uploaded directly through the web interface are processed immediately using the standard ingestion pipeline.
+
+### Upload files through OAuth connectors
+
+OpenRAG supports the following enterprise-grade OAuth connectors for document synchronization:
+
+- **Google Drive**
+- **OneDrive**
+- **AWS**
+
+OAuth integration allows your OpenRAG server to authenticate users and applications through any OAuth 2.0 compliant service. When users or applications connect to your server, they are redirected to your chosen OAuth provider to authenticate. Upon successful authentication, they are granted access to the connector.
+
+Before configuring OAuth in OpenRAG, you must first set up an OAuth application with an external OAuth 2.0 service provider: register your OpenRAG server as an OAuth client, and then obtain the client ID and client secret to complete the configuration in OpenRAG.
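+
+For reference, the first leg of that OAuth 2.0 flow is an authorization request like the sketch below. This is illustrative only, assuming Google as the provider; the client ID, redirect URI, and scope are placeholders, and OpenRAG builds this request for you once the credentials are configured.
+
+```python
+from urllib.parse import urlencode
+
+params = {
+    "client_id": "YOUR_OAUTH_CLIENT_ID",
+    "redirect_uri": "http://localhost:3000/oauth/callback",  # placeholder callback path
+    "response_type": "code",
+    "scope": "https://www.googleapis.com/auth/drive.readonly",
+    "access_type": "offline",
+}
+
+# The user signs in here; the provider then redirects back with an authorization code
+# that the server exchanges for access and refresh tokens.
+auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
+print(auth_url)
+```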
+
+To add an OAuth connector to OpenRAG, do the following.
+This example uses Google OAuth.
+To use a different provider, substitute that provider's client ID and client secret.
+
+<Tabs>
+<TabItem value="tui" label="OpenRAG TUI">
+ 1. If OpenRAG is running, stop it with **Status** > **Stop Services**.
+ 2. Click **Advanced Setup**.
+ 3. Add the OAuth provider's client ID and client secret in the [Advanced Setup](/install#advanced-setup) menu.
+ 4. Click **Save Configuration**.
+ The TUI generates a new `.env` file with your OAuth values.
+ 5. Click **Start Container Services**.
+</TabItem>
+<TabItem value="docker" label="Docker Compose">
+ 1. Stop the Docker deployment.
+ 2. Add the OAuth provider's client ID and client secret to the `.env` file for Docker Compose.
+ ```bash
+ GOOGLE_OAUTH_CLIENT_ID='YOUR_OAUTH_CLIENT_ID'
+ GOOGLE_OAUTH_CLIENT_SECRET='YOUR_OAUTH_CLIENT_SECRET'
+ ```
+ 3. Save your `.env` file.
+ 4. Start the Docker deployment.
+</TabItem>
+</Tabs>
+
+The OpenRAG frontend at `http://localhost:3000` now redirects to your OAuth provider's login page.
+After you authenticate, the provider redirects back to OpenRAG, which opens with the required scopes for your connected storage.
+
+To add knowledge from an OAuth-connected storage provider, do the following:
+
+1. Click **Add Knowledge**, and then select the storage provider, for example, **Google Drive**.
+The **Add Cloud Knowledge** page opens.
+2. To add files or folders from the connected storage, click **Add Files**.
+Select the files or folders you want and click **Select**.
+You can select multiple files and folders.
+3. When your files are selected, click **Ingest Files**.
+The ingestion process may take some time, depending on the size of your documents.
+4. When ingestion is complete, your documents are available on the **Knowledge** page.
+
+## Knowledge filter system
+
+OpenRAG includes a knowledge filter system for organizing and managing document collections, such as limiting a search to specific knowledge sources.
+
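+In OpenSearch terms, such a filter can be expressed as a filter clause on document metadata, as in the sketch below. This is a hypothetical illustration: the `filename` metadata field and the `client` object (from the earlier index snippet) are assumptions, not OpenRAG's confirmed schema.
+
+```python
+results = client.search(index="documents", body={
+    "size": 5,
+    "query": {
+        "bool": {
+            "must": [{"match": {"text": "quarterly revenue"}}],
+            # Restrict the search to chunks from the selected knowledge sources.
+            "filter": [{"terms": {"filename": ["report-q1.pdf", "report-q2.pdf"]}}],
+        }
+    },
+})
+```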
diff --git a/docs/docs/get-started/quickstart.mdx b/docs/docs/get-started/quickstart.mdx
index fe039859..0e53534f 100644
--- a/docs/docs/get-started/quickstart.mdx
+++ b/docs/docs/get-started/quickstart.mdx
@@ -35,9 +35,9 @@ Get started with OpenRAG by loading your knowledge, swapping out your language m
These events log the agent's request to the tool and the tool's response, so you have direct visibility into your agent's functionality.
If you aren't getting the results you need, you can further tune the knowledge ingestion and agent behavior in the next section.
-## Swap out the language model to modify agent behavior {change-components}
+## Swap out the language model to modify agent behavior {#change-components}
-To modify the knowledge ingestion or Agent behavior, click **Settings**.
+To modify the knowledge ingestion or Agent behavior, click **Settings**.
In this example, you'll try a different LLM to demonstrate how the Agent's response changes.