Compare commits


No commits in common. "main" and "add-docling-serve-env-var" have entirely different histories.

63 changed files with 110 additions and 2539 deletions


@@ -1,155 +0,0 @@
name: Bug Report
description: Report a bug or unexpected behavior in OpenRAG
title: "[Bug]: "
labels: ["bug"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to report a bug! Please fill out the form below to help us understand and fix the issue.
  - type: input
    id: openrag-version
    attributes:
      label: OpenRAG Version
      description: What version of OpenRAG are you using? Run `openrag --version` or check your package version.
      placeholder: "e.g., 0.1.0"
    validations:
      required: true
  - type: dropdown
    id: deployment-method
    attributes:
      label: Deployment Method
      description: How are you running OpenRAG?
      options:
        - uvx (uvx openrag)
        - uv add (installed in project)
        - Docker
        - Podman
        - Local development (make dev)
        - Other
    validations:
      required: true
  - type: input
    id: os
    attributes:
      label: Operating System
      description: What operating system are you using?
      placeholder: "e.g., macOS 14.0, Ubuntu 22.04, Windows 11"
    validations:
      required: true
  - type: input
    id: python-version
    attributes:
      label: Python Version
      description: What Python version are you using? Run `python --version` to check.
      placeholder: "e.g., 3.13.0"
    validations:
      required: false
  - type: dropdown
    id: affected-area
    attributes:
      label: Affected Area
      description: Which area(s) of OpenRAG does this bug affect? Select all that apply.
      multiple: true
      options:
        - Ingestion (document processing, upload, Docling)
        - Retrieval (search, OpenSearch, hybrid search)
        - Chat (chat interface, conversations, AI responses)
        - Knowledge Filters (partitions, document filtering)
        - Settings (configuration, model providers)
        - TUI (Terminal User Interface)
        - Connectors (Google Drive, OneDrive, SharePoint)
        - Frontend (Next.js UI, components)
        - Backend/API (Python/Starlette)
        - Infrastructure (Docker, OpenSearch, Langflow)
        - SDK (Python or TypeScript SDK)
        - Onboarding (setup wizard, initial configuration)
        - Authentication (OIDC, API keys)
        - Other
    validations:
      required: true
  - type: textarea
    id: bug-description
    attributes:
      label: Bug Description
      description: A clear and concise description of what the bug is.
      placeholder: Describe the bug...
    validations:
      required: true
  - type: textarea
    id: steps-to-reproduce
    attributes:
      label: Steps to Reproduce
      description: Steps to reproduce the behavior.
      placeholder: |
        1. Go to '...'
        2. Click on '...'
        3. Scroll down to '...'
        4. See error
    validations:
      required: true
  - type: textarea
    id: expected-behavior
    attributes:
      label: Expected Behavior
      description: A clear and concise description of what you expected to happen.
      placeholder: What should have happened?
    validations:
      required: true
  - type: textarea
    id: actual-behavior
    attributes:
      label: Actual Behavior
      description: A clear and concise description of what actually happened.
      placeholder: What actually happened?
    validations:
      required: true
  - type: textarea
    id: logs
    attributes:
      label: Relevant Logs
      description: |
        Please copy and paste any relevant log output.
        You can get logs using `make logs` for Docker deployments or check the terminal output.
        This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: false
  - type: textarea
    id: screenshots
    attributes:
      label: Screenshots
      description: If applicable, add screenshots to help explain your problem.
    validations:
      required: false
  - type: textarea
    id: additional-context
    attributes:
      label: Additional Context
      description: Add any other context about the problem here (e.g., browser version, specific document types, model provider being used).
    validations:
      required: false
  - type: checkboxes
    id: checklist
    attributes:
      label: Checklist
      description: Please confirm the following before submitting.
      options:
        - label: I have searched existing issues to ensure this bug hasn't been reported before.
          required: true
        - label: I have provided all the requested information.
          required: true


@@ -1,15 +0,0 @@
blank_issues_enabled: false
contact_links:
  - name: OpenRAG Documentation
    url: https://docs.openr.ag/
    about: Learn more about OpenRAG's features, installation, and configuration.
  - name: Troubleshooting Guide
    url: https://docs.openr.ag/support/troubleshoot
    about: Check the troubleshooting guide for common issues and solutions.
  - name: GitHub Discussions
    url: https://github.com/langflow-ai/openrag/discussions
    about: Ask questions and discuss ideas with the community.
  - name: Contributing Guide
    url: https://github.com/langflow-ai/openrag/blob/main/CONTRIBUTING.md
    about: Learn how to contribute to OpenRAG development.


@@ -1,106 +0,0 @@
name: Documentation Issue
description: Report an issue with documentation or request new documentation
title: "[Docs]: "
labels: ["documentation"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for helping improve OpenRAG's documentation! Please provide details about the issue or your request.
  - type: dropdown
    id: issue-type
    attributes:
      label: Issue Type
      description: What type of documentation issue is this?
      options:
        - Incorrect information
        - Missing documentation
        - Outdated content
        - Unclear or confusing
        - Typo or grammatical error
        - Broken links
        - Request for new documentation
        - Other
    validations:
      required: true
  - type: dropdown
    id: doc-area
    attributes:
      label: Documentation Area
      description: Which area of documentation does this relate to?
      multiple: true
      options:
        - Getting Started / Quickstart
        - Installation (uvx, Docker, Podman)
        - Configuration / Settings
        - Ingestion & Document Processing
        - Search & Retrieval
        - Chat Interface
        - Knowledge Filters
        - Connectors (Google Drive, OneDrive, SharePoint)
        - TUI (Terminal User Interface)
        - API Reference
        - SDK Documentation (Python/TypeScript)
        - Troubleshooting
        - Contributing Guide
        - Other
    validations:
      required: true
  - type: input
    id: doc-url
    attributes:
      label: Documentation URL
      description: If applicable, provide a link to the specific documentation page.
      placeholder: "https://docs.openr.ag/..."
    validations:
      required: false
  - type: textarea
    id: current-content
    attributes:
      label: Current Content
      description: If reporting an issue, what does the documentation currently say?
      placeholder: Quote or describe the current documentation content.
    validations:
      required: false
  - type: textarea
    id: issue-description
    attributes:
      label: Issue Description
      description: Describe the problem or what documentation you'd like to see added.
      placeholder: |
        For issues: Explain what's wrong or confusing about the current documentation.
        For requests: Describe what topic you'd like documented and why it would be helpful.
    validations:
      required: true
  - type: textarea
    id: suggested-content
    attributes:
      label: Suggested Content
      description: If you have suggestions for how to fix or improve the documentation, please share them.
      placeholder: Provide suggested text, corrections, or an outline for new documentation.
    validations:
      required: false
  - type: textarea
    id: additional-context
    attributes:
      label: Additional Context
      description: Add any other context, screenshots, or examples here.
    validations:
      required: false
  - type: checkboxes
    id: contribution
    attributes:
      label: Contribution
      description: Would you be interested in contributing to fix this documentation issue?
      options:
        - label: I would be willing to submit a pull request to fix this issue.
          required: false


@@ -1,113 +0,0 @@
name: Feature Request
description: Suggest a new feature or enhancement for OpenRAG
title: "[Feature]: "
labels: ["enhancement"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for suggesting a feature! Please provide as much detail as possible to help us understand your request.
  - type: dropdown
    id: feature-area
    attributes:
      label: Feature Area
      description: Which area(s) of OpenRAG does this feature relate to?
      multiple: true
      options:
        - Ingestion (document processing, upload, Docling)
        - Retrieval (search, OpenSearch, hybrid search)
        - Chat (chat interface, conversations, AI responses)
        - Knowledge Filters (partitions, document filtering)
        - Settings (configuration, model providers)
        - TUI (Terminal User Interface)
        - Connectors (Google Drive, OneDrive, SharePoint)
        - Frontend (Next.js UI, components)
        - Backend/API (Python/Starlette)
        - Infrastructure (Docker, OpenSearch, Langflow)
        - SDK (Python or TypeScript SDK)
        - Onboarding (setup wizard, initial configuration)
        - Authentication (OIDC, API keys)
        - New Area
    validations:
      required: true
  - type: textarea
    id: problem-description
    attributes:
      label: Problem Description
      description: Is your feature request related to a problem? Please describe.
      placeholder: A clear and concise description of what the problem is. E.g., "I'm always frustrated when..."
    validations:
      required: true
  - type: textarea
    id: proposed-solution
    attributes:
      label: Proposed Solution
      description: Describe the solution you'd like to see implemented.
      placeholder: A clear and concise description of what you want to happen.
    validations:
      required: true
  - type: textarea
    id: use-case
    attributes:
      label: Use Case
      description: Describe your use case and how this feature would benefit you or others.
      placeholder: |
        As a [type of user], I want [goal] so that [benefit].
        Example: As a developer, I want to filter documents by custom metadata so that I can organize my knowledge base more effectively.
    validations:
      required: true
  - type: textarea
    id: alternatives
    attributes:
      label: Alternatives Considered
      description: Describe any alternative solutions or features you've considered.
      placeholder: What other approaches have you thought about? Why wouldn't they work as well?
    validations:
      required: false
  - type: dropdown
    id: priority
    attributes:
      label: Priority
      description: How important is this feature to your workflow?
      options:
        - Nice to have
        - Would improve my workflow
        - Critical for my use case
    validations:
      required: true
  - type: textarea
    id: additional-context
    attributes:
      label: Additional Context
      description: Add any other context, mockups, screenshots, or examples about the feature request here.
    validations:
      required: false
  - type: checkboxes
    id: contribution
    attributes:
      label: Contribution
      description: Would you be interested in contributing to this feature?
      options:
        - label: I would be willing to help implement this feature.
          required: false
        - label: I can help test this feature once implemented.
          required: false
  - type: checkboxes
    id: checklist
    attributes:
      label: Checklist
      description: Please confirm the following before submitting.
      options:
        - label: I have searched existing issues and discussions to ensure this feature hasn't been requested before.
          required: true


@@ -69,7 +69,8 @@ jobs:
           tag: langflowai/openrag-backend
           platform: linux/arm64
           arch: arm64
-          runs-on: [self-hosted, Linux, ARM64, langflow-ai-arm64-40gb-ephemeral]
+          #runs-on: [self-hosted, linux, ARM64, langflow-ai-arm64-2]
+          runs-on: RagRunner
           # frontend
         - image: frontend
@@ -83,7 +84,8 @@ jobs:
           tag: langflowai/openrag-frontend
           platform: linux/arm64
           arch: arm64
-          runs-on: [self-hosted, Linux, ARM64, langflow-ai-arm64-40gb-ephemeral]
+          #runs-on: [self-hosted, linux, ARM64, langflow-ai-arm64-2]
+          runs-on: RagRunner
           # langflow
         - image: langflow
@@ -97,7 +99,8 @@ jobs:
           tag: langflowai/openrag-langflow
           platform: linux/arm64
           arch: arm64
-          runs-on: [self-hosted, Linux, ARM64, langflow-ai-arm64-40gb-ephemeral]
+          #runs-on: self-hosted
+          runs-on: RagRunner
           # opensearch
         - image: opensearch
@@ -111,7 +114,9 @@ jobs:
           tag: langflowai/openrag-opensearch
           platform: linux/arm64
           arch: arm64
-          runs-on: [self-hosted, Linux, ARM64, langflow-ai-arm64-40gb-ephemeral]
+          #runs-on: [self-hosted, linux, ARM64, langflow-ai-arm64-2]
+          #runs-on: self-hosted
+          runs-on: RagRunner
     runs-on: ${{ matrix.runs-on }}
@@ -206,7 +211,7 @@ jobs:
         uses: actions/checkout@v4
       - name: Set up Python
-        uses: actions/setup-python@v6
+        uses: actions/setup-python@v5
         with:
           python-version: '3.13'
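The `runs-on` edits in the hunks above target entries of a GitHub Actions build matrix, where each `include` entry carries its own runner label that the job then consumes. As context, a minimal sketch of that pattern — job name, image names, and runner labels here are hypothetical, not the project's real configuration:

```yaml
# Sketch: per-entry runner selection in a build matrix.
# Runner labels below are placeholders.
jobs:
  build:
    strategy:
      matrix:
        include:
          - image: backend
            platform: linux/amd64
            runs-on: ubuntu-latest     # hosted runner for amd64 builds
          - image: backend
            platform: linux/arm64
            runs-on: my-arm64-runner   # self-hosted runner label
    runs-on: ${{ matrix.runs-on }}     # each matrix entry picks its runner
    steps:
      - uses: actions/checkout@v4
```

Because `runs-on` is resolved per matrix entry, swapping a self-hosted label list for a single named runner (as the diff does) only requires editing the entries, not the job definition.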


@@ -45,7 +45,7 @@ jobs:
       - uses: actions/checkout@v4
       - name: Setup Python
-        uses: actions/setup-python@v6
+        uses: actions/setup-python@v5
         with:
           python-version: '3.11'


@@ -23,7 +23,7 @@ jobs:
       - name: Setup Node.js
         uses: actions/setup-node@v4
         with:
-          node-version: 20.20.0
+          node-version: 20
           cache: npm
           cache-dependency-path: ./docs/package-lock.json


@@ -16,7 +16,7 @@ jobs:
       - uses: actions/checkout@v4
       - uses: actions/setup-node@v4
         with:
-          node-version: 20.20.0
+          node-version: 20
           cache: npm
           cache-dependency-path: ./docs/package-lock.json


@@ -21,7 +21,7 @@ jobs:
         uses: actions/checkout@v4
       - name: Set up Python
-        uses: actions/setup-python@v6
+        uses: actions/setup-python@v5
         with:
           python-version: '3.12'


@@ -20,7 +20,7 @@ jobs:
           token: ${{ secrets.GITHUB_TOKEN }}
       - name: Set up Python
-        uses: actions/setup-python@v6
+        uses: actions/setup-python@v5
         with:
           python-version: '3.13'


@@ -1,4 +1,4 @@
-FROM node:20.20.0-slim
+FROM node:18-slim
 # Set working directory
 WORKDIR /app


@@ -3,10 +3,5 @@ services:
     environment:
       - NVIDIA_DRIVER_CAPABILITIES=compute,utility
       - NVIDIA_VISIBLE_DEVICES=all
-    deploy:
-      resources:
-        reservations:
-          devices:
-            - driver: nvidia
-              count: all
-              capabilities: [gpu]
+    gpus: all
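The hunk above swaps Compose's long-form GPU device reservation for the short-form `gpus` key. For NVIDIA GPUs with `count: all`, the two forms request the same thing; a side-by-side sketch (the service name is hypothetical):

```yaml
# Two equivalent ways to give a Compose service access to all NVIDIA GPUs.
services:
  gpu-service:          # hypothetical service name
    # Short form, supported by newer Docker Compose releases:
    gpus: all
    # Long form, equivalent in effect:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]
```

The short form is more concise but requires a Compose version that understands the `gpus` key; the long form works with the `deploy` section used by older Compose and Swarm-style files.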


@@ -29,7 +29,7 @@ services:
       - "9200:9200"
       - "9600:9600"
     volumes:
-      - ${OPENSEARCH_DATA_PATH:-./opensearch-data}:/usr/share/opensearch/data:U,z
+      - ${OPENSEARCH_DATA_PATH:-./opensearch-data}:/usr/share/opensearch/data:Z
   dashboards:
     image: opensearchproject/opensearch-dashboards:3.0.0
@@ -97,7 +99,8 @@ services:
     environment:
       - OPENRAG_BACKEND_HOST=openrag-backend
     ports:
-      - "3003:3003"
+      - "3000:3000"
   langflow:
     volumes:
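The volume-mount change in this file swaps the `:U,z` mount options for `:Z`. These suffixes control SELinux labeling (and, for `U`, ownership) of bind mounts: `z` applies a shared relabel, `Z` a private one, and `U` is a Podman extension that chowns the mount to the container user. A sketch with placeholder paths and a hypothetical service name:

```yaml
# Bind-mount option cheat sheet for Podman/Docker on SELinux hosts.
# Host paths and the service name are placeholders.
services:
  storage-example:
    volumes:
      - ./data:/srv/data:z      # shared SELinux relabel (multiple containers)
      - ./logs:/srv/logs:Z      # private relabel (this container only)
      - ./state:/srv/state:U,z  # Podman only: chown to the container user,
                                # plus a shared relabel
```

So the diff trades the chown-plus-shared-label combination for a private label on the OpenSearch data directory.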


@@ -1 +0,0 @@
In no-auth mode, all documents are attributed to **Anonymous User** because there is no distinct document ownership or unique JWTs. For more control over document ownership and visibility, use OAuth mode. For more information, see [OpenSearch authentication and document access](/knowledge#auth).


@@ -1,5 +0,0 @@
GPU acceleration isn't required for most use cases.
OpenRAG's CPU-only deployment doesn't prevent you from using GPU acceleration in external services, such as Ollama servers.
GPU acceleration is required only for specific use cases, typically involving customization of the ingestion flows or ingestion logic.
For example, writing alternate ingest logic in OpenRAG that uses GPUs directly in the container, or customizing the ingestion flows to use Langflow's Docling component with GPU acceleration instead of OpenRAG's Docling Serve service.


@@ -2,7 +2,7 @@ import Icon from "@site/src/components/icon/icon";
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
-1. Open the **OpenRAG OpenSearch Agent** flow in the Langflow visual editor: Click <Icon name="Settings2" aria-hidden="true"/> **Settings**, click **Edit in Langflow**, and then click **Proceed**.
+1. Open the **OpenRAG OpenSearch Agent** flow in the Langflow visual editor: From the **Chat** window, click <Icon name="Settings2" aria-hidden="true"/> **Settings**, click **Edit in Langflow**, and then click **Proceed**.
 2. Optional: If you don't want to use the Langflow API key that is generated automatically when you install OpenRAG, you can create a [Langflow API key](https://docs.langflow.org/api-keys-and-authentication).
 This key doesn't grant access to OpenRAG; it is only for authenticating with the Langflow API.

View file

@@ -15,6 +15,4 @@ If a provider offers only one type, you must select two providers.
 <PartialOllamaModels />
 :::
-* Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine.
-If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment that is suitable for most use cases.
-The default CPU-only deployment doesn't prevent you from using GPU acceleration in external services, such as Ollama servers.
+* Optional: Install GPU support with an NVIDIA GPU, [CUDA](https://docs.nvidia.com/cuda/) support, and compatible NVIDIA drivers on the OpenRAG host machine. If you don't have GPU capabilities, OpenRAG provides an alternate CPU-only deployment.


@@ -1,5 +1,5 @@
 import Icon from "@site/src/components/icon/icon";
-When using the OpenRAG **Chat**, click <Icon name="Plus" aria-hidden="true"/> **Add** in the chat input field to upload a file to the current chat session.
+When using the OpenRAG **Chat**, click <Icon name="Plus" aria-hidden="true"/> in the chat input field to upload a file to the current chat session.
 Files added this way are processed and made available to the agent for the current conversation only.
 These files aren't stored in the knowledge base permanently.


@@ -20,26 +20,15 @@ You can customize these flows and create your own flows using OpenRAG's embedded
All OpenRAG flows are designed to be modular, performant, and provider-agnostic.
To view and modify a flow in OpenRAG, click <Icon name="Settings2" aria-hidden="true"/> **Settings**.
From here, you can manage OAuth connectors, model providers, and common parameters for the **Agent** and **Knowledge Ingestion** flows.
To modify a flow in OpenRAG, click <Icon name="Settings2" aria-hidden="true"/> **Settings**.
From here, you can quickly edit commonly used parameters, such as the **Language model** and **Agent Instructions**.
To further explore and edit the flow, click **Edit in Langflow** to launch the embedded [Langflow visual editor](https://docs.langflow.org/concepts-overview) where you can fully [customize the flow](https://docs.langflow.org/concepts-flows) to suit your use case.
To further explore and edit flows, click **Edit in Langflow** to launch the embedded [Langflow visual editor](https://docs.langflow.org/concepts-overview) where you can fully [customize the flow](https://docs.langflow.org/concepts-flows) to suit your use case.
For example, to view and edit the built-in **Chat** flow (the **OpenRAG OpenSearch Agent** flow), do the following:
:::tip
After you click **Edit in Langflow**, you can access and edit all of OpenRAG's built-in flows from the Langflow editor's [**Projects** page](https://docs.langflow.org/concepts-flows#projects).
1. In OpenRAG, click <Icon name="MessageSquare" aria-hidden="true"/> **Chat**.
If you edit any flows other than the **Agent** or **Knowledge Ingestion** flows, it is recommended that you [export the flows](https://docs.langflow.org/concepts-flows-import) before editing so you can revert them to their original state if needed.
:::
For example, the following steps explain how to edit the built-in **Agent** flow, which is the **OpenRAG OpenSearch Agent** flow used for the OpenRAG <Icon name="MessageSquare" aria-hidden="true"/> **Chat**:
1. In OpenRAG, click <Icon name="Settings2" aria-hidden="true"/> **Settings**, and then find the **Agent** section.
2. If you only need to edit the language model or agent instructions, edit those fields directly on the **Settings** page.
Language model changes are saved automatically.
To apply new instructions, click **Save Agent Instructions**.
3. To edit all flow settings and components with full customization capabilities, click **Edit in Langflow** to launch the Langflow visual editor in a new browser tab.
2. Click <Icon name="Settings2" aria-hidden="true"/> **Settings**, and then click **Edit in Langflow** to launch the Langflow visual editor in a new browser window.
If prompted to acknowledge that you are entering Langflow, click **Proceed**.
@@ -47,21 +36,19 @@ To apply new instructions, click **Save Agent Instructions**.
![OpenRAG OpenSearch Agent flow](/img/opensearch-agent-flow.png)
4. Modify the flow as desired, and then press <kbd>Command</kbd>+<kbd>S</kbd> (<kbd>Ctrl</kbd>+<kbd>S</kbd>) to save your changes.
3. Modify the flow as desired, and then press <kbd>Command</kbd>+<kbd>S</kbd> (<kbd>Ctrl</kbd>+<kbd>S</kbd>) to save your changes.
You can close the Langflow browser tab, or leave it open if you want to continue experimenting with the flow editor.
You can close the Langflow browser window, or leave it open if you want to continue experimenting with the flow editor.
5. After you modify any **Agent** flow settings, go to the OpenRAG <Icon name="MessageSquare" aria-hidden="true"/> **Chat**, and then click <Icon name="Plus" aria-hidden="true"/> **Start new conversation** in the **Conversations** list.
This ensures that the chat doesn't persist any context from the previous conversation with the original flow settings.
:::tip
If you modify the built-in **Chat** flow, make sure you click <Icon name="Plus" aria-hidden="true"/> in the **Conversations** tab to start a new conversation. This ensures that the chat doesn't persist any context from the previous conversation with the original flow settings.
:::
### Revert a built-in flow to its original configuration {#revert-a-built-in-flow-to-its-original-configuration}
After you edit the **Agent** or **Knowledge Ingestion** built-in flows, you can click **Restore flow** on the **Settings** page to revert either flow to its original state when you first installed OpenRAG.
After you edit a built-in flow, you can click **Restore flow** on the **Settings** page to revert the flow to its original state when you first installed OpenRAG.
This is a destructive action that discards all customizations to the flow.
This option isn't available for other built-in flows such as the **Nudges** flow.
To restore these flows to their original state, you must reimport the flow from a backup (if you exported one before editing), or [reset](/manage-services#reset-containers) or [reinstall](/reinstall) OpenRAG.
## Build custom flows and use other Langflow functionality
In addition to OpenRAG's built-in flows, all Langflow features are available through OpenRAG, including the ability to [create your own flows](https://docs.langflow.org/concepts-flows) and popular extensibility features such as the following:


@@ -11,7 +11,7 @@ import PartialTempKnowledge from '@site/docs/_partial-temp-knowledge.mdx';
After you [upload documents to your knowledge base](/ingestion), you can use the OpenRAG <Icon name="MessageSquare" aria-hidden="true"/> **Chat** feature to interact with your knowledge through natural language queries.
The OpenRAG <Icon name="MessageSquare" aria-hidden="true"/> **Chat** uses an LLM-powered agent to understand your queries, retrieve relevant information from your knowledge base, and generate context-aware responses.
The OpenRAG **Chat** uses an LLM-powered agent to understand your queries, retrieve relevant information from your knowledge base, and generate context-aware responses.
The agent can also fetch information from URLs and new documents that you provide during the chat session.
To limit the knowledge available to the agent, use [filters](/knowledge-filters).
@@ -24,7 +24,7 @@ Try chatting, uploading documents, and modifying chat settings in the [quickstar
## OpenRAG OpenSearch Agent flow {#flow}
When you use the OpenRAG <Icon name="MessageSquare" aria-hidden="true"/> **Chat**, the **OpenRAG OpenSearch Agent** flow runs in the background to retrieve relevant information from your knowledge base and generate a response.
When you use the OpenRAG **Chat**, the **OpenRAG OpenSearch Agent** flow runs in the background to retrieve relevant information from your knowledge base and generate a response.
If you [inspect the flow in Langflow](/agents#inspect-and-modify-flows), you'll see that it is composed of eight components that work together to ingest chat messages, retrieve relevant information from your knowledge base, and then generate responses.
When you inspect this flow, you can edit the components to customize the agent's behavior.
@@ -32,7 +32,7 @@ When you inspect this flow, you can edit the components to customize the agent's
![OpenRAG Open Search Agent Flow](/img/opensearch-agent-flow.png)
* [**Chat Input** component](https://docs.langflow.org/chat-input-and-output#chat-input): This component starts the flow when it receives a chat message. It is connected to the **Agent** component's **Input** port.
When you use the OpenRAG <Icon name="MessageSquare" aria-hidden="true"/> **Chat**, your chat messages are passed to the **Chat Input** component, which then sends them to the **Agent** component for processing.
When you use the OpenRAG **Chat**, your chat messages are passed to the **Chat Input** component, which then sends them to the **Agent** component for processing.
* [**Agent** component](https://docs.langflow.org/components-agents): This component orchestrates the entire flow by processing chat messages, searching the knowledge base, and organizing the retrieved information into a cohesive response.
The agent's general behavior is defined by the prompt in the **Agent Instructions** field and the model connected to the **Language Model** port.
@@ -73,18 +73,12 @@ If no knowledge filter is set, then the `OPENRAG-QUERY-FILTER` variable is empty
## Nudges {#nudges}
When you use the OpenRAG <Icon name="MessageSquare" aria-hidden="true"/> **Chat**, the **OpenRAG OpenSearch Nudges** flow runs in the background to pull additional context from your knowledge base and chat history.
When you use the OpenRAG **Chat**, the **OpenRAG OpenSearch Nudges** flow runs in the background to pull additional context from your knowledge base and chat history.
Nudges appear as prompts in the chat, and they are based on the contents of your OpenRAG OpenSearch knowledge base.
Click a nudge to accept it and start a chat based on the nudge.
Nudges appear as prompts in the chat.
Click a nudge to accept it and provide the nudge's context to the OpenRAG **Chat** agent (the **OpenRAG OpenSearch Agent** flow).
Like OpenRAG's other built-in flows, you can [inspect the flow in Langflow](/agents#inspect-and-modify-flows), and you can customize it if you want to change the nudge behavior.
However, this flow is specifically designed to work with the OpenRAG chat and knowledge base.
Major changes to this flow might break the nudge functionality or produce irrelevant nudges.
The **Nudges** flow consists of **Embedding model**, **Language model**, **OpenSearch**, **Input/Output**, and other components that browse your knowledge base, identify key themes and possible insights, and then produce prompts based on the findings.
For example, if your knowledge base contains documents about cybersecurity, possible nudges might include `Explain zero trust architecture principles` or `How to identify a social engineering attack`.
## Upload documents to the chat


@@ -171,37 +171,26 @@ The agent can call this component to fetch web content from a given URL, and the
Like all OpenRAG flows, you can [inspect the flow in Langflow](/agents#inspect-and-modify-flows), and you can customize it.
For more information about MCP in Langflow, see the Langflow documentation on [MCP clients](https://docs.langflow.org/mcp-client) and [MCP servers](https://docs.langflow.org/mcp-tutorial).
## Monitor ingestion {#monitor-ingestion}
## Monitor ingestion
Depending on the amount of data to ingest, document ingestion can take a few seconds, minutes, or longer.
For this reason, document ingestion tasks run in the background.
Document ingestion tasks run in the background.
In the OpenRAG user interface, a badge is shown on <Icon name="Bell" aria-hidden="true"/> **Tasks** when OpenRAG tasks are active.
Click <Icon name="Bell" aria-hidden="true"/> **Tasks** to inspect and cancel tasks.
Tasks are separated into multiple sections:
Click <Icon name="Bell" aria-hidden="true"/> **Tasks** to inspect and cancel tasks:
* The **Active Tasks** section includes all tasks that are **Pending**, **Running**, or **Processing**:
* **Active Tasks**: All tasks that are **Pending**, **Running**, or **Processing**.
For each active task, depending on its state, you can find the task ID, start time, duration, number of files processed, and the total files enqueued for processing.
* **Pending**: The task is queued and waiting to start.
* **Running**: The task is actively processing files.
* **Processing**: The task is performing ingestion operations.
* **Pending**: The task is queued and waiting to start.
To stop an active task, click <Icon name="X" aria-hidden="true"/> **Cancel**. Canceling a task stops processing immediately and marks the ingestion as failed.
* **Running**: The task is actively processing files.
* The **Recent Tasks** section lists recently finished tasks.
* **Processing**: The task is performing ingestion operations.
:::warning
**Completed** doesn't mean success.
* **Failed**: Something went wrong during ingestion, or the task was manually canceled.
For troubleshooting advice, see [Troubleshoot ingestion](#troubleshoot-ingestion).
A completed task can report successful ingestions, failed ingestions, or both, depending on the number of files processed.
:::
Check the **Success** and **Failed** counts for each completed task to determine the overall success rate.
**Failed** means something went wrong during ingestion, or the task was manually canceled.
For more information, see [Troubleshoot ingestion](#troubleshoot-ingestion).
For each task, depending on its state, you can find the task ID, start time, duration, number of files processed successfully, number of files that failed, and the number of files enqueued for processing.
To stop an active task, click <Icon name="X" aria-hidden="true"/> **Cancel**. Canceling a task stops processing immediately and marks the task as **Failed**.
### Ingestion performance expectations
@@ -258,9 +247,9 @@ The following issues can occur during document ingestion.
If an ingestion task fails, do the following:
* Make sure you uploaded only supported file types.
* Split very large files into smaller files.
* Remove unusual or complex embedded content, such as videos or animations. Although Docling can replace some non-text content with placeholders during ingestion, some embedded content might cause errors.
* Make sure you are uploading supported file types.
* Split excessively large files into smaller files before uploading.
* Remove unusual embedded content, such as videos or animations, before uploading. Although Docling can replace some non-text content with placeholders during ingestion, some embedded content might cause errors.
* Make sure your Podman/Docker VM has sufficient memory for the ingestion tasks.
The minimum recommendation is 8 GB of RAM.
If you regularly upload large files, more RAM is recommended.
@@ -272,17 +261,17 @@ For more information, see [Memory issue with Podman on macOS](/support/troublesh
If the OpenRAG **Chat** doesn't seem to use your documents correctly, [browse your knowledge base](/knowledge#browse-knowledge) to confirm that the documents are uploaded in full, and the chunks are correct.
If the documents are present and well-formed, check your [knowledge filters](/knowledge-filters).
If you applied a filter to the chat, make sure the expected documents aren't excluded by the filter settings.
You can test this by applying the filter when you [browse the knowledge base](/knowledge#browse-knowledge).
If the filter excludes any documents, the agent cannot access those documents.
Be aware that some settings create dynamic filters that don't always produce the same results, such as a **Search query** combined with a low **Response limit**.
If a global filter is applied, make sure the expected documents are included in the global filter.
If the global filter excludes any documents, the agent cannot access those documents unless you apply a chat-level filter or change the global filter.
If the document chunks have missing, incorrect, or unexpected text, you must [delete the documents](/knowledge#delete-knowledge) from your knowledge base, modify the [ingestion parameters](/knowledge#knowledge-ingestion-settings) or the documents themselves, and then reingest the documents.
If text is missing or incorrectly processed, you need to reupload the documents after modifying the ingestion parameters or the documents themselves.
For example:
* Break combined documents into separate files for better metadata context.
* Make sure scanned documents are legible enough for extraction, and enable the **OCR** option. Poorly scanned documents might require additional preparation or rescanning before ingestion.
* Adjust the **Chunk size** and **Chunk overlap** settings to better suit your documents. Larger chunks provide more context but can include irrelevant information, while smaller chunks yield more precise semantic search but can lack context.
* Adjust the **Chunk Size** and **Chunk Overlap** settings to better suit your documents. Larger chunks provide more context but can include irrelevant information, while smaller chunks yield more precise semantic search but can lack context.
For more information about modifying ingestion parameters and flows, see [Knowledge ingestion settings](/knowledge#knowledge-ingestion-settings).
## See also


@ -4,7 +4,6 @@ slug: /knowledge-filters
---
import Icon from "@site/src/components/icon/icon";
import PartialAnonymousUserOwner from '@site/docs/_partial-anonymous-user-owner.mdx';
OpenRAG's knowledge filters help you organize and manage your [knowledge base](/knowledge) by creating pre-defined views of your documents.
@ -27,61 +26,36 @@ After uploading your own documents, it is recommended that you create your own f
To create a knowledge filter, do the following:
1. Click <Icon name="Library" aria-hidden="true"/> **Knowledge**, and then click <Icon name="Plus" aria-hidden="true"/> **Knowledge Filters**.
1. Click **Knowledge**, and then click <Icon name="Plus" aria-hidden="true"/> **Knowledge Filters**.
2. Enter a **Name**.
2. Enter a **Name** and **Description**, and then click **Create Filter**.
3. Optional: Click the filter icon next to the filter name to select a different icon and color for the filter.
This is purely cosmetic, but it can help you visually distinguish different sets of filters, such as different projects or sources.
By default, new filters match all documents in your knowledge base.
Modify the filter to customize it.
4. Optional: Enter a **Description**.
5. Customize the filter settings.
By default, filters match all documents in your knowledge base.
Use the filter settings to narrow the scope of documents that the filter captures:
* **Search Query**: Enter a natural language text string for semantic search.
When you apply a filter that has a **Search Query**, only documents matching the search query are included.
It is recommended that you also use the **Score Threshold** setting to avoid returning irrelevant documents.
* **Data Sources**: Select specific files and folders to include in the filter.
This is useful if you want to create a filter for a specific project or topic and you know the specific documents you want to include.
Similarly, if you upload a folder of documents or enable an OAuth connector, you might want to create a filter that only includes the documents from that source.
3. To modify the filter, click <Icon name="Library" aria-hidden="true"/> **Knowledge**, and then click your new filter. You can edit the following settings:
* **Search Query**: Enter text for semantic search, such as `financial reports from Q4`.
* **Data Sources**: Select specific data sources or folders to include.
* **Document Types**: Filter by file type.
* **Owners**: Filter by the user that uploaded the documents.
* **Connectors**: Filter by [upload source](/ingestion), such as the local file system or a Google Drive OAuth connector.
* **Response Limit**: Set the maximum number of results to return from the knowledge base. The default is `10`.
* **Score Threshold**: Set the minimum relevance score for similarity search. The default score is `0`.
<PartialAnonymousUserOwner />
* **Connectors**: Filter by [upload source](/ingestion), such as the local file system or an OAuth connector.
* **Response Limit**: Set the maximum number of results to return from the knowledge base. The default is `10`, which means the filter returns only the top 10 most relevant documents.
* **Score Threshold**: Set the minimum relevance score for similarity search. The default score is `0`. A threshold is recommended to avoid returning irrelevant documents.
6. Click **Create Filter**.
## Edit a filter
To modify a filter, click <Icon name="Library" aria-hidden="true"/> **Knowledge**, and then click the filter you want to edit in the **Knowledge Filters** list.
On the filter settings pane, edit the filter as desired, and then click **Update Filter**.
4. To save your changes, click **Update Filter**.
## Apply a filter {#apply-a-filter}
In the OpenRAG <Icon name="MessageSquare" aria-hidden="true"/> **Chat**, click <Icon name="Funnel" aria-hidden="true"/> **Filter**, and then select the filter to apply.
Chat filters apply to one chat session only.
* **Apply a global filter**: Click <Icon name="Library" aria-hidden="true"/> **Knowledge**, and then enable the toggle next to your preferred filter. Only one filter can be the global filter. The global filter applies to all chat sessions.
You can also use filters when [browsing your knowledge base](/knowledge#browse-knowledge).
This is a helpful way to test filters and manage knowledge bases that have many documents.
* **Apply a chat filter**: In the <Icon name="MessageSquare" aria-hidden="true"/> **Chat** window, click <Icon name="Funnel" aria-hidden="true"/> **Filter**, and then select the filter to apply.
Chat filters apply to one chat session only.
## Delete a filter
1. Click <Icon name="Library" aria-hidden="true"/> **Knowledge**.
2. In the **Knowledge Filters** list, click the filter that you want to delete.
2. Click the filter that you want to delete.
3. In the filter settings pane, click **Delete Filter**.
3. Click **Delete Filter**.


@ -5,7 +5,6 @@ slug: /knowledge
import Icon from "@site/src/components/icon/icon";
import PartialOpenSearchAuthMode from '@site/docs/_partial-opensearch-auth-mode.mdx';
import PartialAnonymousUserOwner from '@site/docs/_partial-anonymous-user-owner.mdx';
OpenRAG includes a built-in [OpenSearch](https://docs.opensearch.org/latest/) instance that serves as the underlying datastore for your _knowledge_ (documents).
This specialized database is used to store and retrieve your documents and the associated vector data (embeddings).
@ -24,61 +23,17 @@ You can configure how documents are ingested and how the **Chat** interacts with
The **Knowledge** page lists the documents OpenRAG has ingested into your OpenSearch database, specifically in an [OpenSearch index](https://docs.opensearch.org/latest/getting-started/intro/#index) named `documents`.
To explore the raw contents of your knowledge base, click <Icon name="Library" aria-hidden="true"/> **Knowledge** to get a list of all ingested documents.
### Inspect knowledge
For each document, the **Knowledge** page provides the following information:
* **Source**: Name of the ingested content, such as the file name.
* **Size**
* **Type**
* **Owner**: User that uploaded the document.
<PartialAnonymousUserOwner />
* **Chunks**: Number of chunks created by splitting the document during ingestion.
Click a document to view the individual chunks and technical details related to chunking.
If the chunks seem incorrect or incomplete, see [Troubleshoot ingestion](/ingestion#troubleshoot-ingestion).
* **Avg score**: Average similarity score across all chunks of the document.
If you [search the knowledge base](#search-knowledge), the **Avg score** column shows the similarity score for your search query or filter.
* **Embedding model** and **Dimensions**: The embedding model and dimensions used to embed the chunks.
* **Status**: Status of document ingestion.
If ingestion is complete and successful, then the status is **Active**.
For more information, see [Monitor ingestion](/ingestion#monitor-ingestion).
### Search knowledge {#search-knowledge}
You can use the search field on the **Knowledge** page to find documents using semantic search and knowledge filters:
To search all documents, enter a search string in the search field, and then press <kbd>Enter</kbd>.
To apply a [knowledge filter](/knowledge-filters), select the filter from the **Knowledge Filters** list.
The filter settings pane opens, and the filter appears in the search field.
To remove the filter, close the filter settings pane or clear the filter from the search field.
You can use the filter alone or in combination with a search string.
If a knowledge filter has a **Search Query**, that query is applied in addition to any text string you enter in the search field.
Only one filter can be applied at a time.
Click a document to view the chunks produced from splitting the document during ingestion.
### Default documents {#default-documents}
By default, OpenRAG includes some initial documents about OpenRAG.
These documents are ingested automatically during the [application onboarding process](/install#application-onboarding).
You can use these documents to ask OpenRAG about itself, or to test the [**Chat**](/chat) feature before uploading your own documents.
You can use these documents to ask OpenRAG about itself, and to test the [**Chat**](/chat) feature before uploading your own documents.
If you [delete these documents](#delete-knowledge), then you won't be able to ask OpenRAG about itself and its own functionality.
If you [delete](#delete-knowledge) these documents, you won't be able to ask OpenRAG about itself and its own functionality.
It is recommended that you keep these documents, and use [filters](/knowledge-filters) to separate them from your other knowledge.
An **OpenRAG Docs** filter is created automatically for these documents.
## OpenSearch authentication and document access {#auth}
@ -94,7 +49,7 @@ An [OpenSearch index](https://docs.opensearch.org/latest/getting-started/intro/#
By default, all documents you upload to your OpenRAG knowledge base are stored in an index named `documents`.
It is possible to change the index name by [editing the ingestion flow](/agents#inspect-and-modify-flows).
However, this can impact dependent processes, such as the [filters](/knowledge-filters) and [**Chat**](/chat), that reference the `documents` index by default.
However, this can impact dependent processes, such as the [filters](/knowledge-filters) and [**Chat**](/chat) flow, that reference the `documents` index by default.
Make sure you edit other flows as needed to ensure all processes use the same index name.
If you encounter errors or unexpected behavior after changing the index name, you can [revert the flows to their original configuration](/agents#revert-a-built-in-flow-to-its-original-configuration), or [delete knowledge](/knowledge#delete-knowledge) to clear the existing documents from your knowledge base.
@ -126,10 +81,8 @@ The default embedding dimension is `1536`, and the default model is the OpenAI `
If you want to use an unsupported model, you must manually set the model in your [OpenRAG `.env` file](/reference/configuration).
If you use an unsupported embedding model that doesn't have defined dimensions in `settings.py`, then OpenRAG falls back to the default dimensions (1536) and logs a warning. OpenRAG's OpenSearch instance and flows continue to work, but [similarity search](https://www.ibm.com/think/topics/vector-search) quality can be affected if the actual model dimensions aren't 1536.
To change the embedding model after onboarding, modify the embedding model configuration on the OpenRAG **Settings** page or in your [OpenRAG `.env` file](/reference/configuration).
This ensures that all relevant [OpenRAG flows](/agents) are updated to use the new embedding model configuration.
If you edit these settings in the `.env` file, you must [stop and restart the OpenRAG containers](/manage-services#stop-and-start-containers) to apply the changes.
To change the embedding model after onboarding, it is recommended that you modify the embedding model setting in the OpenRAG **Settings** page or in your [OpenRAG `.env` file](/reference/configuration).
This will automatically update all relevant [OpenRAG flows](/agents) to use the new embedding model configuration.
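For example, applying `.env` changes might look like the following; the exact commands depend on your deployment (Docker shown, assuming the repository's default `docker-compose.yml`):

```shell
# Stop the OpenRAG containers, then start them again so the
# updated .env values are loaded into the container environment.
docker compose down
docker compose up -d
```

With Podman, substitute `podman compose` for `docker compose`.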
### Set Docling parameters
@ -137,39 +90,32 @@ OpenRAG uses [Docling](https://docling-project.github.io/docling/) for document
When you [upload documents](/ingestion), Docling processes the files, splits them into chunks, and stores them as separate, structured documents in your OpenSearch knowledge base.
#### Select a Docling implementation {#select-a-docling-implementation}
You can use either Docling Serve or OpenRAG's built-in Docling ingestion pipeline to process documents.
* **Docling Serve ingestion**: By default, OpenRAG uses [Docling Serve](https://github.com/docling-project/docling-serve).
It starts a local `docling serve` process, and then runs Docling ingestion through the Docling Serve API.
This means that OpenRAG starts a `docling serve` process on your local machine and runs Docling ingestion through an API service.
To use a remote `docling serve` instance or your own local instance, set `DOCLING_SERVE_URL=http://**HOST_IP**:5001` in your [OpenRAG `.env` file](/reference/configuration#document-processing-settings).
The service must run on port 5001.
* **Built-in Docling ingestion**: If you want to use OpenRAG's built-in Docling ingestion pipeline instead of the separate Docling Serve service, set `DISABLE_INGEST_WITH_LANGFLOW=true` in your [OpenRAG environment variables](/reference/configuration#document-processing-settings).
* **Built-in Docling ingestion**: If you want to use OpenRAG's built-in Docling ingestion pipeline instead of the separate Docling Serve service, set `DISABLE_INGEST_WITH_LANGFLOW=true` in your [OpenRAG `.env` file](/reference/configuration#document-processing-settings).
The built-in pipeline uses the Docling processor directly instead of through the Docling Serve API.
For the underlying functionality, see [`processors.py`](https://github.com/langflow-ai/openrag/blob/main/src/models/processors.py#L58) in the OpenRAG repository.
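As a sketch, the two implementations correspond to the following `.env` settings; the values are illustrative, and `HOST_IP` is a placeholder for your actual host address:

```shell
# Default: run Docling ingestion through a Docling Serve instance on port 5001.
DOCLING_SERVE_URL=http://HOST_IP:5001

# Alternative: bypass Docling Serve and use the built-in ingestion pipeline.
DISABLE_INGEST_WITH_LANGFLOW=true
```

Set one or the other; enabling the built-in pipeline means the `DOCLING_SERVE_URL` value is not used.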
#### Configure Docling ingestion settings
To modify the Docling document processing and embedding parameters, click <Icon name="Settings2" aria-hidden="true"/> **Settings** in OpenRAG, and then find the **Knowledge Ingest** section.
To modify the Docling ingestion and embedding parameters, click <Icon name="Settings2" aria-hidden="true"/> **Settings** in the OpenRAG user interface.
:::tip
The TUI warns you if `docling serve` isn't running.
OpenRAG warns you if `docling serve` isn't running.
For information about starting and stopping OpenRAG native services, like Docling, see [Manage OpenRAG services](/manage-services).
:::
You can edit the following parameters:
* **Embedding model**: Select the model to use to generate vector embeddings for your documents.
This is initially set during installation.
The recommended way to change this setting is in the OpenRAG <Icon name="Settings2" aria-hidden="true"/> **Settings** or your [OpenRAG `.env` file](/reference/configuration).
This ensures that all relevant [OpenRAG flows](/agents) are updated to use the new embedding model configuration.
The recommended way to change this setting is in the OpenRAG **Settings** or your [OpenRAG `.env` file](/reference/configuration).
This will automatically update all relevant [OpenRAG flows](/agents) to use the new embedding model configuration.
If you uploaded documents prior to changing the embedding model, you can [create filters](/knowledge-filters) to separate documents embedded with different models, or you can reupload all documents to regenerate embeddings with the new model.
If you want to use multiple embedding models, similarity search (in the **Chat**) can take longer as it searches each model's embeddings separately.
* **Chunk size**: Set the number of characters for each text chunk when breaking down a file.
Larger chunks yield more context per chunk, but can include irrelevant information. Smaller chunks yield more precise semantic search, but can lack context.
@ -179,7 +125,7 @@ The default value is 1000 characters, which is usually a good balance between co
Use larger overlap values for documents where context is most important. Use smaller overlap values for simpler documents or when optimization is most important.
The default value is 200 characters, which represents an overlap of 20 percent if the **Chunk size** is 1000. This is suitable for general use. For faster processing, decrease the overlap to approximately 10 percent. For more complex documents where you need to preserve context across chunks, increase it to approximately 40 percent.
* **Table structure**: Enables Docling's [`DocumentConverter`](https://docling-project.github.io/docling/reference/document_converter/) tool for parsing tables. Instead of treating tables as plain text, tables are output as structured table data with preserved relationships and metadata. This option is enabled by default.
* **Table Structure**: Enables Docling's [`DocumentConverter`](https://docling-project.github.io/docling/reference/document_converter/) tool for parsing tables. Instead of treating tables as plain text, tables are output as structured table data with preserved relationships and metadata. This option is enabled by default.
* **OCR**: Enables Optical Character Recognition (OCR) processing when extracting text from images and ingesting scanned documents. This setting is best suited for processing text-based documents faster with Docling's [`DocumentConverter`](https://docling-project.github.io/docling/reference/document_converter/). Images are ignored and not processed.
@ -201,12 +147,7 @@ To change this location, modify the **Documents Paths** variable in either the [
This is a destructive operation that cannot be undone.
:::
To delete documents from your knowledge base, click <Icon name="Library" aria-hidden="true"/> **Knowledge**, use the checkboxes to select one or more documents, and then click **Delete**.
If you select the checkbox at the top of the list, all documents are selected and your entire knowledge base will be deleted.
To delete an individual document, you can also click <Icon name="Ellipsis" aria-hidden="true"/> **More** next to that document, and then select **Delete**.
To completely clear your entire knowledge base and OpenSearch index, [reset your OpenRAG containers](/manage-services#reset-containers) or [reinstall OpenRAG](/reinstall).
To clear your entire knowledge base, [reset your OpenRAG containers](/manage-services#reset-containers) or [reinstall OpenRAG](/reinstall).
## See also


@ -12,7 +12,6 @@ import PartialPrereqWindows from '@site/docs/_partial-prereq-windows.mdx';
import PartialPrereqPython from '@site/docs/_partial-prereq-python.mdx';
import PartialInstallNextSteps from '@site/docs/_partial-install-next-steps.mdx';
import PartialOllamaModels from '@site/docs/_partial-ollama-models.mdx';
import PartialGpuModeTip from '@site/docs/_partial-gpu-mode-tip.mdx';
To manage your own OpenRAG services, deploy OpenRAG with Docker or Podman.
@ -88,7 +87,7 @@ The following variables are required or recommended:
## Start services
1. To use the default Docling Serve implementation, start `docling serve` on port 5001 on the host machine using the included script:
1. Start `docling serve` on port 5001 on the host machine:
```bash
uv run python scripts/docling_ctl.py start --port 5001
@ -97,17 +96,11 @@ The following variables are required or recommended:
Docling cannot run inside a Docker container due to system-level dependencies, so you must manage it as a separate service on the host machine.
For more information, see [Stop, start, and inspect native services](/manage-services#start-native-services).
Port 5001 is required to deploy OpenRAG successfully; don't use a different port.
This port is required to deploy OpenRAG successfully; don't use a different port.
Additionally, this enables the [MLX framework](https://opensource.apple.com/projects/mlx/) for accelerated performance on Apple Silicon Mac machines.
:::tip
If you don't want to use the default Docling Serve implementation, see [Select a Docling implementation](/knowledge#select-a-docling-implementation).
:::
2. Confirm `docling serve` is running.
The following command checks the status of the default Docling Serve implementation:
```bash
uv run python scripts/docling_ctl.py status
```
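If you prefer a direct check, you can also query the Docling Serve HTTP service yourself. This assumes the service is listening on the default port 5001 and exposes a health route; the path shown is an assumption, so verify it against your Docling Serve version:

```shell
# Expect an HTTP 2xx response if docling serve is up (endpoint path assumed).
curl -sf http://localhost:5001/health && echo "docling serve is reachable"
```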
@ -123,17 +116,7 @@ The following variables are required or recommended:
3. Deploy the OpenRAG containers locally using the appropriate Docker Compose configuration for your environment:
* **CPU-only deployment** (default, recommended): If your host machine doesn't have NVIDIA GPU support, use the base `docker-compose.yml` file:
```bash title="Docker"
docker compose up -d
```
```bash title="Podman"
podman compose up -d
```
* **GPU-accelerated deployment**: If your host machine has an NVIDIA GPU with CUDA support and compatible NVIDIA drivers, use the base `docker-compose.yml` file with the `docker-compose.gpu.yml` override:
* **GPU-accelerated deployment**: If your host machine has an NVIDIA GPU with CUDA support and compatible NVIDIA drivers, use the base `docker-compose.yml` file with the `docker-compose.gpu.yml` override.
```bash title="Docker"
docker compose -f docker-compose.yml -f docker-compose.gpu.yml up -d
@ -143,9 +126,15 @@ The following variables are required or recommended:
podman compose -f docker-compose.yml -f docker-compose.gpu.yml up -d
```
:::tip
<PartialGpuModeTip />
:::
* **CPU-only deployment** (default): If your host machine doesn't have NVIDIA GPU support, use the base `docker-compose.yml` file.
```bash title="Docker"
docker compose up -d
```
```bash title="Podman"
podman compose up -d
```
4. Wait for the OpenRAG containers to start, and then confirm that all containers are running:


@ -3,8 +3,6 @@ title: Use the TUI
slug: /tui
---
import PartialGpuModeTip from '@site/docs/_partial-gpu-mode-tip.mdx';
The OpenRAG Terminal User Interface (TUI) provides a simplified and guided experience for configuring, managing, and monitoring your OpenRAG deployment directly from the terminal.
![OpenRAG TUI Interface](@site/static/img/openrag_tui_dec_2025.png)
@ -38,10 +36,6 @@ In the TUI, click **Status**, and then click **Switch to GPU Mode** or **Switch
This change requires restarting all OpenRAG services because each mode has its own `docker-compose` file.
:::tip
<PartialGpuModeTip />
:::
## Exit the OpenRAG TUI
To exit the OpenRAG TUI, press <kbd>q</kbd> on the TUI main page.


@ -1,12 +0,0 @@
---
title: OpenRAG APIs and SDKs
slug: /reference/api-sdk-overview
---
You can use OpenRAG's APIs and SDKs to integrate and extend OpenRAG's capabilities:
* [Python SDK](https://github.com/langflow-ai/openrag/tree/main/sdks/python)
* [TypeScript/JavaScript SDK](https://github.com/langflow-ai/openrag/tree/main/sdks/typescript)
<!-- TBD: MCP: See https://github.com/langflow-ai/openrag/pull/729 -->
<!-- TBD: API Reference: See https://github.com/langflow-ai/openrag/issues/734 -->


@ -62,15 +62,12 @@ Some of these variables are immutable and can only be changed by redeploying Ope
Control how OpenRAG [processes and ingests documents](/ingestion) into your knowledge base.
Most of these settings can be configured on the OpenRAG **Settings** page or in the `.env` file.
| Variable | Default | Description |
|----------|---------|-------------|
| `CHUNK_OVERLAP` | `200` | Overlap between chunks. |
| `CHUNK_SIZE` | `1000` | Text chunk size for document processing. |
| `DISABLE_INGEST_WITH_LANGFLOW` | `false` | Disable Langflow ingestion pipeline. |
| `DOCLING_OCR_ENGINE` | Set by OS | OCR engine for document processing. For macOS, `ocrmac`. For any other OS, `easyocr`. |
| `DOCLING_SERVE_URL` | `http://**HOST_IP**:5001` | URL for the [Docling Serve instance](/knowledge#select-a-docling-implementation). By default, OpenRAG starts a local `docling serve` process and auto-detects the host. To use your own local or remote Docling Serve instance, set this variable to the full path to the target instance. The service must run on port 5001. |
| `OCR_ENABLED` | `false` | Enable OCR for image processing. |
| `OPENRAG_DOCUMENTS_PATH` | `~/.openrag/documents` | The [local documents path](/knowledge#set-the-local-documents-path) for ingestion. |
| `PICTURE_DESCRIPTIONS_ENABLED` | `false` | Enable picture descriptions. |
@ -99,7 +96,7 @@ For better security, it is recommended to set `LANGFLOW_SUPERUSER_PASSWORD` so t
| `LANGFLOW_SUPERUSER_PASSWORD` | Not set | Langflow administrator password. If this variable isn't set, then the Langflow server starts _without_ authentication enabled. It is recommended to set `LANGFLOW_SUPERUSER_PASSWORD` so the [Langflow server starts with authentication enabled](https://docs.langflow.org/api-keys-and-authentication#start-a-langflow-server-with-authentication-enabled). |
| `LANGFLOW_URL` | `http://localhost:7860` | URL for the Langflow instance. |
| `LANGFLOW_CHAT_FLOW_ID`, `LANGFLOW_INGEST_FLOW_ID`, `NUDGES_FLOW_ID` | Built-in flow IDs | These variables are set automatically to the IDs of the chat, ingestion, and nudges [flows](/agents). The default values are found in [`.env.example`](https://github.com/langflow-ai/openrag/blob/main/.env.example). Only change these values if you want to replace a built-in flow with your own custom flow. The flow JSON must be present in your version of the OpenRAG codebase. For example, if you [deploy self-managed services](/docker), you can add the flow JSON to your local clone of the OpenRAG repository before deploying OpenRAG. |
| `SYSTEM_PROMPT` | `You are a helpful AI assistant with access to a knowledge base. Answer questions based on the provided context.` | System prompt instructions for the agent driving the **Agent** flow (OpenRAG **Chat**). |
| `SYSTEM_PROMPT` | `You are a helpful AI assistant with access to a knowledge base. Answer questions based on the provided context.` | System prompt instructions for the agent driving the **Chat** flow. |
## OAuth provider settings


@ -127,7 +127,7 @@ const config = {
baseUrl: process.env.BASE_URL ? process.env.BASE_URL : '/',
// Control search engine indexing - set to true to prevent indexing
noIndex: false,
noIndex: true,
// GitHub pages deployment config.
// If you aren't using GitHub pages, you don't need these.
@ -176,19 +176,6 @@ const config = {
theme: {
customCss: './src/css/custom.css',
},
// Use preset-classic sitemap https://docusaurus.io/docs/api/plugins/@docusaurus/plugin-sitemap
sitemap: {
lastmod: 'date',
changefreq: 'weekly',
priority: 0.5,
ignorePatterns: ['/tags/**'],
filename: 'sitemap.xml',
createSitemapItems: async (params) => {
const {defaultCreateSitemapItems, ...rest} = params;
const items = await defaultCreateSitemapItems(rest);
return items.filter((item) => !item.url.includes('/page/'));
},
},
}),
],
],
@ -236,15 +223,6 @@ const config = {
},
],
},
algolia: {
appId: "SMEA51Q5OL",
// public key, safe to commit
apiKey: "b2ec302e9880e8979ad6a68f0c36271e",
indexName: "openrag-algolia",
contextualSearch: true,
searchParameters: {},
searchPagePath: "search",
},
prism: {
theme: prismThemes.github,
darkTheme: prismThemes.dracula,


@ -26,7 +26,7 @@
"typescript": "~5.9.3"
},
"engines": {
"node": ">=20.20.0"
"node": ">=18.0"
}
},
"node_modules/@ai-sdk/gateway": {


@ -46,6 +46,6 @@
]
},
"engines": {
"node": ">=20.20.0"
"node": ">=18.0"
}
}


@ -75,18 +75,8 @@ const sidebars = {
label: "Chat",
},
"reference/configuration",
{
type: "doc",
id: "reference/api-sdk-overview",
label: "APIs and SDKs",
},
"support/contribute",
"support/troubleshoot",
{
type: "link",
label: "Changelog",
href: "https://github.com/langflow-ai/openrag/releases",
},
],
};


@ -2,9 +2,6 @@
"name": "frontend",
"version": "0.1.0",
"private": true,
"engines": {
"node": ">=20.20.0"
},
"scripts": {
"dev": "next dev",
"build": "next build",


@ -1,18 +0,0 @@
apiVersion: v2
name: openrag
description: A Helm chart for deploying OpenRAG - an open-source agentic RAG platform
type: application
version: 0.1.0
appVersion: "0.1.52"
keywords:
- openrag
- rag
- langflow
- opensearch
- ai
- llm
home: https://github.com/langflow-ai/openrag
sources:
- https://github.com/langflow-ai/openrag
maintainers:
- name: OpenRAG Team


@ -1,80 +0,0 @@
OpenRAG has been deployed successfully!
{{- if .Values.global.tenant.name }}
Tenant: {{ .Values.global.tenant.name }}
Namespace: {{ include "openrag.namespace" . }}
{{- end }}
=== Services Deployed ===
{{- if .Values.langflow.enabled }}
Langflow:
- Internal URL: http://{{ include "openrag.fullname" . }}-langflow:{{ .Values.langflow.service.port }}
{{- if and .Values.ingress.enabled .Values.ingress.hosts.langflow.enabled .Values.ingress.hosts.langflow.host }}
- External URL: http{{ if .Values.ingress.tls.enabled }}s{{ end }}://{{ .Values.ingress.hosts.langflow.host }}
{{- end }}
{{- end }}
{{- if .Values.backend.enabled }}
Backend API:
- Internal URL: http://{{ include "openrag.fullname" . }}-backend:{{ .Values.backend.service.port }}
{{- if and .Values.ingress.enabled .Values.ingress.hosts.backend.host }}
- External URL: http{{ if .Values.ingress.tls.enabled }}s{{ end }}://{{ .Values.ingress.hosts.backend.host }}
{{- end }}
{{- end }}
{{- if .Values.frontend.enabled }}
Frontend UI:
- Internal URL: http://{{ include "openrag.fullname" . }}-frontend:{{ .Values.frontend.service.port }}
{{- if and .Values.ingress.enabled .Values.ingress.hosts.frontend.host }}
- External URL: http{{ if .Values.ingress.tls.enabled }}s{{ end }}://{{ .Values.ingress.hosts.frontend.host }}
{{- end }}
{{- end }}
{{- if .Values.dashboards.enabled }}
OpenSearch Dashboards:
- Internal URL: http://{{ include "openrag.fullname" . }}-dashboards:{{ .Values.dashboards.service.port }}
{{- if and .Values.ingress.enabled .Values.ingress.hosts.dashboards.enabled .Values.ingress.hosts.dashboards.host }}
- External URL: http{{ if .Values.ingress.tls.enabled }}s{{ end }}://{{ .Values.ingress.hosts.dashboards.host }}
{{- end }}
{{- end }}
=== External Dependencies ===
OpenSearch (External SaaS):
- Host: {{ .Values.global.opensearch.host }}
- Port: {{ .Values.global.opensearch.port }}
- Scheme: {{ .Values.global.opensearch.scheme }}
=== Credentials ===
To retrieve the Langflow superuser credentials:
kubectl get secret {{ include "openrag.fullname" . }}-langflow -n {{ include "openrag.namespace" . }} -o jsonpath='{.data.superuser}' | base64 -d
kubectl get secret {{ include "openrag.fullname" . }}-langflow -n {{ include "openrag.namespace" . }} -o jsonpath='{.data.superuser-password}' | base64 -d
=== Quick Start ===
{{- if and .Values.ingress.enabled .Values.ingress.hosts.frontend.host }}
1. Access the OpenRAG UI at: http{{ if .Values.ingress.tls.enabled }}s{{ end }}://{{ .Values.ingress.hosts.frontend.host }}
{{- else }}
1. Port-forward the frontend service:
kubectl port-forward svc/{{ include "openrag.fullname" . }}-frontend {{ .Values.frontend.service.port }}:{{ .Values.frontend.service.port }} -n {{ include "openrag.namespace" . }}
Then access: http://localhost:{{ .Values.frontend.service.port }}
{{- end }}
2. Upload documents to your knowledge base
3. Start chatting with your documents using the AI-powered chat interface
=== Troubleshooting ===
Check pod status:
kubectl get pods -n {{ include "openrag.namespace" . }} -l app.kubernetes.io/instance={{ .Release.Name }}
Check pod logs:
kubectl logs -n {{ include "openrag.namespace" . }} -l app.kubernetes.io/component=langflow
kubectl logs -n {{ include "openrag.namespace" . }} -l app.kubernetes.io/component=backend
kubectl logs -n {{ include "openrag.namespace" . }} -l app.kubernetes.io/component=frontend
For more information, visit: https://github.com/langflow-ai/openrag
@ -1,163 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "openrag.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
If tenant name is provided, prefix with tenant name.
*/}}
{{- define "openrag.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if .Values.global.tenant.name }}
{{- printf "%s-%s" .Values.global.tenant.name $name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s" $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create the namespace name.
Uses tenant namespace if specified, otherwise tenant name, otherwise release namespace.
*/}}
{{- define "openrag.namespace" -}}
{{- if .Values.global.tenant.namespace }}
{{- .Values.global.tenant.namespace }}
{{- else if .Values.global.tenant.name }}
{{- .Values.global.tenant.name }}
{{- else }}
{{- .Release.Namespace }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "openrag.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "openrag.labels" -}}
helm.sh/chart: {{ include "openrag.chart" . }}
{{ include "openrag.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.global.tenant.name }}
openrag.io/tenant: {{ .Values.global.tenant.name }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "openrag.selectorLabels" -}}
app.kubernetes.io/name: {{ include "openrag.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "openrag.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "openrag.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Langflow component labels
*/}}
{{- define "openrag.langflow.labels" -}}
{{ include "openrag.labels" . }}
app.kubernetes.io/component: langflow
{{- end }}
{{/*
Langflow selector labels
*/}}
{{- define "openrag.langflow.selectorLabels" -}}
{{ include "openrag.selectorLabels" . }}
app.kubernetes.io/component: langflow
{{- end }}
{{/*
Backend component labels
*/}}
{{- define "openrag.backend.labels" -}}
{{ include "openrag.labels" . }}
app.kubernetes.io/component: backend
{{- end }}
{{/*
Backend selector labels
*/}}
{{- define "openrag.backend.selectorLabels" -}}
{{ include "openrag.selectorLabels" . }}
app.kubernetes.io/component: backend
{{- end }}
{{/*
Frontend component labels
*/}}
{{- define "openrag.frontend.labels" -}}
{{ include "openrag.labels" . }}
app.kubernetes.io/component: frontend
{{- end }}
{{/*
Frontend selector labels
*/}}
{{- define "openrag.frontend.selectorLabels" -}}
{{ include "openrag.selectorLabels" . }}
app.kubernetes.io/component: frontend
{{- end }}
{{/*
Dashboards component labels
*/}}
{{- define "openrag.dashboards.labels" -}}
{{ include "openrag.labels" . }}
app.kubernetes.io/component: dashboards
{{- end }}
{{/*
Dashboards selector labels
*/}}
{{- define "openrag.dashboards.selectorLabels" -}}
{{ include "openrag.selectorLabels" . }}
app.kubernetes.io/component: dashboards
{{- end }}
{{/*
Generate the Langflow service URL
*/}}
{{- define "openrag.langflow.url" -}}
http://{{ include "openrag.fullname" . }}-langflow:{{ .Values.langflow.service.port }}
{{- end }}
{{/*
Generate the Backend service URL
*/}}
{{- define "openrag.backend.url" -}}
http://{{ include "openrag.fullname" . }}-backend:{{ .Values.backend.service.port }}
{{- end }}
{{/*
Generate the OpenSearch URL
*/}}
{{- define "openrag.opensearch.url" -}}
{{ .Values.global.opensearch.scheme }}://{{ .Values.global.opensearch.host }}:{{ .Values.global.opensearch.port }}
{{- end }}
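For orientation, here is a hedged sketch of the values these helpers resolve against — the key names are inferred from the template references above, not copied from the chart's actual values file:

```yaml
global:
  tenant:
    name: acme          # prefixes "openrag.fullname" and sets the openrag.io/tenant label
    namespace: acme-ns  # overrides the namespace chosen by "openrag.namespace"
  opensearch:
    scheme: https       # "openrag.opensearch.url" renders https://search.example.com:9200
    host: search.example.com
    port: 9200
nameOverride: ""        # defaults to .Chart.Name in "openrag.name"
fullnameOverride: ""    # short-circuits "openrag.fullname" entirely when set
```

With `tenant.name` set, `openrag.fullname` would render as e.g. `acme-openrag` (assuming the chart is named `openrag`, truncated to 63 characters), and the service URL helpers then build names like `http://acme-openrag-langflow:<port>`.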
@ -1,273 +0,0 @@
{{- if .Values.backend.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "openrag.fullname" . }}-backend
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.backend.labels" . | nindent 4 }}
spec:
replicas: 1 # Single pod for vertical scaling
strategy:
type: Recreate # Required for RWO PVCs
selector:
matchLabels:
{{- include "openrag.backend.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "openrag.backend.selectorLabels" . | nindent 8 }}
annotations:
checksum/secret-opensearch: {{ include (print $.Template.BasePath "/secrets/opensearch-secret.yaml") . | sha256sum }}
checksum/secret-langflow: {{ include (print $.Template.BasePath "/secrets/langflow-secret.yaml") . | sha256sum }}
checksum/config-flows: {{ include (print $.Template.BasePath "/configmaps/flow-ids-configmap.yaml") . | sha256sum }}
checksum/config-app: {{ include (print $.Template.BasePath "/configmaps/app-config-configmap.yaml") . | sha256sum }}
spec:
serviceAccountName: {{ include "openrag.serviceAccountName" . }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.langflow.enabled }}
initContainers:
- name: wait-for-langflow
image: busybox:1.36
command:
- sh
- -c
- |
echo "Waiting for Langflow to be ready..."
until nc -z {{ include "openrag.fullname" . }}-langflow {{ .Values.langflow.service.port }}; do
echo "Langflow not ready, sleeping 5s..."
sleep 5
done
echo "Langflow is ready!"
{{- end }}
containers:
- name: backend
image: "{{ .Values.backend.image.repository }}:{{ .Values.backend.image.tag | default .Values.global.imageTag }}"
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 12 }}
{{- end }}
ports:
- name: http
containerPort: 8000
protocol: TCP
env:
# OpenSearch connection (external SaaS)
- name: OPENSEARCH_HOST
value: {{ .Values.global.opensearch.host | quote }}
- name: OPENSEARCH_PORT
value: {{ .Values.global.opensearch.port | quote }}
- name: OPENSEARCH_USERNAME
value: {{ .Values.global.opensearch.username | quote }}
{{- if .Values.global.opensearch.password }}
- name: OPENSEARCH_PASSWORD
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-opensearch
key: password
{{- end }}
# Langflow connection
- name: LANGFLOW_URL
value: {{ include "openrag.langflow.url" . | quote }}
{{- if .Values.backend.langflowPublicUrl }}
- name: LANGFLOW_PUBLIC_URL
value: {{ .Values.backend.langflowPublicUrl | quote }}
{{- end }}
- name: LANGFLOW_AUTO_LOGIN
value: {{ .Values.langflow.auth.autoLogin | quote }}
- name: LANGFLOW_SUPERUSER
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-langflow
key: superuser
- name: LANGFLOW_SUPERUSER_PASSWORD
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-langflow
key: superuser-password
# Flow IDs from ConfigMap
- name: LANGFLOW_CHAT_FLOW_ID
valueFrom:
configMapKeyRef:
name: {{ include "openrag.fullname" . }}-flow-ids
key: chat-flow-id
- name: LANGFLOW_INGEST_FLOW_ID
valueFrom:
configMapKeyRef:
name: {{ include "openrag.fullname" . }}-flow-ids
key: ingest-flow-id
- name: LANGFLOW_URL_INGEST_FLOW_ID
valueFrom:
configMapKeyRef:
name: {{ include "openrag.fullname" . }}-flow-ids
key: url-ingest-flow-id
- name: NUDGES_FLOW_ID
valueFrom:
configMapKeyRef:
name: {{ include "openrag.fullname" . }}-flow-ids
key: nudges-flow-id
# Feature flags
- name: DISABLE_INGEST_WITH_LANGFLOW
value: {{ .Values.backend.features.disableIngestWithLangflow | quote }}
# LLM Provider keys
{{- if .Values.llmProviders.openai.enabled }}
- name: OPENAI_API_KEY
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-llm-providers
key: openai-api-key
{{- end }}
{{- if .Values.llmProviders.anthropic.enabled }}
- name: ANTHROPIC_API_KEY
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-llm-providers
key: anthropic-api-key
{{- end }}
{{- if .Values.llmProviders.watsonx.enabled }}
- name: WATSONX_API_KEY
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-llm-providers
key: watsonx-api-key
- name: WATSONX_ENDPOINT
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-llm-providers
key: watsonx-endpoint
- name: WATSONX_PROJECT_ID
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-llm-providers
key: watsonx-project-id
{{- end }}
{{- if .Values.llmProviders.ollama.enabled }}
- name: OLLAMA_ENDPOINT
value: {{ .Values.llmProviders.ollama.endpoint | quote }}
{{- end }}
# OAuth credentials
{{- if .Values.global.oauth.google.enabled }}
- name: GOOGLE_OAUTH_CLIENT_ID
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-oauth
key: google-client-id
- name: GOOGLE_OAUTH_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-oauth
key: google-client-secret
{{- end }}
{{- if .Values.global.oauth.microsoft.enabled }}
- name: MICROSOFT_GRAPH_OAUTH_CLIENT_ID
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-oauth
key: microsoft-client-id
- name: MICROSOFT_GRAPH_OAUTH_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-oauth
key: microsoft-client-secret
{{- end }}
# AWS credentials
{{- if .Values.backend.aws.enabled }}
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-aws
key: access-key-id
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-aws
key: secret-access-key
{{- end }}
# Webhook configuration
{{- if .Values.backend.webhook.enabled }}
- name: WEBHOOK_BASE_URL
value: {{ .Values.backend.webhook.baseUrl | quote }}
{{- end }}
volumeMounts:
{{- if .Values.backend.persistence.documents.enabled }}
- name: documents
mountPath: {{ .Values.backend.persistence.documents.mountPath }}
{{- end }}
{{- if .Values.backend.persistence.keys.enabled }}
- name: keys
mountPath: {{ .Values.backend.persistence.keys.mountPath }}
{{- end }}
{{- if .Values.backend.persistence.config.enabled }}
- name: config
mountPath: {{ .Values.backend.persistence.config.mountPath }}
{{- end }}
{{- if .Values.langflow.persistence.enabled }}
- name: flows
mountPath: /app/flows
subPath: {{ .Values.langflow.persistence.flowsSubPath }}
readOnly: true
{{- end }}
resources:
{{- toYaml .Values.backend.resources | nindent 12 }}
{{- if .Values.backend.livenessProbe.enabled }}
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: {{ .Values.backend.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.backend.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.backend.livenessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.backend.livenessProbe.failureThreshold }}
{{- end }}
{{- if .Values.backend.readinessProbe.enabled }}
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: {{ .Values.backend.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.backend.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.backend.readinessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.backend.readinessProbe.failureThreshold }}
{{- end }}
volumes:
{{- if .Values.backend.persistence.documents.enabled }}
- name: documents
persistentVolumeClaim:
claimName: {{ include "openrag.fullname" . }}-documents
{{- end }}
{{- if .Values.backend.persistence.keys.enabled }}
- name: keys
persistentVolumeClaim:
claimName: {{ include "openrag.fullname" . }}-keys
{{- end }}
{{- if .Values.backend.persistence.config.enabled }}
- name: config
persistentVolumeClaim:
claimName: {{ include "openrag.fullname" . }}-config
{{- end }}
{{- if .Values.langflow.persistence.enabled }}
- name: flows
persistentVolumeClaim:
claimName: {{ include "openrag.fullname" . }}-langflow
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
@ -1,18 +0,0 @@
{{- if .Values.backend.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "openrag.fullname" . }}-backend
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.backend.labels" . | nindent 4 }}
spec:
type: {{ .Values.backend.service.type }}
ports:
- port: {{ .Values.backend.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "openrag.backend.selectorLabels" . | nindent 4 }}
{{- end }}
@ -1,39 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "openrag.fullname" . }}-app-config
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.labels" . | nindent 4 }}
data:
config.yaml: |
agent:
llm_model: {{ .Values.appConfig.agent.llmModel | quote }}
llm_provider: {{ .Values.appConfig.agent.llmProvider | quote }}
{{- if .Values.appConfig.agent.systemPrompt }}
system_prompt: {{ .Values.appConfig.agent.systemPrompt | quote }}
{{- end }}
edited: false
knowledge:
chunk_overlap: {{ .Values.appConfig.knowledge.chunkOverlap }}
chunk_size: {{ .Values.appConfig.knowledge.chunkSize }}
embedding_model: {{ .Values.appConfig.knowledge.embeddingModel | quote }}
embedding_provider: {{ .Values.appConfig.knowledge.embeddingProvider | quote }}
ocr: {{ .Values.appConfig.knowledge.ocr }}
picture_descriptions: {{ .Values.appConfig.knowledge.pictureDescriptions }}
table_structure: {{ .Values.appConfig.knowledge.tableStructure }}
providers:
anthropic:
configured: {{ .Values.llmProviders.anthropic.enabled }}
ollama:
configured: {{ .Values.llmProviders.ollama.enabled }}
{{- if .Values.llmProviders.ollama.endpoint }}
endpoint: {{ .Values.llmProviders.ollama.endpoint | quote }}
{{- end }}
openai:
configured: {{ .Values.llmProviders.openai.enabled }}
watsonx:
configured: {{ .Values.llmProviders.watsonx.enabled }}
{{- if .Values.llmProviders.watsonx.endpoint }}
endpoint: {{ .Values.llmProviders.watsonx.endpoint | quote }}
{{- end }}
@ -1,14 +0,0 @@
{{- if .Values.langflow.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "openrag.fullname" . }}-flow-ids
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.labels" . | nindent 4 }}
data:
chat-flow-id: {{ .Values.langflow.flows.chatFlowId | quote }}
ingest-flow-id: {{ .Values.langflow.flows.ingestFlowId | quote }}
url-ingest-flow-id: {{ .Values.langflow.flows.urlIngestFlowId | quote }}
nudges-flow-id: {{ .Values.langflow.flows.nudgesFlowId | quote }}
{{- end }}
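This ConfigMap expects four flow IDs from values; a minimal sketch of the corresponding values.yaml fragment follows (the UUIDs are placeholders, not the chart's real defaults):

```yaml
langflow:
  flows:
    loadDefaults: true  # also gates the flow-definition ConfigMaps below
    chatFlowId: "11111111-1111-1111-1111-111111111111"       # placeholder
    ingestFlowId: "22222222-2222-2222-2222-222222222222"     # placeholder
    urlIngestFlowId: "33333333-3333-3333-3333-333333333333"  # placeholder
    nudgesFlowId: "44444444-4444-4444-4444-444444444444"     # placeholder
```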
@ -1,12 +0,0 @@
{{- if .Values.langflow.flows.loadDefaults }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "openrag.fullname" . }}-flow-agent
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.labels" . | nindent 4 }}
data:
openrag_agent.json: |-
{{ .Files.Get "flows/openrag_agent.json" | indent 4 }}
{{- end }}
@ -1,12 +0,0 @@
{{- if .Values.langflow.flows.loadDefaults }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "openrag.fullname" . }}-flow-ingestion
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.labels" . | nindent 4 }}
data:
ingestion_flow.json: |-
{{ .Files.Get "flows/ingestion_flow.json" | indent 4 }}
{{- end }}
@ -1,12 +0,0 @@
{{- if .Values.langflow.flows.loadDefaults }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "openrag.fullname" . }}-flow-nudges
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.labels" . | nindent 4 }}
data:
openrag_nudges.json: |-
{{ .Files.Get "flows/openrag_nudges.json" | indent 4 }}
{{- end }}
@ -1,12 +0,0 @@
{{- if .Values.langflow.flows.loadDefaults }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "openrag.fullname" . }}-flow-url
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.labels" . | nindent 4 }}
data:
openrag_url_mcp.json: |-
{{ .Files.Get "flows/openrag_url_mcp.json" | indent 4 }}
{{- end }}
@ -1,81 +0,0 @@
{{- if .Values.dashboards.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "openrag.fullname" . }}-dashboards
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.dashboards.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.dashboards.replicaCount }}
selector:
matchLabels:
{{- include "openrag.dashboards.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "openrag.dashboards.selectorLabels" . | nindent 8 }}
annotations:
checksum/secret: {{ include (print $.Template.BasePath "/secrets/opensearch-secret.yaml") . | sha256sum }}
spec:
serviceAccountName: {{ include "openrag.serviceAccountName" . }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: dashboards
image: "{{ .Values.dashboards.image.repository }}:{{ .Values.dashboards.image.tag }}"
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
ports:
- name: http
containerPort: 5601
protocol: TCP
env:
# OpenSearch connection (external SaaS)
- name: OPENSEARCH_HOSTS
value: '["{{ include "openrag.opensearch.url" . }}"]'
- name: OPENSEARCH_USERNAME
value: {{ .Values.global.opensearch.username | quote }}
{{- if .Values.global.opensearch.password }}
- name: OPENSEARCH_PASSWORD
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-opensearch
key: password
{{- end }}
resources:
{{- toYaml .Values.dashboards.resources | nindent 12 }}
{{- if .Values.dashboards.livenessProbe.enabled }}
livenessProbe:
httpGet:
path: /api/status
port: http
initialDelaySeconds: {{ .Values.dashboards.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.dashboards.livenessProbe.periodSeconds }}
{{- end }}
{{- if .Values.dashboards.readinessProbe.enabled }}
readinessProbe:
httpGet:
path: /api/status
port: http
initialDelaySeconds: {{ .Values.dashboards.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.dashboards.readinessProbe.periodSeconds }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
@ -1,18 +0,0 @@
{{- if .Values.dashboards.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "openrag.fullname" . }}-dashboards
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.dashboards.labels" . | nindent 4 }}
spec:
type: {{ .Values.dashboards.service.type }}
ports:
- port: {{ .Values.dashboards.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "openrag.dashboards.selectorLabels" . | nindent 4 }}
{{- end }}
@ -1,80 +0,0 @@
{{- if .Values.frontend.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "openrag.fullname" . }}-frontend
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.frontend.labels" . | nindent 4 }}
spec:
{{- if not .Values.frontend.autoscaling.enabled }}
replicas: {{ .Values.frontend.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "openrag.frontend.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "openrag.frontend.selectorLabels" . | nindent 8 }}
spec:
serviceAccountName: {{ include "openrag.serviceAccountName" . }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: frontend
image: "{{ .Values.frontend.image.repository }}:{{ .Values.frontend.image.tag | default .Values.global.imageTag }}"
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 12 }}
{{- end }}
ports:
- name: http
containerPort: 3000
protocol: TCP
env:
# Backend connection (uses internal service name)
- name: OPENRAG_BACKEND_HOST
value: "{{ include "openrag.fullname" . }}-backend"
resources:
{{- toYaml .Values.frontend.resources | nindent 12 }}
{{- if .Values.frontend.livenessProbe.enabled }}
livenessProbe:
httpGet:
path: /
port: http
initialDelaySeconds: {{ .Values.frontend.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.frontend.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.frontend.livenessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.frontend.livenessProbe.failureThreshold }}
{{- end }}
{{- if .Values.frontend.readinessProbe.enabled }}
readinessProbe:
httpGet:
path: /
port: http
initialDelaySeconds: {{ .Values.frontend.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.frontend.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.frontend.readinessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.frontend.readinessProbe.failureThreshold }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
@ -1,33 +0,0 @@
{{- if and .Values.frontend.enabled .Values.frontend.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "openrag.fullname" . }}-frontend
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.frontend.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "openrag.fullname" . }}-frontend
minReplicas: {{ .Values.frontend.autoscaling.minReplicas }}
maxReplicas: {{ .Values.frontend.autoscaling.maxReplicas }}
metrics:
{{- if .Values.frontend.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.frontend.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.frontend.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: {{ .Values.frontend.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}
@ -1,18 +0,0 @@
{{- if .Values.frontend.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "openrag.fullname" . }}-frontend
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.frontend.labels" . | nindent 4 }}
spec:
type: {{ .Values.frontend.service.type }}
ports:
- port: {{ .Values.frontend.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "openrag.frontend.selectorLabels" . | nindent 4 }}
{{- end }}
@ -1,104 +0,0 @@
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "openrag.fullname" . }}
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.labels" . | nindent 4 }}
  {{- if or .Values.ingress.annotations .Values.ingress.tls.certManager.enabled }}
  annotations:
    {{- with .Values.ingress.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    {{- if .Values.ingress.tls.certManager.enabled }}
    cert-manager.io/cluster-issuer: {{ .Values.ingress.tls.certManager.issuerRef.name }}
    {{- end }}
  {{- end }}
spec:
{{- if .Values.ingress.className }}
ingressClassName: {{ .Values.ingress.className }}
{{- end }}
{{- if .Values.ingress.tls.enabled }}
tls:
{{- if .Values.ingress.hosts.frontend.host }}
- hosts:
- {{ .Values.ingress.hosts.frontend.host }}
secretName: {{ .Values.ingress.tls.secretName | default (printf "%s-frontend-tls" (include "openrag.fullname" .)) }}
{{- end }}
{{- if .Values.ingress.hosts.backend.host }}
- hosts:
- {{ .Values.ingress.hosts.backend.host }}
secretName: {{ .Values.ingress.tls.secretName | default (printf "%s-backend-tls" (include "openrag.fullname" .)) }}
{{- end }}
{{- if and .Values.ingress.hosts.langflow.enabled .Values.ingress.hosts.langflow.host }}
- hosts:
- {{ .Values.ingress.hosts.langflow.host }}
secretName: {{ .Values.ingress.tls.secretName | default (printf "%s-langflow-tls" (include "openrag.fullname" .)) }}
{{- end }}
{{- if and .Values.dashboards.enabled .Values.ingress.hosts.dashboards.enabled .Values.ingress.hosts.dashboards.host }}
- hosts:
- {{ .Values.ingress.hosts.dashboards.host }}
secretName: {{ .Values.ingress.tls.secretName | default (printf "%s-dashboards-tls" (include "openrag.fullname" .)) }}
{{- end }}
{{- end }}
rules:
{{- if .Values.ingress.hosts.frontend.host }}
# Frontend ingress rule
- host: {{ .Values.ingress.hosts.frontend.host }}
http:
paths:
{{- range .Values.ingress.hosts.frontend.paths }}
- path: {{ .path }}
pathType: {{ .pathType }}
backend:
service:
name: {{ include "openrag.fullname" $ }}-frontend
port:
number: {{ $.Values.frontend.service.port }}
{{- end }}
{{- end }}
{{- if .Values.ingress.hosts.backend.host }}
# Backend API ingress rule
- host: {{ .Values.ingress.hosts.backend.host }}
http:
paths:
{{- range .Values.ingress.hosts.backend.paths }}
- path: {{ .path }}
pathType: {{ .pathType }}
backend:
service:
name: {{ include "openrag.fullname" $ }}-backend
port:
number: {{ $.Values.backend.service.port }}
{{- end }}
{{- end }}
{{- if and .Values.ingress.hosts.langflow.enabled .Values.ingress.hosts.langflow.host }}
# Optional Langflow direct access
- host: {{ .Values.ingress.hosts.langflow.host }}
http:
paths:
{{- range .Values.ingress.hosts.langflow.paths }}
- path: {{ .path }}
pathType: {{ .pathType }}
backend:
service:
name: {{ include "openrag.fullname" $ }}-langflow
port:
number: {{ $.Values.langflow.service.port }}
{{- end }}
{{- end }}
{{- if and .Values.dashboards.enabled .Values.ingress.hosts.dashboards.enabled .Values.ingress.hosts.dashboards.host }}
# Optional Dashboards access
- host: {{ .Values.ingress.hosts.dashboards.host }}
http:
paths:
{{- range .Values.ingress.hosts.dashboards.paths }}
- path: {{ .path }}
pathType: {{ .pathType }}
backend:
service:
name: {{ include "openrag.fullname" $ }}-dashboards
port:
number: {{ $.Values.dashboards.service.port }}
{{- end }}
{{- end }}
{{- end }}
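The rules above key off a per-host values structure; a minimal sketch, assuming the layout the template references (hostnames are placeholders):

```yaml
ingress:
  enabled: true
  className: nginx
  annotations: {}
  tls:
    enabled: true
    secretName: ""      # empty -> per-host "<fullname>-<component>-tls" defaults
    certManager:
      enabled: true
      issuerRef:
        name: letsencrypt-prod
  hosts:
    frontend:
      host: openrag.example.com
      paths:
        - path: /
          pathType: Prefix
    backend:
      host: api.openrag.example.com
      paths:
        - path: /
          pathType: Prefix
    langflow:
      enabled: false    # optional direct access to Langflow
      host: ""
    dashboards:
      enabled: false
      host: ""
```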
@ -1,247 +0,0 @@
{{- if .Values.langflow.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "openrag.fullname" . }}-langflow
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.langflow.labels" . | nindent 4 }}
spec:
replicas: 1 # Always 1 for SQLite-based Langflow
strategy:
type: Recreate # Required for RWO PVC
selector:
matchLabels:
{{- include "openrag.langflow.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "openrag.langflow.selectorLabels" . | nindent 8 }}
annotations:
checksum/secret: {{ include (print $.Template.BasePath "/secrets/langflow-secret.yaml") . | sha256sum }}
checksum/config: {{ include (print $.Template.BasePath "/configmaps/flow-ids-configmap.yaml") . | sha256sum }}
spec:
serviceAccountName: {{ include "openrag.serviceAccountName" . }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.langflow.flows.loadDefaults }}
initContainers:
- name: load-default-flows
image: busybox:1.36
command:
- /bin/sh
- -c
- |
FLOWS_DIR="{{ .Values.langflow.persistence.mountPath }}/{{ .Values.langflow.persistence.flowsSubPath }}"
mkdir -p "$FLOWS_DIR"
if [ -z "$(ls -A $FLOWS_DIR 2>/dev/null)" ]; then
echo "Loading default flows..."
cp /default-flows/*.json "$FLOWS_DIR/"
echo "Flows loaded: $(ls $FLOWS_DIR)"
else
echo "Flows already exist, skipping."
fi
volumeMounts:
- name: langflow-data
mountPath: {{ .Values.langflow.persistence.mountPath }}
- name: default-flows
mountPath: /default-flows
{{- end }}
containers:
- name: langflow
image: "{{ .Values.langflow.image.repository }}:{{ .Values.langflow.image.tag | default .Values.global.imageTag }}"
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 12 }}
{{- end }}
ports:
- name: http
containerPort: 7860
protocol: TCP
env:
# Langflow core settings
- name: LANGFLOW_LOAD_FLOWS_PATH
value: {{ .Values.langflow.persistence.mountPath }}/{{ .Values.langflow.persistence.flowsSubPath }}
- name: LANGFLOW_DATABASE_URL
value: "sqlite:///{{ .Values.langflow.persistence.mountPath }}/{{ .Values.langflow.persistence.dbSubPath }}"
- name: LANGFLOW_DEACTIVATE_TRACING
value: {{ .Values.langflow.deactivateTracing | quote }}
- name: LANGFLOW_LOG_LEVEL
value: {{ .Values.langflow.logLevel | quote }}
- name: HIDE_GETTING_STARTED_PROGRESS
value: "true"
# Auth settings
- name: LANGFLOW_AUTO_LOGIN
value: {{ .Values.langflow.auth.autoLogin | quote }}
- name: LANGFLOW_NEW_USER_IS_ACTIVE
value: {{ .Values.langflow.auth.newUserIsActive | quote }}
- name: LANGFLOW_ENABLE_SUPERUSER_CLI
value: {{ .Values.langflow.auth.enableSuperuserCli | quote }}
# Variables to expose to flows
- name: LANGFLOW_VARIABLES_TO_GET_FROM_ENVIRONMENT
value: {{ .Values.langflow.variablesToGetFromEnvironment | quote }}
# Flow context variables (defaults for flow execution)
- name: JWT
value: "None"
- name: OWNER
value: "None"
- name: OWNER_NAME
value: "None"
- name: OWNER_EMAIL
value: "None"
- name: CONNECTOR_TYPE
value: "system"
- name: CONNECTOR_TYPE_URL
value: "url"
- name: OPENRAG-QUERY-FILTER
value: "{}"
- name: FILENAME
value: "None"
- name: MIMETYPE
value: "None"
- name: FILESIZE
value: "0"
- name: SELECTED_EMBEDDING_MODEL
value: ""
# Secrets from langflow secret
- name: LANGFLOW_SECRET_KEY
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-langflow
key: secret-key
- name: LANGFLOW_SUPERUSER
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-langflow
key: superuser
- name: LANGFLOW_SUPERUSER_PASSWORD
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-langflow
key: superuser-password
# OpenSearch password (for flows)
{{- if .Values.global.opensearch.password }}
- name: OPENSEARCH_PASSWORD
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-opensearch
key: password
{{- end }}
# LLM Provider keys
{{- if .Values.llmProviders.openai.enabled }}
- name: OPENAI_API_KEY
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-llm-providers
key: openai-api-key
{{- else }}
- name: OPENAI_API_KEY
value: "None"
{{- end }}
{{- if .Values.llmProviders.anthropic.enabled }}
- name: ANTHROPIC_API_KEY
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-llm-providers
key: anthropic-api-key
{{- else }}
- name: ANTHROPIC_API_KEY
value: "None"
{{- end }}
{{- if .Values.llmProviders.watsonx.enabled }}
- name: WATSONX_API_KEY
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-llm-providers
key: watsonx-api-key
- name: WATSONX_ENDPOINT
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-llm-providers
key: watsonx-endpoint
- name: WATSONX_PROJECT_ID
valueFrom:
secretKeyRef:
name: {{ include "openrag.fullname" . }}-llm-providers
key: watsonx-project-id
{{- else }}
- name: WATSONX_API_KEY
value: "None"
- name: WATSONX_ENDPOINT
value: "None"
- name: WATSONX_PROJECT_ID
value: "None"
{{- end }}
{{- if .Values.llmProviders.ollama.enabled }}
- name: OLLAMA_BASE_URL
value: {{ .Values.llmProviders.ollama.endpoint | quote }}
{{- else }}
- name: OLLAMA_BASE_URL
value: "None"
{{- end }}
volumeMounts:
- name: langflow-data
mountPath: {{ .Values.langflow.persistence.mountPath }}
resources:
{{- toYaml .Values.langflow.resources | nindent 12 }}
{{- if .Values.langflow.livenessProbe.enabled }}
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: {{ .Values.langflow.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.langflow.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.langflow.livenessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.langflow.livenessProbe.failureThreshold }}
{{- end }}
{{- if .Values.langflow.readinessProbe.enabled }}
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: {{ .Values.langflow.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.langflow.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.langflow.readinessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.langflow.readinessProbe.failureThreshold }}
{{- end }}
volumes:
- name: langflow-data
{{- if .Values.langflow.persistence.enabled }}
persistentVolumeClaim:
claimName: {{ include "openrag.fullname" . }}-langflow
{{- else }}
emptyDir: {}
{{- end }}
{{- if .Values.langflow.flows.loadDefaults }}
- name: default-flows
projected:
sources:
- configMap:
name: {{ include "openrag.fullname" . }}-flow-ingestion
- configMap:
name: {{ include "openrag.fullname" . }}-flow-agent
- configMap:
name: {{ include "openrag.fullname" . }}-flow-nudges
- configMap:
name: {{ include "openrag.fullname" . }}-flow-url
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
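Note that when a provider block is disabled, the deployment template above injects the literal string "None" rather than omitting the variable, so flows can distinguish an unconfigured provider from an unset environment. Enabling a provider is done entirely through values; a sketch of an override (API keys and endpoint are placeholders) might look like:

```yaml
# values override: enable two LLM providers for the Langflow deployment
llmProviders:
  openai:
    enabled: true
    apiKey: "<your-openai-api-key>"   # rendered into the -llm-providers Secret
  ollama:
    enabled: true
    endpoint: "http://ollama:11434"   # injected directly as OLLAMA_BASE_URL
```

With `openai.enabled` set, the key is stored in the `-llm-providers` Secret and surfaced to the pod as `OPENAI_API_KEY` via the `secretKeyRef` branch above.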


@@ -1,18 +0,0 @@
{{- if .Values.langflow.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "openrag.fullname" . }}-langflow
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.langflow.labels" . | nindent 4 }}
spec:
type: {{ .Values.langflow.service.type }}
ports:
- port: {{ .Values.langflow.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "openrag.langflow.selectorLabels" . | nindent 4 }}
{{- end }}


@@ -1,13 +0,0 @@
{{- if .Values.backend.aws.enabled }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "openrag.fullname" . }}-aws
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.labels" . | nindent 4 }}
type: Opaque
stringData:
access-key-id: {{ .Values.backend.aws.accessKeyId | quote }}
secret-access-key: {{ .Values.backend.aws.secretAccessKey | quote }}
{{- end }}


@@ -1,14 +0,0 @@
{{- if .Values.langflow.enabled }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "openrag.fullname" . }}-langflow
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.labels" . | nindent 4 }}
type: Opaque
stringData:
secret-key: {{ .Values.langflow.auth.secretKey | default (randAlphaNum 32) | quote }}
superuser: {{ .Values.langflow.auth.superuser | quote }}
superuser-password: {{ .Values.langflow.auth.superuserPassword | default (randAlphaNum 16) | quote }}
{{- end }}
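One caveat with this template: `randAlphaNum` runs on every render, so a `helm upgrade` without an explicit `superuserPassword` rotates the generated credentials. A common mitigation, sketched here as an untested variant rather than part of the chart, is to reuse the existing Secret via `lookup`:

```yaml
{{- $secretName := printf "%s-langflow" (include "openrag.fullname" .) }}
{{- $existing := lookup "v1" "Secret" (include "openrag.namespace" .) $secretName }}
{{- $password := .Values.langflow.auth.superuserPassword }}
{{- if and (not $password) $existing }}
  {{- /* reuse the previously generated value instead of re-rolling it */ -}}
  {{- $password = index $existing.data "superuser-password" | b64dec }}
{{- end }}
stringData:
  superuser-password: {{ $password | default (randAlphaNum 16) | quote }}
```

`lookup` returns an empty map under `helm template` and `--dry-run`, so the `randAlphaNum` fallback still applies in those modes.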


@@ -1,22 +0,0 @@
{{- if or .Values.llmProviders.openai.enabled .Values.llmProviders.anthropic.enabled .Values.llmProviders.watsonx.enabled }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "openrag.fullname" . }}-llm-providers
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.labels" . | nindent 4 }}
type: Opaque
stringData:
{{- if .Values.llmProviders.openai.enabled }}
openai-api-key: {{ .Values.llmProviders.openai.apiKey | quote }}
{{- end }}
{{- if .Values.llmProviders.anthropic.enabled }}
anthropic-api-key: {{ .Values.llmProviders.anthropic.apiKey | quote }}
{{- end }}
{{- if .Values.llmProviders.watsonx.enabled }}
watsonx-api-key: {{ .Values.llmProviders.watsonx.apiKey | quote }}
watsonx-endpoint: {{ .Values.llmProviders.watsonx.endpoint | quote }}
watsonx-project-id: {{ .Values.llmProviders.watsonx.projectId | quote }}
{{- end }}
{{- end }}


@@ -1,19 +0,0 @@
{{- if or .Values.global.oauth.google.enabled .Values.global.oauth.microsoft.enabled }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "openrag.fullname" . }}-oauth
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.labels" . | nindent 4 }}
type: Opaque
stringData:
{{- if .Values.global.oauth.google.enabled }}
google-client-id: {{ .Values.global.oauth.google.clientId | quote }}
google-client-secret: {{ .Values.global.oauth.google.clientSecret | quote }}
{{- end }}
{{- if .Values.global.oauth.microsoft.enabled }}
microsoft-client-id: {{ .Values.global.oauth.microsoft.clientId | quote }}
microsoft-client-secret: {{ .Values.global.oauth.microsoft.clientSecret | quote }}
{{- end }}
{{- end }}


@@ -1,13 +0,0 @@
{{- if .Values.global.opensearch.password }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "openrag.fullname" . }}-opensearch
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.labels" . | nindent 4 }}
type: Opaque
stringData:
password: {{ .Values.global.opensearch.password | quote }}
username: {{ .Values.global.opensearch.username | quote }}
{{- end }}


@@ -1,13 +0,0 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "openrag.serviceAccountName" . }}
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@@ -1,18 +0,0 @@
{{- if and .Values.backend.enabled .Values.backend.persistence.config.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "openrag.fullname" . }}-config
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.backend.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.backend.persistence.config.accessMode }}
{{- if .Values.backend.persistence.config.storageClass }}
storageClassName: {{ .Values.backend.persistence.config.storageClass | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.backend.persistence.config.size }}
{{- end }}


@@ -1,18 +0,0 @@
{{- if and .Values.backend.enabled .Values.backend.persistence.documents.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "openrag.fullname" . }}-documents
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.backend.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.backend.persistence.documents.accessMode }}
{{- if .Values.backend.persistence.documents.storageClass }}
storageClassName: {{ .Values.backend.persistence.documents.storageClass | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.backend.persistence.documents.size }}
{{- end }}


@@ -1,18 +0,0 @@
{{- if and .Values.backend.enabled .Values.backend.persistence.keys.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "openrag.fullname" . }}-keys
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.backend.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.backend.persistence.keys.accessMode }}
{{- if .Values.backend.persistence.keys.storageClass }}
storageClassName: {{ .Values.backend.persistence.keys.storageClass | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.backend.persistence.keys.size }}
{{- end }}


@@ -1,18 +0,0 @@
{{- if and .Values.langflow.enabled .Values.langflow.persistence.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "openrag.fullname" . }}-langflow
namespace: {{ include "openrag.namespace" . }}
labels:
{{- include "openrag.langflow.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.langflow.persistence.accessMode }}
{{- if .Values.langflow.persistence.storageClass }}
storageClassName: {{ .Values.langflow.persistence.storageClass | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.langflow.persistence.size }}
{{- end }}


@@ -1,408 +0,0 @@
# OpenRAG Helm Chart Values
# This chart deploys OpenRAG with an external OpenSearch SaaS connection
# Override names
nameOverride: ""
fullnameOverride: ""
# Global settings
global:
# Tenant identification - used for resource naming and namespace
tenant:
name: "" # Required for multi-tenant: tenant identifier (e.g., "acme")
namespace: "" # Optional: override namespace (defaults to tenant name or release namespace)
# Image settings
imageRegistry: "langflowai"
imagePullPolicy: IfNotPresent
imageTag: "latest" # Override with specific version in production
imagePullSecrets: []
# External OpenSearch SaaS connection (OpenSearch is NOT deployed by this chart)
opensearch:
host: "" # Required: OpenSearch SaaS endpoint (e.g., "my-cluster.us-east-1.es.amazonaws.com")
port: 443 # Default HTTPS port for managed OpenSearch
scheme: "https" # https for production SaaS
username: "admin" # OpenSearch username
password: "" # OpenSearch password (stored in secret)
# Shared OAuth credentials (same across all tenants)
oauth:
google:
enabled: false
clientId: "" # Google OAuth client ID
clientSecret: "" # Google OAuth client secret
microsoft:
enabled: false
clientId: "" # Microsoft Graph OAuth client ID
clientSecret: "" # Microsoft Graph OAuth client secret
# ============================================================================
# Langflow Configuration
# ============================================================================
langflow:
enabled: true
image:
repository: langflowai/openrag-langflow
tag: "" # Uses global.imageTag if empty
# Single pod - vertical scaling only (SQLite requires single writer)
replicaCount: 1
# Resource requests/limits for vertical scaling
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "4"
memory: "8Gi"
# Persistence for SQLite DB and flows
persistence:
enabled: true
storageClass: "" # Empty uses cluster default
accessMode: ReadWriteOnce
size: 10Gi
mountPath: /app/data
flowsSubPath: flows
dbSubPath: langflow.db
# Flow configuration (UUIDs for Langflow workflows)
flows:
loadDefaults: true # Load default OpenRAG flows on first deployment
chatFlowId: "1098eea1-6649-4e1d-aed1-b77249fb8dd0"
ingestFlowId: "5488df7c-b93f-4f87-a446-b67028bc0813"
urlIngestFlowId: "72c3d17c-2dac-4a73-b48a-6518473d7830"
nudgesFlowId: "ebc01d31-1976-46ce-a385-b0240327226c"
loadPath: /app/flows
# Authentication settings
auth:
autoLogin: false
superuser: "admin" # Langflow superuser username
superuserPassword: "" # Langflow superuser password (stored in secret)
secretKey: "" # Langflow secret key for JWT (stored in secret)
newUserIsActive: false
enableSuperuserCli: false
# Runtime settings
deactivateTracing: true
logLevel: "INFO" # DEBUG, INFO, WARNING, ERROR
# Variables to expose to flows
variablesToGetFromEnvironment: "JWT,OPENRAG-QUERY-FILTER,OPENSEARCH_PASSWORD,OWNER,OWNER_NAME,OWNER_EMAIL,CONNECTOR_TYPE,FILENAME,MIMETYPE,FILESIZE,SELECTED_EMBEDDING_MODEL,OPENAI_API_KEY,ANTHROPIC_API_KEY,WATSONX_API_KEY,WATSONX_ENDPOINT,WATSONX_PROJECT_ID,OLLAMA_BASE_URL"
# Probes
livenessProbe:
enabled: true
initialDelaySeconds: 60
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 3
readinessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
# Service configuration
service:
type: ClusterIP
port: 7860
# ============================================================================
# OpenRAG Backend Configuration
# ============================================================================
backend:
enabled: true
image:
repository: langflowai/openrag-backend
tag: "" # Uses global.imageTag if empty
# Single pod for vertical scaling
replicaCount: 1
# Resource requests/limits
resources:
requests:
cpu: "500m"
memory: "2Gi"
limits:
cpu: "4"
memory: "16Gi"
# Persistence for documents, keys, and config
persistence:
documents:
enabled: true
storageClass: ""
accessMode: ReadWriteOnce
size: 50Gi
mountPath: /app/openrag-documents
keys:
enabled: true
storageClass: ""
accessMode: ReadWriteOnce
size: 1Gi
mountPath: /app/keys
config:
enabled: true
storageClass: ""
accessMode: ReadWriteOnce
size: 1Gi
mountPath: /app/config
# Feature flags
features:
disableIngestWithLangflow: false # Set true to use traditional processor instead of Langflow
# Langflow public URL (for UI links to Langflow)
langflowPublicUrl: "" # e.g., "https://langflow.example.com"
# Webhook configuration for continuous ingestion
webhook:
enabled: false
  baseUrl: "" # DNS-routable URL for webhooks (e.g., ngrok URL)
# AWS credentials for S3 integration
aws:
enabled: false
accessKeyId: ""
secretAccessKey: ""
# Probes
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 3
readinessProbe:
enabled: true
initialDelaySeconds: 15
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
# Service configuration
service:
type: ClusterIP
port: 8000
# ============================================================================
# OpenRAG Frontend Configuration
# ============================================================================
frontend:
enabled: true
image:
repository: langflowai/openrag-frontend
tag: "" # Uses global.imageTag if empty
# Can be multiple replicas (stateless)
replicaCount: 2
# Resource requests/limits
resources:
requests:
cpu: "100m"
memory: "256Mi"
limits:
cpu: "1"
memory: "1Gi"
# Horizontal Pod Autoscaler
autoscaling:
enabled: false
minReplicas: 2
maxReplicas: 10
targetCPUUtilizationPercentage: 70
targetMemoryUtilizationPercentage: 80
# Probes
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
# Service configuration
service:
type: ClusterIP
port: 3000
# ============================================================================
# OpenSearch Dashboards Configuration (Optional)
# ============================================================================
dashboards:
  enabled: false # Enable only if Dashboards is available in your OpenSearch SaaS
image:
repository: opensearchproject/opensearch-dashboards
tag: "3.0.0"
replicaCount: 1
# Resource requests/limits
resources:
requests:
cpu: "100m"
memory: "512Mi"
limits:
cpu: "1"
memory: "2Gi"
# Probes
livenessProbe:
enabled: true
initialDelaySeconds: 60
periodSeconds: 30
readinessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
# Service configuration
service:
type: ClusterIP
port: 5601
# ============================================================================
# Ingress Configuration
# ============================================================================
ingress:
enabled: true
className: "nginx" # nginx, alb, traefik, etc.
# Annotations for ingress controller
annotations: {}
# For nginx:
# nginx.ingress.kubernetes.io/proxy-body-size: "100m"
# nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
# For AWS ALB:
# alb.ingress.kubernetes.io/scheme: internet-facing
# alb.ingress.kubernetes.io/target-type: ip
# Host configuration
hosts:
frontend:
host: "" # e.g., "openrag.example.com"
paths:
- path: /
pathType: Prefix
backend:
host: "" # e.g., "api.openrag.example.com"
paths:
- path: /
pathType: Prefix
langflow:
enabled: false # Optional: expose Langflow directly
host: "" # e.g., "langflow.openrag.example.com"
paths:
- path: /
pathType: Prefix
dashboards:
enabled: false # Only if dashboards.enabled is true
host: ""
paths:
- path: /
pathType: Prefix
# TLS configuration
tls:
enabled: false
# Use existing secret:
# secretName: "openrag-tls"
# Or use cert-manager:
certManager:
enabled: false
issuerRef:
name: "letsencrypt-prod"
kind: "ClusterIssuer"
# ============================================================================
# LLM Provider API Keys
# ============================================================================
llmProviders:
openai:
enabled: false
apiKey: "" # OpenAI API key (stored in secret)
anthropic:
enabled: false
apiKey: "" # Anthropic API key (stored in secret)
watsonx:
enabled: false
apiKey: "" # WatsonX API key (stored in secret)
endpoint: "https://us-south.ml.cloud.ibm.com"
projectId: "" # WatsonX project ID
ollama:
enabled: false
endpoint: "" # Ollama endpoint URL (e.g., "http://ollama:11434")
# ============================================================================
# Application Config (config.yaml contents)
# ============================================================================
appConfig:
agent:
llmModel: "claude-sonnet-4-5-20250929"
llmProvider: "anthropic"
# System prompt can be customized here
systemPrompt: "" # Leave empty to use default
knowledge:
chunkOverlap: 200
chunkSize: 1000
embeddingModel: "text-embedding-3-large"
embeddingProvider: "openai"
ocr: false
pictureDescriptions: false
tableStructure: true
# ============================================================================
# Service Account
# ============================================================================
serviceAccount:
create: true
name: ""
annotations: {}
# ============================================================================
# Pod Security
# ============================================================================
podSecurityContext:
fsGroup: 1000
runAsNonRoot: true
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: false
runAsUser: 1000
runAsGroup: 1000
# ============================================================================
# Node Placement
# ============================================================================
nodeSelector: {}
tolerations: []
affinity: {}
# ============================================================================
# Pod Disruption Budgets
# ============================================================================
podDisruptionBudget:
enabled: false
minAvailable: 1
# maxUnavailable: 1


@@ -21,7 +21,6 @@ from functools import partial
from starlette.applications import Starlette
from starlette.routing import Route
from starlette.responses import JSONResponse
# Set multiprocessing start method to 'spawn' for CUDA compatibility
multiprocessing.set_start_method("spawn", force=True)
@@ -457,24 +456,6 @@ async def _ingest_default_documents_langflow(services, file_paths):
file_count=len(file_paths),
)
async def opensearch_health_ready(request):
"""Readiness probe: verifies OpenSearch dependency is reachable."""
try:
# Fast check that the cluster is reachable/auth works
await asyncio.wait_for(clients.opensearch.info(), timeout=5.0)
return JSONResponse(
{"status": "ready", "dependencies": {"opensearch": "up"}},
status_code=200,
)
except Exception as e:
return JSONResponse(
{
"status": "not_ready",
"dependencies": {"opensearch": "down"},
"error": str(e),
},
status_code=503,
)
async def _ingest_default_documents_openrag(services, file_paths):
"""Ingest default documents using traditional OpenRAG processor."""
@@ -1167,11 +1148,6 @@ async def create_app():
),
methods=["GET"],
),
Route(
"/search/health",
opensearch_health_ready,
methods=["GET"],
),
# Models endpoints
Route(
"/models/openai",