OpenSearch in OpenRAG

OpenRAG uses OpenSearch for its vector-backed knowledge store. This is a specialized database for storing and retrieving embeddings, which helps your Agent efficiently find relevant information. OpenSearch provides powerful hybrid search capabilities with enterprise-grade security and multi-tenancy support.
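
For context, the sketch below shows roughly what a vector query against an OpenSearch k-NN index looks like, using the opensearch-py client. The connection settings, credentials, and field names are illustrative assumptions, not OpenRAG's actual configuration; OpenRAG manages its indexes and authentication for you.

```python
# Rough sketch of a k-NN vector query with the opensearch-py client.
# Connection details, credentials, and the "embedding"/"text" field names
# are assumptions for illustration; OpenRAG manages these internally.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),  # placeholder credentials
    use_ssl=True,
    verify_certs=False,
)

query_embedding = [0.1] * 1536  # normally produced by the embedding model

response = client.search(
    index="documents",  # OpenRAG stores ingested content in a documents index
    body={
        "size": 10,
        "query": {
            "knn": {
                "embedding": {  # assumed name of the vector field
                    "vector": query_embedding,
                    "k": 10,
                }
            }
        },
    },
)

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("text", ""))
```

Hybrid search layers a lexical (keyword) query on top of this vector match, so results benefit from both exact-term and semantic relevance.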

Authentication and document access

OpenRAG supports two authentication modes, determined by how you install OpenRAG. The mode you choose affects document access.

No-auth mode (Basic Setup): This mode uses a single anonymous JWT token for OpenSearch authentication, so documents uploaded to the documents index by one user are visible to all other users on the OpenRAG server.

OAuth mode (Advanced Setup): Each OpenRAG user is granted a JWT token, and each document is tagged with user ownership. Documents are filtered by user ownership, ensuring users only see documents they uploaded or have access to.
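
In OAuth mode, the ownership check can be pictured as an extra filter clause attached to every search. The sketch below is illustrative only; the owner field name and the enforcement mechanism (query-level filters versus OpenSearch security roles) are assumptions, not OpenRAG's confirmed implementation.

```python
# Illustrative only: a vector query restricted to one user's documents by
# combining the k-NN match with a filter on an assumed "owner" field.
query_embedding = [0.1] * 1536  # normally produced by the embedding model

user_scoped_query = {
    "size": 10,
    "query": {
        "bool": {
            "must": [
                {"knn": {"embedding": {"vector": query_embedding, "k": 10}}}
            ],
            "filter": [
                {"term": {"owner": "user-id-from-jwt"}}  # ownership tag from the user's token
            ],
        }
    },
}

# results = client.search(index="documents", body=user_scoped_query)
```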

Ingest knowledge

OpenRAG supports knowledge ingestion through direct file uploads and OAuth connectors. To configure the knowledge ingestion pipeline parameters, see Docling Ingestion.

Direct file ingestion

The Knowledge Ingest flow uses Langflow's File component to load files from your local machine, split them, and embed them into the OpenSearch database.

By default, the ./documents folder in your OpenRAG project directory is mounted to the /app/documents/ directory inside the Docker container. Files added on the host or in the container are visible in both locations. To configure this location, modify the Documents Paths variable in either the TUI's Advanced Setup menu or the .env file used by Docker Compose.
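
As a rough picture of what the Knowledge Ingest flow does with these files, the sketch below loads files from the mounted folder, splits them into overlapping chunks, embeds each chunk, and indexes the results. It is a simplified stand-in for the Langflow flow: the chunk sizes, field names, and embedding client are assumptions, and the real chunking parameters are controlled through the Docling Ingestion settings.

```python
# Simplified stand-in for the Knowledge Ingest flow: load files from the
# mounted documents folder, split them into overlapping chunks, embed each
# chunk, and index it. Chunk sizes and field names are illustrative.
from pathlib import Path
from openai import OpenAI
from opensearchpy import OpenSearch

DOCS_DIR = Path("/app/documents")   # container-side mount of ./documents
CHUNK_SIZE, OVERLAP = 1000, 200     # assumed values, not OpenRAG defaults

def split_text(text: str) -> list[str]:
    step = CHUNK_SIZE - OVERLAP
    return [text[i:i + CHUNK_SIZE] for i in range(0, len(text), step)]

embedder = OpenAI()  # requires OPENAI_API_KEY in the environment
search = OpenSearch(hosts=[{"host": "localhost", "port": 9200}],
                    http_auth=("admin", "admin"), use_ssl=True, verify_certs=False)

for path in DOCS_DIR.glob("*.txt"):
    chunks = split_text(path.read_text())
    if not chunks:
        continue
    result = embedder.embeddings.create(
        model="text-embedding-3-small",  # OpenRAG's default embedding model
        input=chunks,
    )
    for chunk, item in zip(chunks, result.data):
        search.index(index="documents", body={
            "filename": path.name,       # assumed field names
            "text": chunk,
            "embedding": item.embedding,
        })
```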

To load and process a single file from the mapped location, click Add Knowledge, and then click File. The file is loaded into your OpenSearch database, and appears on the Knowledge page.

To load and process a directory from the mapped location, click Add Knowledge, and then click Folder. The files are loaded into your OpenSearch database, and appear on the Knowledge page.

To add files directly to a chat session, click in the chat input and select the files you want to include. Files added this way are processed and made available to the agent for the current conversation, and are not permanently added to the knowledge base.

Ingest files through OAuth connectors

OpenRAG supports Google Drive, OneDrive, and SharePoint as OAuth connectors for seamless document synchronization.

OAuth integration allows individual users to connect their personal cloud storage accounts to OpenRAG. Each user must separately authorize OpenRAG to access their own cloud storage files. When a user connects a cloud service, they are redirected to authenticate with that service provider and grant OpenRAG permission to sync documents from their personal cloud storage.

Before users can connect their cloud storage accounts, you must configure OAuth credentials in OpenRAG. This requires registering OpenRAG as an OAuth application with each cloud provider and obtaining a client ID and client secret for each service you want to support.

To add an OAuth connector to OpenRAG, do the following. This example uses Google OAuth; if you want to use another provider, add that provider's client ID and secret instead.

  1. If OpenRAG is running, stop it with Status > Stop Services.
  2. Click Advanced Setup.
  3. Add the OAuth provider's client ID and client secret in the Advanced Setup menu.
  4. Click Save Configuration. The TUI generates a new .env file with your OAuth values.
  5. Click Start Container Services.

The OpenRAG frontend at http://localhost:3000 now redirects to your OAuth provider's login page. After successful authentication, OpenRAG opens with the required scopes for your connected storage.
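
For reference, the redirect is a standard OAuth 2.0 authorization request. The sketch below shows what such a request looks like for Google with a Drive read-only scope; the callback path and the exact scopes OpenRAG requests are assumptions, and OpenRAG builds this URL for you.

```python
# Illustration of the OAuth 2.0 authorization request behind the redirect.
# The client ID, callback path, and scopes shown here are assumptions;
# OpenRAG constructs the real request from your configuration.
from urllib.parse import urlencode

params = {
    "client_id": "<your-google-oauth-client-id>",
    "redirect_uri": "http://localhost:3000/oauth/callback",  # assumed callback path
    "response_type": "code",
    "scope": "openid email https://www.googleapis.com/auth/drive.readonly",
    "access_type": "offline",  # ask for a refresh token so syncs can run later
}

auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(auth_url)  # the user signs in here and is redirected back to OpenRAG
```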

To add knowledge from an OAuth-connected storage provider, do the following:

  1. Click Add Knowledge, and then select the storage provider, for example, Google Drive. The Add Cloud Knowledge page opens.
  2. To add files or folders from the connected storage, click Add Files. Select the files or folders you want and click Select. You can select multiple files.
  3. When your files are selected, click Ingest Files. The ingestion process can take some time depending on the size of your documents.
  4. When ingestion is complete, your documents are available on the Knowledge page.

If ingestion fails, click Status to view the logged error.

Monitor ingestion tasks

When you upload files, process folders, or sync documents, OpenRAG processes them as background tasks. A badge appears on the Tasks icon when there are active tasks running. To open the Tasks menu, click Tasks.

Active Tasks shows tasks that are currently processing. A Pending task is queued and waiting to start, a Running task is actively processing files, and a Processing task is performing ingestion operations. For each active task, you can find the task ID, start time, duration, the number of files processed so far, and the total files.

You can cancel active tasks by clicking Cancel. Canceling a task stops processing immediately and marks the task as failed.
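
The task details described above can be pictured as a record like the following. This is an illustrative model only, not OpenRAG's actual task API schema.

```python
# Illustrative model of an ingestion task; field and status names are
# assumptions based on the UI, not OpenRAG's actual task API schema.
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class TaskStatus(Enum):
    PENDING = "pending"        # queued and waiting to start
    RUNNING = "running"        # actively processing files
    PROCESSING = "processing"  # performing ingestion operations
    COMPLETED = "completed"
    FAILED = "failed"          # also the final state of a canceled task

@dataclass
class IngestionTask:
    task_id: str
    status: TaskStatus
    started_at: datetime
    duration_seconds: float
    files_processed: int
    total_files: int
```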

Explore knowledge

The Knowledge page lists the documents OpenRAG has ingested into the OpenSearch vector database's documents index.

To explore your current knowledge, click Knowledge. Click a document to display the chunks created when that document was split and embedded into the vector database.

Documents are processed with the default Knowledge Ingest flow, so if you want to split your documents differently, edit the Knowledge Ingest flow.

All flows included with OpenRAG are designed to be modular, performant, and provider-agnostic. To modify a flow, click Settings, and click Edit in Langflow. OpenRAG's visual editor is based on the Langflow visual editor, so you can edit your flows to match your specific use case.

Create knowledge filters

OpenRAG includes a knowledge filter system for organizing and managing document collections. Knowledge filters are saved search configurations that allow you to create custom views of your document collection. They store search queries, filter criteria, and display settings that can be reused across different parts of OpenRAG.

Knowledge filters help agents work more efficiently with large document collections by focusing their context on relevant document sets.

To create a knowledge filter, do the following:

  1. Click Knowledge, and then click Knowledge Filters. The Knowledge Filter pane appears.

  2. Enter a Name and Description, and then click Create Filter. A new filter is created with default settings that match all documents.

  3. To modify the filter, click Knowledge, and then click your new filter to edit it in the Knowledge Filter pane.

    The following filter options are configurable. A sketch of how these options might map to an OpenSearch query appears after this procedure.

    • Search Query: Enter text for semantic search, such as "financial reports from Q4".
    • Data Sources: Select specific data sources or folders to include.
    • Document Types: Filter by file type.
    • Owners: Filter by who uploaded the documents.
    • Connectors: Filter by connector types, such as local upload or Google Drive.
    • Response Limit: Set the maximum number of results. The default is 10.
    • Score Threshold: Set the minimum relevance score. The default is 0.
  4. When you're done editing the filter, click Update Filter.

  5. To apply the filter to OpenRAG globally, click Knowledge, and then select the filter to apply. Only one filter can be enabled at a time.

    To apply the filter to a single chat session, select the filter in the Chat window.

    To delete the filter, in the Knowledge Filter pane, click Delete Filter.
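
As a rough illustration of how a saved filter's criteria could narrow a search, the sketch below maps the options above onto an OpenSearch query body. The field names and query shape are assumptions; OpenRAG's actual filter implementation may differ.

```python
# Illustrative mapping from knowledge filter settings to an OpenSearch query.
# Field names ("file_type", "owner", "connector") are assumptions.
def filter_to_query(search_embedding, *, document_types=None, owners=None,
                    connectors=None, response_limit=10, score_threshold=0.0):
    filters = []
    if document_types:
        filters.append({"terms": {"file_type": document_types}})
    if owners:
        filters.append({"terms": {"owner": owners}})
    if connectors:
        filters.append({"terms": {"connector": connectors}})

    return {
        "size": response_limit,        # Response Limit (default 10)
        "min_score": score_threshold,  # Score Threshold (default 0)
        "query": {
            "bool": {
                "must": [{"knn": {"embedding": {"vector": search_embedding,
                                                "k": response_limit}}}],
                "filter": filters,
            }
        },
    }

# Example: PDFs ingested through Google Drive, top 5 results.
query_body = filter_to_query([0.1] * 1536, document_types=["pdf"],
                             connectors=["google_drive"], response_limit=5)
```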

OpenRAG default configuration

OpenRAG automatically detects and configures the correct vector dimensions for embedding models, ensuring optimal search performance and compatibility.

The complete list of supported models is available at models_service.py in the OpenRAG repository.

You can use custom embedding models by specifying them in your configuration.

If you use an unknown embedding model, OpenRAG automatically falls back to 1536 dimensions and logs a warning. The system continues to work, but search quality can be affected if the actual model dimensions differ from 1536.

The default embedding dimension is 1536 and the default model is text-embedding-3-small.

For models with known vector dimensions, see settings.py in the OpenRAG repository.
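
A minimal sketch of the fallback behavior described above, assuming a small lookup table (the authoritative mapping lives in settings.py and models_service.py):

```python
# Minimal sketch of the embedding-dimension fallback. The lookup table is an
# illustrative subset; see settings.py and models_service.py for the real mapping.
import logging

logger = logging.getLogger(__name__)

KNOWN_DIMENSIONS = {
    "text-embedding-3-small": 1536,  # OpenRAG's default model
    "text-embedding-3-large": 3072,
    "text-embedding-ada-002": 1536,
}

DEFAULT_DIMENSION = 1536

def resolve_dimension(model_name: str) -> int:
    try:
        return KNOWN_DIMENSIONS[model_name]
    except KeyError:
        logger.warning(
            "Unknown embedding model %r; falling back to %d dimensions. "
            "Search quality may suffer if the model's true dimension differs.",
            model_name, DEFAULT_DIMENSION,
        )
        return DEFAULT_DIMENSION
```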