Configure knowledge

OpenRAG includes a built-in OpenSearch instance that serves as the underlying datastore for your knowledge (documents). This specialized database is used to store and retrieve your documents and the associated vector data (embeddings).

The documents in your OpenSearch knowledge base provide specialized context that supplements the general knowledge of the language model you select when you install OpenRAG or edit a flow.

You can upload documents from a variety of sources to populate your knowledge base with unique content, such as your own company documents, research papers, or websites. Documents are processed through OpenRAG's knowledge ingestion flows with Docling.

Then, the OpenRAG Chat can run similarity searches against your OpenSearch database to retrieve relevant information and generate context-aware responses.

You can configure how documents are ingested and how the Chat interacts with your knowledge base.
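Similarity search compares the embedding vector of a chat query against the stored chunk embeddings and returns the closest matches. As a toy illustration of the idea (not OpenRAG's actual scoring, which happens inside OpenSearch), here is a cosine-similarity ranking over tiny example vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; real models produce far more dimensions
# (for example, 1536 for text-embedding-3-small).
query = [1.0, 0.0, 1.0]
chunks = {
    "chunk-a": [0.9, 0.1, 0.8],  # points in nearly the same direction as the query
    "chunk-b": [0.0, 1.0, 0.0],  # orthogonal to the query
}

# Rank chunks from most to least similar to the query.
ranked = sorted(chunks, key=lambda c: cosine_similarity(query, chunks[c]), reverse=True)
print(ranked[0])  # chunk-a
```

Chunks whose vectors point in nearly the same direction as the query vector score close to 1 and are retrieved as context for the response.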

Browse knowledge

The Knowledge page lists the documents OpenRAG has ingested into your OpenSearch database, specifically in an OpenSearch index named documents.

To explore the raw contents of your knowledge base, click Knowledge to get a list of all ingested documents.

Inspect knowledge

For each document, the Knowledge page provides the following information:

  • Source: Name of the ingested content, such as the file name.

  • Size: Size of the ingested document.

  • Type: File type of the ingested document.

  • Owner: User that uploaded the document.

    In no-auth mode, all documents are attributed to Anonymous User because there is no distinct document ownership or unique JWTs. For more control over document ownership and visibility, use OAuth mode. For more information, see OpenSearch authentication and document access.

  • Chunks: Number of chunks created by splitting the document during ingestion.

    Click a document to view the individual chunks and technical details related to chunking. If the chunks seem incorrect or incomplete, see Troubleshoot ingestion.

  • Avg score: Average similarity score across all chunks of the document.

    If you search the knowledge base, the Avg score column shows the similarity score for your search query or filter.

  • Embedding model and Dimensions: The embedding model and dimensions used to embed the chunks.

  • Status: Status of document ingestion. If ingestion is complete and successful, then the status is Active. For more information, see Monitor ingestion.

Search knowledge

You can use the search field on the Knowledge page to find documents using semantic search and knowledge filters:

  • To search all documents, enter a search string in the search field, and then press Enter.

  • To apply a knowledge filter, select the filter from the Knowledge Filters list. The filter settings pane opens, and the filter appears in the search field. To remove the filter, close the filter settings pane or clear the filter from the search field.

You can use the filter alone or in combination with a search string. If a knowledge filter has a Search Query, that query is applied in addition to any text string you enter in the search field.

Only one filter can be applied at a time.

Default documents

By default, OpenRAG includes some initial documents about OpenRAG. These documents are ingested automatically during the application onboarding process.

You can use these documents to ask OpenRAG about itself, or to test the Chat feature before uploading your own documents.

If you delete these documents, you won't be able to ask OpenRAG about itself and its own functionality. It is recommended that you keep these documents and use filters to separate them from your other knowledge. An OpenRAG Docs filter is created automatically for these documents.

OpenSearch authentication and document access

When you install OpenRAG, you provide the initial configuration values for your OpenRAG services, including authentication credentials for OpenSearch and OAuth connectors. This configuration determines how OpenRAG authenticates with your deployment's OpenSearch instance, and it controls user access to documents in your knowledge base:

  • No-auth mode: If you select Basic Setup in the TUI, or your OpenRAG .env file doesn't include OAuth credentials, then the OpenRAG OpenSearch instance runs in no-auth mode.

    This mode uses a single anonymous JWT for OpenSearch authentication. There is no differentiation between users: all users that access your OpenRAG instance can access all documents uploaded to your knowledge base.

  • OAuth mode: If you select Advanced Setup in the TUI, or your OpenRAG .env file includes OAuth credentials, then the OpenRAG OpenSearch instance runs in OAuth mode.

    This mode uses a unique JWT for each OpenRAG user, and each document is tagged with its owner. Documents are filtered by owner; users see only the documents that they uploaded or have access to through their cloud storage accounts.

    To enable OAuth mode after initial setup, see Ingest files with OAuth connectors.
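Conceptually, OAuth mode's ownership filtering behaves like the sketch below. This is a toy model only; OpenRAG actually enforces access through per-user JWTs and OpenSearch-side filtering, and the field names here are illustrative:

```python
def visible_documents(documents, user):
    """Return only the documents a user owns or has been granted access to.

    Toy model of OAuth-mode filtering; field names ("owner", "shared_with")
    are illustrative, not OpenRAG's actual schema.
    """
    return [
        d for d in documents
        if d["owner"] == user or user in d.get("shared_with", [])
    ]

docs = [
    {"name": "roadmap.pdf", "owner": "alice"},
    {"name": "handbook.md", "owner": "bob", "shared_with": ["alice"]},
    {"name": "notes.txt", "owner": "bob"},
]

print([d["name"] for d in visible_documents(docs, "alice")])
# ['roadmap.pdf', 'handbook.md']
```

In no-auth mode there is effectively one shared anonymous owner, so every user sees every document.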

OpenSearch indexes

An OpenSearch index is a collection of documents in an OpenSearch database.

By default, all documents you upload to your OpenRAG knowledge base are stored in an index named documents.

It is possible to change the index name by editing the ingestion flow. However, this can impact dependent processes, such as filters and the Chat, that reference the documents index by default. Make sure you edit other flows as needed so that all processes use the same index name.

If you encounter errors or unexpected behavior after changing the index name, you can revert the flows to their original configuration, or delete knowledge to clear the existing documents from your knowledge base.

Knowledge ingestion settings

warning

Knowledge ingestion settings apply to documents you upload after making the changes. Documents uploaded before changing these settings aren't reprocessed.

After changing knowledge ingestion settings, determine whether you need to reupload any documents so that they are consistent with the new settings.

It isn't always necessary to reupload documents after changing knowledge ingestion settings. For example, it is common to upload some documents with OCR enabled and others with OCR disabled.

If needed, you can use filters to separate documents that you uploaded with different settings, such as different embedding models.

Set the embedding model and dimensions

When you install OpenRAG, you select at least one embedding model during the application onboarding process. OpenRAG automatically detects and configures the appropriate vector dimensions for your selected embedding model, so that stored embeddings and search queries use compatible dimensions.

In the OpenRAG repository, you can find the complete list of supported models in models_service.py and the corresponding vector dimensions in settings.py.

During the application onboarding process, you can select from the supported models. The default embedding dimension is 1536, and the default model is OpenAI's text-embedding-3-small.

If you want to use an unsupported model, you must manually set the model in your OpenRAG .env file. If you use an unsupported embedding model that doesn't have defined dimensions in settings.py, then OpenRAG falls back to the default dimensions (1536) and logs a warning. OpenRAG's OpenSearch instance and flows continue to work, but similarity search quality can be affected if the actual model dimensions aren't 1536.
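The fallback behavior amounts to a lookup with a default. The mapping below is a small illustrative subset, not the authoritative list, which lives in settings.py in the OpenRAG repository:

```python
DEFAULT_DIMENSIONS = 1536

# Illustrative subset only; see settings.py in the OpenRAG repository
# for the real mapping of supported models to vector dimensions.
KNOWN_DIMENSIONS = {
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
}

def dimensions_for(model: str) -> int:
    """Return the vector dimensions for a model, falling back to the default."""
    if model not in KNOWN_DIMENSIONS:
        # OpenRAG logs a warning in this case and continues with the default.
        print(f"warning: no dimensions defined for {model!r}; "
              f"falling back to {DEFAULT_DIMENSIONS}")
    return KNOWN_DIMENSIONS.get(model, DEFAULT_DIMENSIONS)
```

If the unsupported model actually produces vectors of a different size than 1536, the stored embeddings and query embeddings no longer line up well, which is why similarity search quality can degrade.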

To change the embedding model after onboarding, modify the embedding model configuration on the OpenRAG Settings page or in your OpenRAG .env file. This ensures that all relevant OpenRAG flows are updated to use the new embedding model configuration.

If you edit these settings in the .env file, you must stop and restart the OpenRAG containers to apply the changes.

Set Docling parameters

OpenRAG uses Docling for document ingestion because it supports many file formats, processes tables and images well, and performs efficiently.

When you upload documents, Docling processes the files, splits them into chunks, and stores them as separate, structured documents in your OpenSearch knowledge base.

Select a Docling implementation

You can use either Docling Serve or OpenRAG's built-in Docling ingestion pipeline to process documents.

  • Docling Serve ingestion: By default, OpenRAG uses Docling Serve. It starts a local docling serve process, and then runs Docling ingestion through the Docling Serve API.

    To use a remote docling serve instance or your own local instance, set DOCLING_SERVE_URL=http://HOST_IP:5001 in your OpenRAG .env file, where HOST_IP is the address of your instance. The service must run on port 5001.

  • Built-in Docling ingestion: If you want to use OpenRAG's built-in Docling ingestion pipeline instead of the separate Docling Serve service, set DISABLE_INGEST_WITH_LANGFLOW=true in your OpenRAG .env file. The built-in pipeline uses the Docling processor directly instead of through the Docling Serve API. For the underlying functionality, see processors.py in the OpenRAG repository.
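For example, a .env fragment selecting one implementation or the other might look like this (HOST_IP is a placeholder for your Docling Serve host; set only one of the two options):

```shell
# Option 1: point OpenRAG at a remote or self-managed Docling Serve
# instance. The service must listen on port 5001.
DOCLING_SERVE_URL=http://HOST_IP:5001

# Option 2: bypass Docling Serve and use the built-in ingestion pipeline.
# DISABLE_INGEST_WITH_LANGFLOW=true
```

After editing the .env file, restart the OpenRAG containers to apply the change.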

Configure Docling ingestion settings

To modify the Docling document processing and embedding parameters, click Settings in OpenRAG, and then find the Knowledge Ingest section.

tip

The TUI warns you if docling serve isn't running. For information about starting and stopping OpenRAG native services, like Docling, see Manage OpenRAG services.

You can edit the following parameters:

  • Embedding model: Select the model to use to generate vector embeddings for your documents.

    This is initially set during installation. The recommended way to change this setting is in the OpenRAG Settings or your OpenRAG .env file. This ensures that all relevant OpenRAG flows are updated to use the new embedding model configuration.

    If you uploaded documents before changing the embedding model, you can create filters to separate documents embedded with different models, or you can reupload all documents to regenerate embeddings with the new model. If you use multiple embedding models, similarity search (in the Chat) can take longer because it searches each model's embeddings separately.

  • Chunk size: Set the number of characters for each text chunk when breaking down a file. Larger chunks yield more context per chunk, but can include irrelevant information. Smaller chunks yield more precise semantic search, but can lack context. The default value is 1000 characters, which is usually a good balance between context and precision.

  • Chunk overlap: Set the number of characters to overlap over chunk boundaries. Use larger overlap values for documents where context is most important. Use smaller overlap values for simpler documents or when optimization is most important. The default value is 200 characters, which represents an overlap of 20 percent if the Chunk size is 1000. This is suitable for general use. For faster processing, decrease the overlap to approximately 10 percent. For more complex documents where you need to preserve context across chunks, increase it to approximately 40 percent.

  • Table structure: Enables Docling's DocumentConverter tool for parsing tables. Instead of treating tables as plain text, tables are output as structured table data with preserved relationships and metadata. This option is enabled by default.

  • OCR: Enables Optical Character Recognition (OCR) processing when extracting text from images and ingesting scanned documents. When OCR is disabled, images are ignored and not processed, which lets Docling's DocumentConverter process text-based documents faster.

    This option is disabled by default. Enabling OCR can slow ingestion performance.

    If OpenRAG detects that the local machine is running on macOS, OpenRAG uses the ocrmac OCR engine. Other platforms use easyocr.

  • Picture descriptions: Only applicable if OCR is enabled. Adds image descriptions generated by the SmolVLM-256M-Instruct model. Enabling picture descriptions can slow ingestion performance.
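The Chunk size and Chunk overlap settings above can be pictured as a simple character-based splitter. This is a simplified sketch for intuition only; Docling's actual chunker is structure-aware and more sophisticated:

```python
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200):
    """Split text into chunks of up to chunk_size characters, where each
    chunk repeats the last chunk_overlap characters of the previous one.

    Simplified character-based sketch; not Docling's actual chunker.
    """
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    # Each new chunk starts (chunk_size - chunk_overlap) characters
    # after the previous one, so consecutive chunks share chunk_overlap
    # characters of context.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# 3000 characters with the default settings: chunks start at 0, 800,
# 1600, and 2400, each sharing 200 characters with its predecessor.
chunks = chunk_text("abcdefghij" * 300, chunk_size=1000, chunk_overlap=200)
print(len(chunks))  # 4
```

With the defaults, the 200-character overlap is 20 percent of each 1000-character chunk, which is how the percentages in the Chunk overlap description above are derived.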

Set the local documents path

The default path for local uploads is ~/.openrag/documents. This is mounted to the /app/openrag-documents/ directory inside the OpenRAG container. Files added to the host or container directory are visible in both locations.

To change this location, modify the Documents Paths variable in either the Basic/Advanced Setup menu or in your OpenRAG .env file.

Delete knowledge

warning

This is a destructive operation that cannot be undone.

To delete documents from your knowledge base, click Knowledge, use the checkboxes to select one or more documents, and then click Delete. Selecting the checkbox at the top of the list selects all documents; clicking Delete then removes your entire knowledge base.

To delete an individual document, you can also click More next to that document, and then select Delete.

To completely clear your entire knowledge base and OpenSearch index, reset your OpenRAG containers or reinstall OpenRAG.

See also