OpenRAG leverages Langflow's Agent component to power the OpenRAG OpenSearch Agent flow.
Flows in Langflow are functional representations of application workflows, with multiple component nodes connected as single steps in a workflow.
In the OpenRAG OpenSearch Agent flow, components like the Langflow Agent component and OpenSearch component are connected to intelligently chat with your knowledge by embedding your query, comparing it to the vector database embeddings, and generating a response with the LLM.
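The embed-compare-generate loop can be sketched in plain Python. This is a minimal illustration only: the toy two-dimensional vectors and the `retrieve` helper are assumptions for the sake of the example, not OpenRAG's actual API, and in OpenRAG the index lives in OpenSearch rather than a Python list.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, index, top_k=2):
    # Compare the query embedding against every stored document embedding
    # and return the best-matching chunks, as a vector store would.
    scored = sorted(index, key=lambda d: cosine(query_vec, d["vector"]), reverse=True)
    return [d["text"] for d in scored[:top_k]]

# Toy index; in OpenRAG these records live in OpenSearch.
index = [
    {"text": "Flows connect components into workflows.", "vector": [1.0, 0.0]},
    {"text": "Agents decide when to query the store.", "vector": [0.0, 1.0]},
    {"text": "Unrelated note.", "vector": [-1.0, 0.0]},
]

context = retrieve([0.9, 0.1], index)
prompt = "Answer using this context:\n" + "\n".join(context)
```

The retrieved chunks are then handed to the LLM as context, which is the "generating a response" step of the loop.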

The Agent component shines here in its ability to decide not only what query should be sent, but also when a query is necessary to solve the problem at hand.
Agents extend Large Language Models (LLMs) by integrating tools, which are functions that provide additional context and enable autonomous task execution. These integrations make agents more specialized and powerful than standalone LLMs.
Whereas an LLM might generate acceptable, inert responses to general queries and tasks, an agent can leverage the integrated context and tools to provide more relevant responses and even take action. For example, you might create an agent that can access your company's documentation, repositories, and other resources to help your team with tasks that require knowledge of your specific products, customers, and code.
Agents use LLMs as a reasoning engine to process input, determine which actions to take to address the query, and then generate a response. The response could be a typical text-based LLM response, or it could involve an action, like editing a file, running a script, or calling an external API.
In an agentic context, tools are functions that the agent can run to perform tasks or access external resources. A function is wrapped as a Tool object with a common interface that the agent understands. Agents become aware of tools through tool registration, which is when the agent is provided a list of available tools typically at agent initialization. The Tool object's description tells the agent what the tool can do so that it can decide whether the tool is appropriate for a given request.
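The wrap-and-register pattern described above can be sketched generically. This is an illustrative Python sketch of the concept, not Langflow's actual `Tool` interface; the class names, fields, and the `search_docs` function are assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    # A function wrapped with the metadata the agent reasons over:
    # the description tells the agent what the tool can do.
    name: str
    description: str
    func: Callable[[str], str]

def search_docs(query: str) -> str:
    # Hypothetical tool function: search product documentation.
    return f"results for {query!r}"

class Agent:
    def __init__(self, tools: list[Tool]):
        # Tool registration: the agent receives its list of available
        # tools at initialization.
        self.tools = {t.name: t for t in tools}

    def run(self, request: str, tool_name: str) -> str:
        # A real agent asks the LLM to pick a tool by matching the request
        # against tool descriptions; here the caller picks one for brevity.
        return self.tools[tool_name].func(request)

agent = Agent([Tool("search_docs", "Search the product documentation.", search_docs)])
```

The key design point is the common interface: because every tool exposes the same shape (name, description, callable), the agent can reason over descriptions without knowing anything about the underlying functions.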
If you've chatted with your knowledge in OpenRAG, you've already experienced the OpenRAG OpenSearch Agent chat flow. To switch OpenRAG over to the Langflow visual editor and view the OpenRAG OpenSearch Agent flow, click Settings, and then click Edit in Langflow.
This flow contains eight components connected together to chat with your data, including OPENRAG-QUERY-FILTER.
This is the Knowledge filter, which controls which knowledge sources are searched.
All flows included with OpenRAG are designed to be modular, performant, and provider-agnostic.
To modify a flow, click Settings, and click Edit in Langflow.
OpenRAG warns you that this discards all custom settings. Click Restore.
Langflow offers component bundles to integrate with many popular vector stores, AI/ML providers, and search APIs.
OpenRAG supports knowledge ingestion through direct file uploads and OAuth connectors.
To configure the knowledge ingestion pipeline parameters, see Docling Ingestion.
The Knowledge Ingest flow uses Langflow's File component to split and embed files loaded from your local machine into the OpenSearch database.
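The split-and-embed step can be pictured as follows. This is a simplified Python sketch: the character-based chunker and the `fake_embed` stand-in are assumptions for illustration, not the File component's real splitter or the configured embedding model.

```python
def split_text(text: str, chunk_size: int = 40) -> list[str]:
    # Split a document into fixed-size chunks; real splitters are
    # usually token- or sentence-aware rather than character-based.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def fake_embed(chunk: str) -> list[float]:
    # Stand-in for the embedding model: a real model returns a
    # high-dimensional vector (e.g. 1536 floats).
    return [float(len(chunk)), float(sum(map(ord, chunk)) % 97)]

def ingest(text: str) -> list[dict]:
    # Each chunk is stored alongside its vector, ready for
    # similarity search in the vector database.
    return [{"text": c, "vector": fake_embed(c)} for c in split_text(text)]

records = ingest("OpenRAG ingests local files into OpenSearch for retrieval.")
```

Each resulting record pairs a text chunk with its embedding, which is the shape the OpenSearch index needs for vector search.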
The default path to your local folder is mounted from the ./documents folder in your OpenRAG project directory to the /app/documents/ directory inside the Docker container. Files added to the host or the container will be visible in both locations. To configure this location, modify the Documents Paths variable in either the TUI's Advanced Setup menu or in the .env used by Docker Compose.
All flows included with OpenRAG are designed to be modular, performant, and provider-agnostic. To modify a flow, click Settings, and click Edit in Langflow. OpenRAG's visual editor is based on the Langflow visual editor, so you can edit your flows to match your specific use case.
OpenRAG includes a knowledge filter system for organizing and managing document collections. Knowledge filters are saved search configurations that allow you to create custom views of your document collection. They store search queries, filter criteria, and display settings that can be reused across different parts of OpenRAG.
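Conceptually, a saved knowledge filter can be modeled as a small reusable object. The sketch below is an illustration of the idea in Python, not OpenRAG's internal schema; the class and field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeFilter:
    # A saved search configuration: a query plus filter criteria,
    # reusable anywhere a filtered view of the collection is needed.
    name: str
    query: str = ""
    sources: set[str] = field(default_factory=set)

    def matches(self, doc: dict) -> bool:
        # Empty query/sources act as wildcards, so a default filter
        # matches everything.
        if self.query and self.query.lower() not in doc["text"].lower():
            return False
        if self.sources and doc["source"] not in self.sources:
            return False
        return True

docs = [
    {"text": "Billing FAQ", "source": "uploads"},
    {"text": "API changelog", "source": "github"},
]
default = KnowledgeFilter(name="all")                                  # matches everything
api_only = KnowledgeFilter(name="api", query="api", sources={"github"})
view = [d for d in docs if api_only.matches(d)]
```

Because the filter is a named, stored object rather than an ad-hoc query, the same view can be applied consistently across different parts of the application.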
A new filter is created with default settings that match everything.
You can use custom embedding models by specifying them in your configuration.
If you use an unknown embedding model, OpenRAG will automatically fall back to 1536 dimensions and log a warning. The system will continue to work, but search quality may be affected if the actual model dimensions differ from 1536.
The default embedding dimension is 1536 and the default model is text-embedding-3-small.
For models with known vector dimensions, see settings.py in the OpenRAG repository.
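The fallback behavior can be sketched like this. The function below is an illustration of the logic described above, not OpenRAG's actual `settings.py` code, and the dimension table is abbreviated.

```python
import logging

# Abbreviated table of known models; see settings.py in the
# OpenRAG repository for the full list.
KNOWN_DIMENSIONS = {
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
}
DEFAULT_DIMENSION = 1536

def embedding_dimension(model: str) -> int:
    # Unknown models fall back to 1536 with a logged warning; search
    # still works, but quality may suffer if the model's real
    # dimension differs from the fallback.
    if model in KNOWN_DIMENSIONS:
        return KNOWN_DIMENSIONS[model]
    logging.warning("Unknown embedding model %r; assuming %d dimensions",
                    model, DEFAULT_DIMENSION)
    return DEFAULT_DIMENSION
```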
The OpenRAG OpenSearch Agent flow appears.
In the Language Model component, under Model, select a different OpenAI model.