---
title: Quickstart
slug: /quickstart
---

import Icon from "@site/src/components/icon/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialWsl from '@site/docs/_partial-wsl-install.mdx';
Use this quickstart to install OpenRAG, and then try some of its core features.

## Prerequisites

This quickstart requires the following:

- An [OpenAI API key](https://platform.openai.com/api-keys).
  This quickstart uses OpenAI for simplicity.
  For other providers, see the complete [installation guide](/install).

- [Python](https://www.python.org/downloads/release/python-3100/) version 3.10 to 3.13.

- Microsoft Windows only: To run OpenRAG on Windows, you must use the Windows Subsystem for Linux (WSL).

  <details>
  <summary>Install WSL for OpenRAG</summary>

  <PartialWsl />

  </details>

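If you aren't sure whether your local Python version falls in the supported range, you can run a quick sanity check. This is a convenience sketch, not part of the installer; the range it checks comes from the prerequisites above:

```python
import sys

def is_supported(major: int, minor: int) -> bool:
    """Return True if this Python version is in OpenRAG's supported range (3.10 to 3.13)."""
    return (3, 10) <= (major, minor) <= (3, 13)

# Check the interpreter that is currently running.
print(is_supported(*sys.version_info[:2]))
```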
## Install OpenRAG

For this quickstart, install OpenRAG with the automatic installer script and basic setup:

1. Create a directory to store the OpenRAG configuration files, and then change to that directory:

   ```bash
   mkdir openrag-workspace
   cd openrag-workspace
   ```

2. [Download the OpenRAG install script](https://docs.openr.ag/files/run_openrag_with_prereqs.sh), move it to your OpenRAG directory, and then run it:

   ```bash
   bash run_openrag_with_prereqs.sh
   ```

   This script installs OpenRAG and its dependencies, including Docker or Podman, and it creates a `.env` file and `docker-compose` files in the current working directory.
   You might be prompted to install certain dependencies if they aren't already present in your environment.
   This process can take a few minutes.
   Once the environment is ready, OpenRAG starts.

3. Click **Basic Setup**.

4. Create passwords for your OpenRAG installation's OpenSearch and Langflow services. You can click **Generate Passwords** to generate passwords automatically.

   The OpenSearch password is required. The Langflow admin password is optional.
   If you don't set a Langflow admin password, Langflow runs in [auto-login mode](https://docs.langflow.org/api-keys-and-authentication#langflow-auto-login) with no password required.

   Your passwords are saved in the `.env` file that is used to start OpenRAG.
   You can find this file in your OpenRAG installation directory.

5. Click **Save Configuration**, and then click **Start All Services**.

   Wait a few minutes while the startup process pulls and runs the necessary container images.
   Proceed when you see the following messages in the terminal user interface (TUI):

   ```bash
   Services started successfully
   Command completed successfully
   ```

6. To open the OpenRAG application, go to the TUI main menu, and then click **Open App**.
   Alternatively, navigate to `localhost:3000` in your browser.

7. Select the **OpenAI** model provider, enter your OpenAI API key, and then click **Complete**.

   For this quickstart, you can use the default options for the model settings.

8. Click through the overview slides for a brief introduction to OpenRAG and basic setup, or click <Icon name="ArrowRight" aria-hidden="true"/> **Skip overview**.
   You can complete this quickstart without going through the overview.

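The passwords you created during basic setup are stored in the `.env` file as plain environment variables. As a rough, hypothetical sketch of that file (the `LANGFLOW_SUPERUSER` and `LANGFLOW_SUPERUSER_PASSWORD` variables are referenced later in this guide; the OpenSearch key name shown here is illustrative, so check your generated file for the exact keys and values):

```bash
# Hypothetical excerpt of the generated .env file.
# OPENSEARCH_PASSWORD is an illustrative name; your generated file may use a different key.
OPENSEARCH_PASSWORD=your-opensearch-password
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=your-langflow-password
```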
## Load and chat with documents {#chat-with-documents}

OpenRAG's knowledge base chat is powered by the [OpenRAG OpenSearch Agent](/agents).
Some documents are included by default to get you started, and you can load your own documents.

1. In OpenRAG, click <Icon name="MessageSquare" aria-hidden="true"/> **Chat**.

2. For this quickstart, ask the agent what documents are available.
   For example: `What documents are available to you?`

   The agent responds with a summary of OpenRAG's default documents.

3. To verify the agent's response, click <Icon name="Library" aria-hidden="true"/> **Knowledge** to view the documents stored in the OpenRAG OpenSearch vector database.
   You can click a document to view the chunks of the document as they are stored in the database.

4. Click **Add Knowledge** to add your own documents to your OpenRAG knowledge base.

   For this quickstart, use either the <Icon name="File" aria-hidden="true"/> **File** or <Icon name="Folder" aria-hidden="true"/> **Folder** upload option to load documents from your local machine.
   **Folder** uploads an entire directory.
   The default directory is the `/documents` subdirectory in your OpenRAG installation directory.

   For information about the cloud storage provider options, see [Ingest files through OAuth connectors](/knowledge#oauth-ingestion).

5. Return to the **Chat** window, and then ask a question related to the documents that you just uploaded.

   If the agent's response doesn't seem to reference your documents correctly, try the following:

   * Click <Icon name="Gear" aria-hidden="true"/> **Function Call: search_documents (tool_call)** to view the log of tool calls made by the agent. This is helpful for troubleshooting because it shows you how the agent used particular tools.

   * Click <Icon name="Library" aria-hidden="true"/> **Knowledge** to confirm that the documents are present in the OpenRAG OpenSearch vector database, and then click each document to see how the document was chunked.
     If a document was chunked improperly, you might need to adjust the ingestion settings or modify and reupload the document.

   * Click <Icon name="Settings2" aria-hidden="true"/> **Settings** to modify the knowledge ingestion settings.

   For more information about knowledge bases and knowledge ingestion, see [OpenSearch in OpenRAG](/knowledge).

## Change the language model and chat settings {#change-components}

1. To change the knowledge ingestion settings, agent behavior, or language model, click <Icon name="Settings2" aria-hidden="true"/> **Settings**.

   The **Settings** page provides quick access to commonly used parameters like the **Language model** and **Agent Instructions**.

2. For greater insight into the underlying [Langflow flow](/agents) that drives the OpenRAG chat, click **Edit in Langflow**, and then click **Proceed** to launch the Langflow visual editor in a new browser window.

   If Langflow requests login information, enter the `LANGFLOW_SUPERUSER` and `LANGFLOW_SUPERUSER_PASSWORD` values from the `.env` file in your OpenRAG installation directory.

   The OpenRAG OpenSearch Agent flow opens in a new browser window.

   

3. For this quickstart, try changing the model.
   Click the **Language Model** component, and then change the **Model Name** to a different OpenAI model.

   After you edit a built-in flow, you can click **Restore flow** on the **Settings** page to revert the flow to its original state from when you first installed OpenRAG.
   This is a destructive action that discards all customizations to the flow.

4. Press <kbd>Command</kbd>+<kbd>S</kbd> (<kbd>Ctrl</kbd>+<kbd>S</kbd>) to save your changes.

   You can close the Langflow browser window, or leave it open if you want to continue experimenting with the flow editor.

5. Switch to your OpenRAG browser window, and then click <Icon name="Plus" aria-hidden="true"/> in the **Conversations** tab to start a new conversation.
   This ensures that the chat doesn't persist any context from the previous conversation with the original model.

6. Ask the same question you asked in [Load and chat with documents](#chat-with-documents) to see how the response differs from the original model's response.

## Integrate OpenRAG into an application

Langflow in OpenRAG includes pre-built flows that you can integrate into your applications using the [Langflow API](https://docs.langflow.org/api-reference-api-examples).
You can use these flows as-is or modify them to better suit your needs, as demonstrated in [Change the language model and chat settings](#change-components).

You can send and receive requests with the Langflow API using Python, TypeScript, or curl.

1. Open the OpenRAG OpenSearch Agent flow in the Langflow visual editor: From the **Chat** window, click <Icon name="Settings2" aria-hidden="true"/> **Settings**, click **Edit in Langflow**, and then click **Proceed**.

2. Create a [Langflow API key](https://docs.langflow.org/api-keys-and-authentication), which is a user-specific token required to send requests to the Langflow server.
   This key doesn't grant access to OpenRAG.

   1. In the Langflow visual editor, click your user icon in the header, and then select **Settings**.
   2. Click **Langflow API Keys**, and then click <Icon name="Plus" aria-hidden="true"/> **Add New**.
   3. Name your key, and then click **Create API Key**.
   4. Copy the API key and store it securely.
   5. Exit the Langflow **Settings** page to return to the visual editor.

3. Click **Share**, and then select **API access** to get pregenerated code snippets that call the Langflow API and run the flow.

   These code snippets construct API requests with your Langflow server URL (`LANGFLOW_SERVER_ADDRESS`), the flow to run (`FLOW_ID`), required headers (`LANGFLOW_API_KEY`, `Content-Type`), and a payload containing the required inputs to run the flow, including a default chat input message.

   In production, you would modify the inputs to suit your application logic. For example, you could replace the default chat input message with dynamic user input.

   <Tabs>
   <TabItem value="python" label="Python">

   ```python
   import uuid

   import requests

   api_key = "LANGFLOW_API_KEY"
   url = "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID"  # The complete API endpoint URL for this flow

   # Request payload configuration
   payload = {
       "output_type": "chat",
       "input_type": "chat",
       "input_value": "hello world!",
       "session_id": str(uuid.uuid4()),
   }

   headers = {"x-api-key": api_key}

   try:
       # Send API request
       response = requests.post(url, json=payload, headers=headers)
       response.raise_for_status()  # Raise an exception for bad status codes

       # Print response
       print(response.text)
   except requests.exceptions.RequestException as e:
       print(f"Error making API request: {e}")
   ```

   </TabItem>
   <TabItem value="typescript" label="TypeScript">

   ```typescript
   const crypto = require('crypto');

   const apiKey = 'LANGFLOW_API_KEY';
   const payload = {
     output_type: 'chat',
     input_type: 'chat',
     input_value: 'hello world!',
     session_id: crypto.randomUUID(),
   };

   const options = {
     method: 'POST',
     headers: {
       'Content-Type': 'application/json',
       'x-api-key': apiKey,
     },
     body: JSON.stringify(payload),
   };

   fetch('http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID', options)
     .then(response => response.json())
     .then(data => console.log(data))
     .catch(err => console.error(err));
   ```

   </TabItem>
   <TabItem value="curl" label="curl">

   ```bash
   curl --request POST \
     --url 'http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID?stream=false' \
     --header 'Content-Type: application/json' \
     --header 'x-api-key: LANGFLOW_API_KEY' \
     --data '{
       "output_type": "chat",
       "input_type": "chat",
       "input_value": "hello world!"
     }'
   ```

   </TabItem>
   </Tabs>

4. Copy your preferred snippet, and then run it:

   * **Python**: Paste the snippet into a `.py` file, save it, and then run it with `python filename.py`.
   * **TypeScript**: Paste the snippet into a `.ts` file, save it, and then run it with `ts-node filename.ts`.
   * **curl**: Paste and run the snippet directly in your terminal.

   If the request is successful, the response includes many details about the flow run, including the session ID, inputs, outputs, components, durations, and more.

   In production, you won't pass the raw response to the user in its entirety.
   Instead, you extract and reformat the relevant fields for different use cases, as demonstrated in the [Langflow quickstart](https://docs.langflow.org/quickstart#extract-data-from-the-response).
   For example, you could pass the chat output text to a user-facing front-end application, and store specific fields in logs and backend data stores for monitoring, chat history, or analytics.
   You could also pass the output from one flow as input to another flow.

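As a sketch of that extraction step: the exact response shape depends on your Langflow version, but the chat text is typically nested several levels deep. The `data` literal below is a trimmed, hypothetical response; verify the path against a real response from your own server:

```python
import json

# A trimmed, hypothetical Langflow run response. Real responses contain
# many more fields (components, durations, and so on).
data = json.loads("""
{
  "session_id": "3f2b...",
  "outputs": [
    {
      "outputs": [
        {
          "results": {
            "message": {
              "text": "Here is what I found in your documents..."
            }
          }
        }
      ]
    }
  ]
}
""")

# Walk the nested structure to pull out just the chat text.
chat_text = data["outputs"][0]["outputs"][0]["results"]["message"]["text"]
print(chat_text)
```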
## Next steps

* **Reinstall OpenRAG with your preferred settings**: This quickstart used a minimal setup to demonstrate OpenRAG's core functionality.
  Consider reinstalling OpenRAG with your preferred configuration because some settings are immutable after initial setup.
  For all installation options, see [Install OpenRAG with TUI](/install) and [Install OpenRAG with containers](/docker).

* **Learn more about OpenRAG**: Explore OpenRAG and the OpenRAG documentation to learn more about its features and functionality.

* **Learn more about Langflow**: For a deep dive into the Langflow API and visual editor, see the [Langflow documentation](https://docs.langflow.org/).