Merge branch 'shared-process-document-common-refactor' of github.com:phact/gendb into shared-process-document-common-refactor

This commit is contained in:
phact 2025-09-29 14:54:37 -04:00
commit c6e6ecfcdc
3 changed files with 345 additions and 337 deletions


@@ -38,13 +38,9 @@ The file is loaded into your OpenSearch database, and appears in the Knowledge page.

To load and process a directory from the mapped location, click <Icon name="Plus" aria-hidden="true"/> **Add Knowledge**, and then click **Process Folder**.
The files are loaded into your OpenSearch database, and appear in the Knowledge page.

### Ingest files through OAuth connectors {#oauth-ingestion}

OpenRAG supports Google Drive, OneDrive, and AWS S3 as OAuth connectors for seamless document synchronization.

OAuth integration allows individual users to connect their personal cloud storage accounts to OpenRAG. Each user must separately authorize OpenRAG to access their own cloud storage files. When a user connects a cloud service, they are redirected to authenticate with that service provider and grant OpenRAG permission to sync documents from their personal cloud storage.


@@ -79,8 +79,46 @@ For more information on virtual environments, see [uv](https://docs.astral.sh/uv

Command completed successfully
```

7. To open the OpenRAG application, click **Open App**, press <kbd>6</kbd>, or navigate to `http://localhost:3000`.
The application opens.
8. Select your language model and embedding model provider, and complete the required fields.
**Your provider can only be selected once, and you must use the same provider for your language model and embedding model.**
The language model can be changed later, but the embedding model cannot.
To change your provider selection, you must delete the `config.yml` file and restart OpenRAG.
<Tabs groupId="Embedding provider">
<TabItem value="OpenAI" label="OpenAI" default>
9. If you already entered a value for `OPENAI_API_KEY` in the TUI in Step 5, enable **Get API key from environment variable**.
10. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
11. To load 2 sample PDFs, enable **Sample dataset**.
This is recommended, but not required.
12. Click **Complete**.
</TabItem>
<TabItem value="IBM watsonx.ai" label="IBM watsonx.ai">
9. Complete the fields for **watsonx.ai API Endpoint**, **IBM API key**, and **IBM Project ID**.
These values are found in your IBM watsonx deployment.
10. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
11. To load 2 sample PDFs, enable **Sample dataset**.
This is recommended, but not required.
12. Click **Complete**.
</TabItem>
<TabItem value="Ollama" label="Ollama">
9. Enter your Ollama server's base URL address.
The default Ollama server address is `http://localhost:11434`.
Since OpenRAG is running in a container, you may need to change `localhost` to access services outside of the container. For example, change `http://localhost:11434` to `http://host.docker.internal:11434` to connect to Ollama.
OpenRAG automatically sends a test connection to your Ollama server to confirm connectivity.
10. Select the **Embedding Model** and **Language Model** your Ollama server is running.
OpenRAG automatically lists the available models from your Ollama server.
11. To load 2 sample PDFs, enable **Sample dataset**.
This is recommended, but not required.
12. Click **Complete**.
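If the automatic connection test fails, you can check whether the Ollama server is reachable from inside the OpenRAG backend container. This is a minimal troubleshooting sketch, not part of the setup flow: the container name `openrag-backend` and the presence of `curl` inside it are assumptions, so adjust both for your installation.

```bash
# List the models the Ollama server exposes, as seen from inside the container.
# /api/tags is Ollama's model-listing endpoint.
docker exec openrag-backend curl http://host.docker.internal:11434/api/tags
```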
</TabItem>
</Tabs>
13. Continue with the [Quickstart](/quickstart).
### Advanced Setup {#advanced-setup}


@@ -11,7 +11,41 @@ Get started with OpenRAG by loading your knowledge, swapping out your language m

## Prerequisites

- [Install and start OpenRAG](/install)
- Create a [Langflow API key](https://docs.langflow.org/api-keys-and-authentication)
<details>
<summary>Create a Langflow API key</summary>
A Langflow API key is a user-specific token you can use with Langflow.
It is **only** used for sending requests to the Langflow server.
It does **not** grant access to OpenRAG.
To create a Langflow API key, do the following:
1. In Langflow, click your user icon, and then select **Settings**.
2. Click **Langflow API Keys**, and then click <Icon name="Plus" aria-hidden="true"/> **Add New**.
3. Name your key, and then click **Create API Key**.
4. Copy the API key and store it securely.
5. To use your Langflow API key in a request, set a `LANGFLOW_API_KEY` environment variable in your terminal, and then include an `x-api-key` header or query parameter with your request.
For example:
```bash
# Set variable
export LANGFLOW_API_KEY="sk..."
# Send request
curl --request POST \
--url "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID" \
--header "Content-Type: application/json" \
--header "x-api-key: $LANGFLOW_API_KEY" \
--data '{
"output_type": "chat",
"input_type": "chat",
"input_value": "Hello"
}'
```
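As noted above, the key can also be passed as a query parameter instead of a header. A minimal sketch of the same request in that form, using the same placeholder server address and flow ID:

```bash
# Same request, with the API key sent as the x-api-key query parameter.
curl --request POST \
  --url "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID?x-api-key=$LANGFLOW_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
  "output_type": "chat",
  "input_type": "chat",
  "input_value": "Hello"
}'
```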
</details>
## Find your way around

@@ -20,14 +54,18 @@ Get started with OpenRAG by loading your knowledge, swapping out your language m

For more information, see [Langflow Agents](/agents).
2. Ask `What documents are available to you?`
The agent responds with a message summarizing the documents that OpenRAG loads by default, which are PDFs about evaluating data quality when using LLMs in health care.
Knowledge is stored in OpenSearch.
For more information, see [Knowledge](/knowledge).
3. To confirm the agent is correct, click <Icon name="Library" aria-hidden="true"/> **Knowledge**.
The **Knowledge** page lists the documents OpenRAG has ingested into the OpenSearch vector database.
Click a document to display the chunks created when the default documents were split and ingested into the vector database.
## Add your own knowledge

1. To add documents to your knowledge base, click <Icon name="Plus" aria-hidden="true"/> **Add Knowledge**.
* Select **Add File** to add a single file from your local machine (mapped with the Docker volume mount).
* Select **Process Folder** to process an entire folder of documents from your local machine (mapped with the Docker volume mount).
* Select your cloud storage provider to add knowledge from an OAuth-connected storage provider. For more information, see [OAuth ingestion](/knowledge#oauth-ingestion).
2. Return to the Chat window and ask a question about your loaded data.
For example, with a manual about a PC tablet loaded, ask `How do I connect this device to WiFi?`
The agent responds with a message indicating it now has your knowledge as context for answering questions.
@@ -40,353 +78,289 @@ If you aren't getting the results you need, you can further tune the knowledge i

To modify the knowledge ingestion or Agent behavior, click <Icon name="Settings2" aria-hidden="true"/> **Settings**.
In this example, you'll try a different LLM to demonstrate how the Agent's response changes.
You can only change the **Language model**, and not the **Model provider** that you started with in OpenRAG.
If you're using Ollama, you can use any installed model.
1. To edit the Agent's behavior, click **Edit in Langflow**.
You can access the **Language Model** and **Agent Instructions** fields more quickly on this page, but for illustration purposes, this example uses the Langflow visual builder.
2. OpenRAG warns you that you're entering Langflow. Click **Proceed**.
3. The OpenRAG OpenSearch Agent flow appears.
![OpenRAG OpenSearch Agent Flow](/img/opensearch-agent-flow.png)
4. In the **Language Model** component, under **Model**, select a different OpenAI model.
5. Save your flow with <kbd>Command+S</kbd>.
6. In OpenRAG, start a new conversation by clicking the <Icon name="Plus" aria-hidden="true"/> in the **Conversations** tab.
7. Ask the same question as before to demonstrate how a different language model changes the results.

## Integrate OpenRAG into your application
To integrate OpenRAG into your application, use the [Langflow API](https://docs.langflow.org/api-reference-api-examples).
Make requests with Python, TypeScript, or any HTTP client to run one of OpenRAG's default flows and get a response, and then modify the flow further to improve results.
Langflow provides code snippets to help you get started with the Langflow API.

1. To open the OpenRAG OpenSearch Agent flow, click <Icon name="Settings2" aria-hidden="true"/> **Settings**, and then click **Edit in Langflow** on the OpenRAG OpenSearch Agent flow.
2. Click **Share**, and then click **API access**.
The default code in the API access pane constructs a request with the Langflow server `url`, `headers`, and a `payload` of request data. The code snippets automatically include the `LANGFLOW_SERVER_ADDRESS` and `FLOW_ID` values for the flow. Replace these values if you're using the code for a different server or flow. The default Langflow server address is `http://localhost:7860`.
<Tabs>
<TabItem value="python" label="Python">

```python
import requests
import os
import uuid

api_key = 'LANGFLOW_API_KEY'
url = "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID"  # The complete API endpoint URL for this flow

# Request payload configuration
payload = {
    "output_type": "chat",
    "input_type": "chat",
    "input_value": "hello world!"
}
payload["session_id"] = str(uuid.uuid4())

headers = {"x-api-key": api_key}

try:
    # Send API request
    response = requests.request("POST", url, json=payload, headers=headers)
    response.raise_for_status()  # Raise exception for bad status codes

    # Print response
    print(response.text)
except requests.exceptions.RequestException as e:
    print(f"Error making API request: {e}")
except ValueError as e:
    print(f"Error parsing response: {e}")
```

</TabItem>
<TabItem value="typescript" label="TypeScript">

```typescript
const crypto = require('crypto');

const apiKey = 'LANGFLOW_API_KEY';
const payload = {
    "output_type": "chat",
    "input_type": "chat",
    "input_value": "hello world!"
};
payload.session_id = crypto.randomUUID();

const options = {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        "x-api-key": apiKey
    },
    body: JSON.stringify(payload)
};

fetch('http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID', options)
    .then(response => response.json())
    .then(response => console.warn(response))
    .catch(err => console.error(err));
```

</TabItem>
<TabItem value="curl" label="curl">

```bash
curl --request POST \
  --url 'http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID?stream=false' \
  --header 'Content-Type: application/json' \
  --header "x-api-key: LANGFLOW_API_KEY" \
  --data '{
    "output_type": "chat",
    "input_type": "chat",
    "input_value": "hello world!"
  }'
```

</TabItem>
</Tabs>
3. Copy the snippet, paste it in a script file, and then run the script to send the request. If you are using the curl snippet, you can run the command directly in your terminal.
If the request is successful, the response includes many details about the flow run, including the session ID, inputs, outputs, components, durations, and more.
The following is an example of a response from running the **Simple Agent** template flow:
<details>
<summary>Result</summary>
```json
{
  "session_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
  "outputs": [
    {
      "inputs": {
        "input_value": "hello world!"
      },
      "outputs": [
        {
          "results": {
            "message": {
              "text_key": "text",
              "data": {
                "timestamp": "2025-06-16 19:58:23 UTC",
                "sender": "Machine",
                "sender_name": "AI",
                "session_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
                "text": "Hello world! 🌍 How can I assist you today?",
                "files": [],
                "error": false,
                "edit": false,
                "properties": {
                  "text_color": "",
                  "background_color": "",
                  "edited": false,
                  "source": {
                    "id": "Agent-ZOknz",
                    "display_name": "Agent",
                    "source": "gpt-4o-mini"
                  },
                  "icon": "bot",
                  "allow_markdown": false,
                  "positive_feedback": null,
                  "state": "complete",
                  "targets": []
                },
                "category": "message",
                "content_blocks": [
                  {
                    "title": "Agent Steps",
                    "contents": [
                      {
                        "type": "text",
                        "duration": 2,
                        "header": {
                          "title": "Input",
                          "icon": "MessageSquare"
                        },
                        "text": "**Input**: hello world!"
                      },
                      {
                        "type": "text",
                        "duration": 226,
                        "header": {
                          "title": "Output",
                          "icon": "MessageSquare"
                        },
                        "text": "Hello world! 🌍 How can I assist you today?"
                      }
                    ],
                    "allow_markdown": true,
                    "media_url": null
                  }
                ],
                "id": "f3d85d9a-261c-4325-b004-95a1bf5de7ca",
                "flow_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
                "duration": null
              },
              "default_value": "",
              "text": "Hello world! 🌍 How can I assist you today?",
              "sender": "Machine",
              "sender_name": "AI",
              "files": [],
              "session_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
              "timestamp": "2025-06-16T19:58:23+00:00",
              "flow_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
              "error": false,
              "edit": false,
              "properties": {
                "text_color": "",
                "background_color": "",
                "edited": false,
                "source": {
                  "id": "Agent-ZOknz",
                  "display_name": "Agent",
                  "source": "gpt-4o-mini"
                },
                "icon": "bot",
                "allow_markdown": false,
                "positive_feedback": null,
                "state": "complete",
                "targets": []
              },
              "category": "message",
              "content_blocks": [
                {
                  "title": "Agent Steps",
                  "contents": [
                    {
                      "type": "text",
                      "duration": 2,
                      "header": {
                        "title": "Input",
                        "icon": "MessageSquare"
                      },
                      "text": "**Input**: hello world!"
                    },
                    {
                      "type": "text",
                      "duration": 226,
                      "header": {
                        "title": "Output",
                        "icon": "MessageSquare"
                      },
                      "text": "Hello world! 🌍 How can I assist you today?"
                    }
                  ],
                  "allow_markdown": true,
                  "media_url": null
                }
              ],
              "duration": null
            }
          },
          "artifacts": {
            "message": "Hello world! 🌍 How can I assist you today?",
            "sender": "Machine",
            "sender_name": "AI",
            "files": [],
            "type": "object"
          },
          "outputs": {
            "message": {
              "message": "Hello world! 🌍 How can I assist you today?",
              "type": "text"
            }
          },
          "logs": {
            "message": []
          },
          "messages": [
            {
              "message": "Hello world! 🌍 How can I assist you today?",
              "sender": "Machine",
              "sender_name": "AI",
              "session_id": "29deb764-af3f-4d7d-94a0-47491ed241d6",
              "stream_url": null,
              "component_id": "ChatOutput-aF5lw",
              "files": [],
              "type": "text"
            }
          ],
          "timedelta": null,
          "duration": null,
          "component_display_name": "Chat Output",
          "component_id": "ChatOutput-aF5lw",
          "used_frozen_result": false
        }
      ]
    }
  ]
}
```

</details>
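The message text is nested fairly deep in that structure. The following is a minimal sketch of extracting it in Python, following the path shown in the example result above; other flows can nest their outputs differently, so treat the path as an assumption to verify against your own response.

```python
import uuid

import requests

# Placeholders, as in the snippets above.
api_key = "LANGFLOW_API_KEY"
url = "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID"

payload = {
    "output_type": "chat",
    "input_type": "chat",
    "input_value": "hello world!",
    "session_id": str(uuid.uuid4()),
}

response = requests.post(url, json=payload, headers={"x-api-key": api_key})
response.raise_for_status()
data = response.json()

# Follow the nesting shown in the example result above:
# outputs[0] -> outputs[0] -> results -> message -> data -> text
message = data["outputs"][0]["outputs"][0]["results"]["message"]["data"]["text"]
print(message)
```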
## Next steps
To further explore the API, see:
* The Langflow [Quickstart](https://docs.langflow.org/quickstart#extract-data-from-the-response) extends this example by extracting fields from the response.
* [Get started with the Langflow API](https://docs.langflow.org/api-reference-api-examples)