remove connection dot indicators on settings page, better toast message for provider setup dialogs, fix typo in default agent prompt

Cole Goldsmith 2025-11-19 13:16:17 -06:00
parent 60598b65ca
commit cf8eb9ccce
9 changed files with 201 additions and 192 deletions


@@ -2817,7 +2817,7 @@
"trace_as_input": true,
"trace_as_metadata": true,
"type": "str",
"value": "You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. 
The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.langflow.org/)\n**OpenSearch** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://opensearch.org/)\n**Docling** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought."
"value": "You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. 
The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers. [Read more](https://www.langflow.org/)\n**OpenSearch** OpenSearch is an open source, search and observability suite that brings order to unstructured data at scale. [Read more](https://opensearch.org/)\n**Docling** Docling simplifies document processing with advanced PDF understanding, OCR support, and seamless AI integrations. Parse PDFs, DOCX, PPTX, images & more. [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. 
Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought."
},
"temperature": {
"_input_type": "SliderInput",


@@ -9,150 +9,162 @@ import type { ProviderHealthResponse } from "@/app/api/queries/useProviderHealth
import AnthropicLogo from "@/components/icons/anthropic-logo";
import { Button } from "@/components/ui/button";
import {
Dialog,
DialogContent,
DialogFooter,
DialogHeader,
DialogTitle,
} from "@/components/ui/dialog";
import {
AnthropicSettingsForm,
type AnthropicSettingsFormData,
} from "./anthropic-settings-form";
import { useRouter } from "next/navigation";
const AnthropicSettingsDialog = ({
open,
setOpen,
}: {
open: boolean;
setOpen: (open: boolean) => void;
}) => {
const queryClient = useQueryClient();
const [isValidating, setIsValidating] = useState(false);
const [validationError, setValidationError] = useState<Error | null>(null);
const router = useRouter();
const methods = useForm<AnthropicSettingsFormData>({
mode: "onSubmit",
defaultValues: {
apiKey: "",
},
});
const { handleSubmit, watch } = methods;
const apiKey = watch("apiKey");
const { refetch: validateCredentials } = useGetAnthropicModelsQuery(
{
apiKey: apiKey,
},
{
enabled: false,
},
);
const settingsMutation = useUpdateSettingsMutation({
onSuccess: () => {
// Update provider health cache to healthy since backend validated the setup
const healthData: ProviderHealthResponse = {
status: "healthy",
message: "Provider is configured and working correctly",
provider: "anthropic",
};
queryClient.setQueryData(["provider", "health"], healthData);
toast.success(
"Anthropic credentials saved. Configure models in the Settings page.",
);
setOpen(false);
},
});
toast.message("Anthropic successfully configured", {
description:
"You can now access the provided language models.",
duration: Infinity,
closeButton: true,
icon: <AnthropicLogo className="w-4 h-4 text-[#D97757]" />,
action: {
label: "Settings",
onClick: () => {
router.push("/settings");
},
},
});
setOpen(false);
},
});
const onSubmit = async (data: AnthropicSettingsFormData) => {
// Clear any previous validation errors
setValidationError(null);
// Only validate if a new API key was entered
if (data.apiKey) {
setIsValidating(true);
const result = await validateCredentials();
setIsValidating(false);
if (result.isError) {
setValidationError(result.error);
return;
}
}
const payload: {
anthropic_api_key?: string;
} = {};
// Only include api_key if a value was entered
if (data.apiKey) {
payload.anthropic_api_key = data.apiKey;
}
// Submit the update
settingsMutation.mutate(payload);
};
return (
<Dialog open={open} onOpenChange={setOpen}>
<DialogContent className="max-w-2xl">
<FormProvider {...methods}>
<form onSubmit={handleSubmit(onSubmit)} className="grid gap-4">
<DialogHeader className="mb-2">
<DialogTitle className="flex items-center gap-3">
<div className="w-8 h-8 rounded flex items-center justify-center bg-white border">
<AnthropicLogo className="text-black" />
</div>
Anthropic Setup
</DialogTitle>
</DialogHeader>
<AnthropicSettingsForm
modelsError={validationError}
isLoadingModels={isValidating}
/>
<AnimatePresence mode="wait">
{settingsMutation.isError && (
<motion.div
key="error"
initial={{ opacity: 0, y: 10 }}
animate={{ opacity: 1, y: 0 }}
exit={{ opacity: 0, y: -10 }}
>
<p className="rounded-lg border border-destructive p-4">
{settingsMutation.error?.message}
</p>
</motion.div>
)}
</AnimatePresence>
<DialogFooter className="mt-4">
<Button
variant="outline"
type="button"
onClick={() => setOpen(false)}
>
Cancel
</Button>
<Button
type="submit"
disabled={settingsMutation.isPending || isValidating}
>
{settingsMutation.isPending
? "Saving..."
: isValidating
? "Validating..."
: "Save"}
</Button>
</DialogFooter>
</form>
</FormProvider>
</DialogContent>
</Dialog>
);
};
export default AnthropicSettingsDialog;
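All four provider dialogs in this commit build the same persistent `toast.message` options: a description, `duration: Infinity`, a close button, and a "Settings" action that routes to `/settings`. As a minimal sketch of that shared shape, a hypothetical helper could produce the options object; `providerToastOptions` is an assumption for illustration, not code from this commit:

```typescript
// Hypothetical helper (not part of this commit): builds the options object
// each provider setup dialog passes to sonner's toast.message.
type ToastAction = { label: string; onClick: () => void };

interface ProviderToastOptions {
  description: string;
  duration: number;
  closeButton: boolean;
  action: ToastAction;
}

function providerToastOptions(
  hasEmbeddings: boolean,
  goToSettings: () => void,
): ProviderToastOptions {
  // Anthropic only exposes language models in these dialogs; the other
  // providers also expose embedding models, so the description varies.
  const models = hasEmbeddings
    ? "language and embedding models"
    : "language models";
  return {
    description: `You can now access the provided ${models}.`,
    duration: Infinity, // keep the toast visible until the user dismisses it
    closeButton: true,
    action: { label: "Settings", onClick: goToSettings },
  };
}
```

With such a helper, the Anthropic dialog's call would reduce to `toast.message("Anthropic successfully configured", { icon: <AnthropicLogo ... />, ...providerToastOptions(false, () => router.push("/settings")) })`, and the other three dialogs would pass `true`.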


@@ -96,20 +96,10 @@ export const ModelProviders = () => {
const currentEmbeddingProvider =
(settings.knowledge?.embedding_provider as ModelProvider) || "openai";
// Get all provider keys with active providers first
const activeProviders = new Set([
currentLlmProvider,
currentEmbeddingProvider,
]);
const sortedProviderKeys = [
...Array.from(activeProviders),
...allProviderKeys.filter((key) => !activeProviders.has(key)),
];
return (
<>
<div className="grid gap-6 xs:grid-cols-1 md:grid-cols-2 lg:grid-cols-4">
{sortedProviderKeys.map((providerKey) => {
{allProviderKeys.map((providerKey) => {
const {
name,
logo: Logo,
@@ -118,7 +108,6 @@ export const ModelProviders = () => {
} = modelProvidersMap[providerKey];
const isLlmProvider = providerKey === currentLlmProvider;
const isEmbeddingProvider = providerKey === currentEmbeddingProvider;
const isCurrentProvider = isLlmProvider || isEmbeddingProvider;
// Check if this specific provider is unhealthy
const hasLlmError = isLlmProvider && health?.llm_error;
@@ -161,16 +150,8 @@
</div>
<CardTitle className="flex flex-row items-center gap-2">
{name}
{isCurrentProvider && (
<span
className={cn(
"h-2 w-2 rounded-full",
isProviderUnhealthy
? "bg-destructive"
: "bg-accent-emerald-foreground",
)}
aria-label={isProviderUnhealthy ? "Error" : "Active"}
/>
{isProviderUnhealthy && (
<span className="h-2 w-2 rounded-full bg-destructive" />
)}
</CardTitle>
</div>
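The hunk above changes the provider card's status dot from "always shown for the active provider (green healthy / red unhealthy)" to "shown only when the provider is unhealthy". A minimal sketch of the new rule, using a hypothetical `providerDotClass` helper (the class string is taken from the diff; the helper itself is not in the codebase):

```typescript
// Sketch of the new indicator rule: render a destructive (red) dot only when
// the provider is unhealthy; healthy providers get no dot at all.
function providerDotClass(isProviderUnhealthy: boolean): string | null {
  return isProviderUnhealthy ? "h-2 w-2 rounded-full bg-destructive" : null;
}
```

This matches the commit subject: the green "Active" connection dot is removed entirely, so only the error state remains visible.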


@@ -21,6 +21,7 @@ import {
OllamaSettingsForm,
type OllamaSettingsFormData,
} from "./ollama-settings-form";
import { useRouter } from "next/navigation";
const OllamaSettingsDialog = ({
open,
@@ -33,7 +34,8 @@ const OllamaSettingsDialog = ({
const queryClient = useQueryClient();
const [isValidating, setIsValidating] = useState(false);
const [validationError, setValidationError] = useState<Error | null>(null);
const router = useRouter();
const { data: settings = {} } = useGetSettingsQuery({
enabled: isAuthenticated || isNoAuthMode,
});
@@ -71,9 +73,19 @@ const OllamaSettingsDialog = ({
};
queryClient.setQueryData(["provider", "health"], healthData);
toast.success(
"Ollama endpoint saved. Configure models in the Settings page.",
);
toast.message("Ollama successfully configured", {
description:
"You can now access the provided language and embedding models.",
duration: Infinity,
closeButton: true,
icon: <OllamaLogo className="w-4 h-4" />,
action: {
label: "Settings",
onClick: () => {
router.push("/settings");
},
},
});
setOpen(false);
},
});


@@ -19,6 +19,7 @@ import {
OpenAISettingsForm,
type OpenAISettingsFormData,
} from "./openai-settings-form";
import { useRouter } from "next/navigation";
const OpenAISettingsDialog = ({
open,
@@ -30,7 +31,8 @@ const OpenAISettingsDialog = ({
const queryClient = useQueryClient();
const [isValidating, setIsValidating] = useState(false);
const [validationError, setValidationError] = useState<Error | null>(null);
const router = useRouter();
const methods = useForm<OpenAISettingsFormData>({
mode: "onSubmit",
defaultValues: {
@@ -60,9 +62,19 @@ const OpenAISettingsDialog = ({
};
queryClient.setQueryData(["provider", "health"], healthData);
toast.success(
"OpenAI credentials saved. Configure models in the Settings page.",
);
toast.message("OpenAI successfully configured", {
description:
"You can now access the provided language and embedding models.",
duration: Infinity,
closeButton: true,
icon: <OpenAILogo className="w-4 h-4" />,
action: {
label: "Settings",
onClick: () => {
router.push("/settings");
},
},
});
setOpen(false);
},
});


@@ -19,6 +19,7 @@ import {
WatsonxSettingsForm,
type WatsonxSettingsFormData,
} from "./watsonx-settings-form";
import { useRouter } from "next/navigation";
const WatsonxSettingsDialog = ({
open,
@@ -30,7 +31,8 @@ const WatsonxSettingsDialog = ({
const queryClient = useQueryClient();
const [isValidating, setIsValidating] = useState(false);
const [validationError, setValidationError] = useState<Error | null>(null);
const router = useRouter();
const methods = useForm<WatsonxSettingsFormData>({
mode: "onSubmit",
defaultValues: {
@@ -65,9 +67,20 @@ const WatsonxSettingsDialog = ({
provider: "watsonx",
};
queryClient.setQueryData(["provider", "health"], healthData);
toast.success(
"watsonx credentials saved. Configure models in the Settings page.",
);
toast.message("IBM watsonx.ai successfully configured", {
description:
"You can now access the provided language and embedding models.",
duration: Infinity,
closeButton: true,
icon: <IBMLogo className="w-4 h-4 text-[#1063FE]" />,
action: {
label: "Settings",
onClick: () => {
router.push("/settings");
},
},
});
setOpen(false);
},
});


@@ -42,6 +42,7 @@ import { useUpdateSettingsMutation } from "../api/mutations/useUpdateSettingsMut
import { ModelSelector } from "../onboarding/_components/model-selector";
import ModelProviders from "./_components/model-providers";
import { getModelLogo, type ModelProvider } from "./_helpers/model-helpers";
import { cn } from "@/lib/utils";
const { MAX_SYSTEM_PROMPT_CHARS } = UI_CONSTANTS;
@@ -389,7 +390,7 @@ function KnowledgeSourcesPage() {
// Initialize connectors list with metadata from backend
const initialConnectors = connectorTypes
.filter((type) => connectorsResult.connectors[type].available) // Only show available connectors
// .filter((type) => connectorsResult.connectors[type].available) // Only show available connectors
.map((type) => ({
id: type,
name: connectorsResult.connectors[type].name,
@@ -537,25 +538,6 @@ function KnowledgeSourcesPage() {
// }
// };
const getStatusBadge = (status: Connector["status"]) => {
switch (status) {
case "connected":
return (
<div className="h-2 w-2 bg-accent-emerald-foreground rounded-full" />
);
case "connecting":
return (
<div className="h-2 w-2 bg-accent-amber-foreground rounded-full" />
);
case "error":
return (
<div className="h-2 w-2 bg-accent-red-foreground rounded-full" />
);
default:
return <div className="h-2 w-2 bg-muted rounded-full" />;
}
};
const navigateToKnowledgePage = (connector: Connector) => {
const provider = connector.type.replace(/-/g, "_");
router.push(`/upload/${provider}`);
@@ -693,7 +675,7 @@ function KnowledgeSourcesPage() {
{/* Conditional Sync Settings or No-Auth Message */}
{
isNoAuthMode ? (
<Card className="border-yellow-500">
<Card className="border-accent-amber-foreground">
<CardHeader>
<CardTitle className="text-lg">
Cloud connectors require authentication
@@ -798,26 +780,21 @@
{connectors.map((connector) => {
return (
<Card key={connector.id} className="relative flex flex-col">
<CardHeader>
<CardHeader className="pb-2">
<div className="flex flex-col items-start justify-between">
<div className="flex flex-col gap-3">
<div className="mb-1">
<div
className={`w-8 h-8 ${
connector ? "bg-white" : "bg-muted grayscale"
} rounded flex items-center justify-center border`}
className={cn("w-8 h-8 rounded flex items-center justify-center border", connector?.available ? "bg-white" : "bg-muted grayscale")}
>
{connector.icon}
</div>
</div>
<CardTitle className="flex flex-row items-center gap-2">
{connector.name}
{connector && getStatusBadge(connector.status)}
</CardTitle>
<CardDescription className="text-[13px]">
{connector?.description
? `${connector.name} is configured.`
: connector.description}
<CardDescription className="text-sm">
{connector?.available ? `${connector.name} is configured.` : "Not configured."}
</CardDescription>
</div>
</div>
@@ -876,12 +853,14 @@ function KnowledgeSourcesPage() {
)}
</div>
) : (
<div className="text-[13px] text-muted-foreground">
<div className="text-sm text-muted-foreground">
<p>
See our{" "}
<Link
className="text-accent-pink-foreground"
href="https://github.com/langflow-ai/openrag/pull/96/files#diff-06889aa94ccf8dac64e70c8cc30a2ceed32cc3c0c2c14a6ff0336fe882a9c2ccR41"
href="https://docs.openr.ag/knowledge#oauth-ingestion"
target="_blank"
rel="noopener noreferrer"
>
Cloud Connectors installation guide
</Link>{" "}


@@ -4,7 +4,7 @@
export const DEFAULT_AGENT_SETTINGS = {
llm_model: "gpt-4o-mini",
system_prompt:
'You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. 
The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- "Read this link"\n- "Summarize this webpage"\n- "What does this site say?"\n- "Ingest this URL"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: "What is OpenRAG", answer the following:\n"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.langflow.org/)\n**OpenSearch** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://opensearch.org/)\n**Docling** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.docling.ai/)"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: "No relevant supporting sources were found for that request."\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought.',
'You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. 
The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers. [Read more](https://www.langflow.org/)\n**OpenSearch** OpenSearch is an open source, search and observability suite that brings order to unstructured data at scale. [Read more](https://opensearch.org/)\n**Docling** Docling simplifies document processing with advanced PDF understanding, OCR support, and seamless AI integrations. Parse PDFs, DOCX, PPTX, images & more. [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. 
Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought.',
} as const;
/**


@@ -34,7 +34,7 @@ def get_conversation_thread(user_id: str, previous_response_id: str = None):
"messages": [
{
"role": "system",
"content": "You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. 
The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.langflow.org/)\n**OpenSearch** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://opensearch.org/)\n**Docling** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought.",
"content": "You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns.\n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers. [Read more](https://www.langflow.org/)\n**OpenSearch** OpenSearch is an open source, search and observability suite that brings order to unstructured data at scale. [Read more](https://opensearch.org/)\n**Docling** Docling simplifies document processing with advanced PDF understanding, OCR support, and seamless AI integrations. Parse PDFs, DOCX, PPTX, images & more. [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident.\n7. Do not reveal internal chain-of-thought.",
}
],
"previous_response_id": previous_response_id, # Parent response_id for branching