Compare commits


5 commits

10 changed files with 1835 additions and 1781 deletions


@@ -2817,7 +2817,7 @@
         "trace_as_input": true,
         "trace_as_metadata": true,
         "type": "str",
-        "value": "You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.langflow.org/)\n**OpenSearch** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://opensearch.org/)\n**Docling** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought."
+        "value": "You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers. [Read more](https://www.langflow.org/)\n**OpenSearch** OpenSearch is an open source, search and observability suite that brings order to unstructured data at scale. [Read more](https://opensearch.org/)\n**Docling** Docling simplifies document processing with advanced PDF understanding, OCR support, and seamless AI integrations. Parse PDFs, DOCX, PPTX, images & more. [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought."
       },
       "temperature": {
         "_input_type": "SliderInput",


@@ -41,6 +41,7 @@ export function ModelSelector({
   noOptionsPlaceholder = "No models available",
   custom = false,
   hasError = false,
+  defaultOpen = false,
 }: {
   options?: ModelOption[];
   groupedOptions?: GroupedModelOption[];
@@ -52,8 +53,9 @@ export function ModelSelector({
   custom?: boolean;
   onValueChange: (value: string, provider?: string) => void;
   hasError?: boolean;
+  defaultOpen?: boolean;
 }) {
-  const [open, setOpen] = useState(false);
+  const [open, setOpen] = useState(defaultOpen);
   const [searchValue, setSearchValue] = useState("");

   // Flatten grouped options or use regular options
@@ -77,6 +79,13 @@ export function ModelSelector({
     }
   }, [allOptions, value, custom, onValueChange]);

+  // Update open state when defaultOpen changes
+  useEffect(() => {
+    if (defaultOpen) {
+      setOpen(true);
+    }
+  }, [defaultOpen]);
+
   return (
     <Popover open={open} onOpenChange={setOpen} modal={false}>
       <PopoverTrigger asChild>
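The `defaultOpen` change above seeds the popover's open state and re-opens it whenever the prop flips to true, but never force-closes it. A minimal runnable sketch of that rule as a pure function (a hypothetical helper for illustration, not part of the PR):

```typescript
// Hypothetical helper, not in the PR: the open-state rule implied by
// useState(defaultOpen) plus an effect that only ever calls setOpen(true).
function nextOpenState(currentOpen: boolean, defaultOpen: boolean): boolean {
  // defaultOpen can open the selector, but never closes it.
  return defaultOpen ? true : currentOpen;
}

console.log(nextOpenState(false, true)); // true: the prop flip opens it
console.log(nextOpenState(true, false)); // true: the prop never closes it
```

The asymmetry matters for the auto-focus flow: a stale `defaultOpen={false}` must not snap the selector shut while the user is browsing models.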


@@ -19,6 +19,7 @@ import {
   AnthropicSettingsForm,
   type AnthropicSettingsFormData,
 } from "./anthropic-settings-form";
+import { useRouter } from "next/navigation";

 const AnthropicSettingsDialog = ({
   open,
@@ -30,6 +31,7 @@ const AnthropicSettingsDialog = ({
   const queryClient = useQueryClient();
   const [isValidating, setIsValidating] = useState(false);
   const [validationError, setValidationError] = useState<Error | null>(null);
+  const router = useRouter();

   const methods = useForm<AnthropicSettingsFormData>({
     mode: "onSubmit",
@@ -60,9 +62,18 @@ const AnthropicSettingsDialog = ({
       };
       queryClient.setQueryData(["provider", "health"], healthData);

-      toast.success(
-        "Anthropic credentials saved. Configure models in the Settings page.",
-      );
+      toast.message("Anthropic successfully configured", {
+        description: "You can now access the provided language models.",
+        duration: Infinity,
+        closeButton: true,
+        icon: <AnthropicLogo className="w-4 h-4 text-[#D97757]" />,
+        action: {
+          label: "Settings",
+          onClick: () => {
+            router.push("/settings?focusLlmModel=true");
+          },
+        },
+      });
       setOpen(false);
     },
   });


@@ -96,20 +96,10 @@ export const ModelProviders = () => {
   const currentEmbeddingProvider =
     (settings.knowledge?.embedding_provider as ModelProvider) || "openai";

-  // Get all provider keys with active providers first
-  const activeProviders = new Set([
-    currentLlmProvider,
-    currentEmbeddingProvider,
-  ]);
-  const sortedProviderKeys = [
-    ...Array.from(activeProviders),
-    ...allProviderKeys.filter((key) => !activeProviders.has(key)),
-  ];
-
   return (
     <>
       <div className="grid gap-6 xs:grid-cols-1 md:grid-cols-2 lg:grid-cols-4">
-        {sortedProviderKeys.map((providerKey) => {
+        {allProviderKeys.map((providerKey) => {
           const {
             name,
             logo: Logo,
@@ -118,7 +108,6 @@ export const ModelProviders = () => {
           } = modelProvidersMap[providerKey];
           const isLlmProvider = providerKey === currentLlmProvider;
           const isEmbeddingProvider = providerKey === currentEmbeddingProvider;
-          const isCurrentProvider = isLlmProvider || isEmbeddingProvider;

           // Check if this specific provider is unhealthy
           const hasLlmError = isLlmProvider && health?.llm_error;
@@ -161,16 +150,8 @@ export const ModelProviders = () => {
                 </div>
                 <CardTitle className="flex flex-row items-center gap-2">
                   {name}
-                  {isCurrentProvider && (
-                    <span
-                      className={cn(
-                        "h-2 w-2 rounded-full",
-                        isProviderUnhealthy
-                          ? "bg-destructive"
-                          : "bg-accent-emerald-foreground",
-                      )}
-                      aria-label={isProviderUnhealthy ? "Error" : "Active"}
-                    />
-                  )}
+                  {isProviderUnhealthy && (
+                    <span className="h-2 w-2 rounded-full bg-destructive" />
+                  )}
                 </CardTitle>
               </div>


@@ -21,6 +21,7 @@ import {
   OllamaSettingsForm,
   type OllamaSettingsFormData,
 } from "./ollama-settings-form";
+import { useRouter } from "next/navigation";

 const OllamaSettingsDialog = ({
   open,
@@ -33,6 +34,7 @@ const OllamaSettingsDialog = ({
   const queryClient = useQueryClient();
   const [isValidating, setIsValidating] = useState(false);
   const [validationError, setValidationError] = useState<Error | null>(null);
+  const router = useRouter();

   const { data: settings = {} } = useGetSettingsQuery({
     enabled: isAuthenticated || isNoAuthMode,
@@ -71,9 +73,19 @@ const OllamaSettingsDialog = ({
       };
       queryClient.setQueryData(["provider", "health"], healthData);

-      toast.success(
-        "Ollama endpoint saved. Configure models in the Settings page.",
-      );
+      toast.message("Ollama successfully configured", {
+        description:
+          "You can now access the provided language and embedding models.",
+        duration: Infinity,
+        closeButton: true,
+        icon: <OllamaLogo className="w-4 h-4" />,
+        action: {
+          label: "Settings",
+          onClick: () => {
+            router.push("/settings?focusLlmModel=true");
+          },
+        },
+      });
       setOpen(false);
     },
   });


@@ -19,6 +19,7 @@ import {
   OpenAISettingsForm,
   type OpenAISettingsFormData,
 } from "./openai-settings-form";
+import { useRouter } from "next/navigation";

 const OpenAISettingsDialog = ({
   open,
@@ -30,6 +31,7 @@ const OpenAISettingsDialog = ({
   const queryClient = useQueryClient();
   const [isValidating, setIsValidating] = useState(false);
   const [validationError, setValidationError] = useState<Error | null>(null);
+  const router = useRouter();

   const methods = useForm<OpenAISettingsFormData>({
     mode: "onSubmit",
@@ -60,9 +62,19 @@ const OpenAISettingsDialog = ({
       };
       queryClient.setQueryData(["provider", "health"], healthData);

-      toast.success(
-        "OpenAI credentials saved. Configure models in the Settings page.",
-      );
+      toast.message("OpenAI successfully configured", {
+        description:
+          "You can now access the provided language and embedding models.",
+        duration: Infinity,
+        closeButton: true,
+        icon: <OpenAILogo className="w-4 h-4" />,
+        action: {
+          label: "Settings",
+          onClick: () => {
+            router.push("/settings?focusLlmModel=true");
+          },
+        },
+      });
       setOpen(false);
     },
   });


@@ -19,6 +19,7 @@ import {
   WatsonxSettingsForm,
   type WatsonxSettingsFormData,
 } from "./watsonx-settings-form";
+import { useRouter } from "next/navigation";

 const WatsonxSettingsDialog = ({
   open,
@@ -30,6 +31,7 @@ const WatsonxSettingsDialog = ({
   const queryClient = useQueryClient();
   const [isValidating, setIsValidating] = useState(false);
   const [validationError, setValidationError] = useState<Error | null>(null);
+  const router = useRouter();

   const methods = useForm<WatsonxSettingsFormData>({
     mode: "onSubmit",
@@ -65,9 +67,20 @@ const WatsonxSettingsDialog = ({
         provider: "watsonx",
       };
       queryClient.setQueryData(["provider", "health"], healthData);

-      toast.success(
-        "watsonx credentials saved. Configure models in the Settings page.",
-      );
+      toast.message("IBM watsonx.ai successfully configured", {
+        description:
+          "You can now access the provided language and embedding models.",
+        duration: Infinity,
+        closeButton: true,
+        icon: <IBMLogo className="w-4 h-4 text-[#1063FE]" />,
+        action: {
+          label: "Settings",
+          onClick: () => {
+            router.push("/settings?focusLlmModel=true");
+          },
+        },
+      });
       setOpen(false);
     },
   });


@@ -42,6 +42,7 @@ import { useUpdateSettingsMutation } from "../api/mutations/useUpdateSettingsMut
 import { ModelSelector } from "../onboarding/_components/model-selector";
 import ModelProviders from "./_components/model-providers";
 import { getModelLogo, type ModelProvider } from "./_helpers/model-helpers";
+import { cn } from "@/lib/utils";

 const { MAX_SYSTEM_PROMPT_CHARS } = UI_CONSTANTS;
@@ -97,6 +98,11 @@ function KnowledgeSourcesPage() {
   const searchParams = useSearchParams();
   const router = useRouter();

+  // Check if we should auto-open the LLM model selector
+  const focusLlmModel = searchParams.get("focusLlmModel") === "true";
+  // Use a trigger state that changes each time we detect the query param
+  const [openLlmSelector, setOpenLlmSelector] = useState(false);
+
   // Connectors state
   const [connectors, setConnectors] = useState<Connector[]>([]);
   const [isConnecting, setIsConnecting] = useState<string | null>(null);
@@ -300,6 +306,30 @@ function KnowledgeSourcesPage() {
     }
   }, [settings.knowledge?.picture_descriptions]);

+  // Handle auto-focus on LLM model selector when coming from provider setup
+  useEffect(() => {
+    if (focusLlmModel) {
+      // Trigger the selector to open
+      setOpenLlmSelector(true);
+
+      // Scroll to the agent card
+      const agentCard = document.getElementById("agent-card");
+      if (agentCard) {
+        agentCard.scrollIntoView({ behavior: "smooth", block: "start" });
+      }
+
+      // Clear the query parameter
+      const newSearchParams = new URLSearchParams(searchParams.toString());
+      newSearchParams.delete("focusLlmModel");
+      router.replace(`/settings?${newSearchParams.toString()}`, {
+        scroll: false,
+      });
+
+      // Reset the trigger after a brief delay so it can be triggered again
+      setTimeout(() => setOpenLlmSelector(false), 100);
+    }
+  }, [focusLlmModel, searchParams, router]);
+
   // Update model selection immediately (also updates provider)
   const handleModelChange = (newModel: string, provider?: string) => {
     if (newModel && provider) {
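The effect above strips `focusLlmModel` from the URL with `URLSearchParams` before calling `router.replace`, so a page refresh does not re-trigger the auto-open. That cleanup can be exercised standalone; `tab=connectors` below is an illustrative extra parameter, not one the PR defines:

```typescript
// Standalone sketch of the query-param cleanup used in the effect above.
// Any other params in the query string survive the delete untouched.
const params = new URLSearchParams("focusLlmModel=true&tab=connectors");
params.delete("focusLlmModel");
console.log(params.toString()); // "tab=connectors"
```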
@@ -389,7 +419,7 @@ function KnowledgeSourcesPage() {
     // Initialize connectors list with metadata from backend
     const initialConnectors = connectorTypes
-      .filter((type) => connectorsResult.connectors[type].available) // Only show available connectors
+      // .filter((type) => connectorsResult.connectors[type].available) // Only show available connectors
       .map((type) => ({
         id: type,
         name: connectorsResult.connectors[type].name,
@@ -537,25 +567,6 @@ function KnowledgeSourcesPage() {
   //   }
   // };

-  const getStatusBadge = (status: Connector["status"]) => {
-    switch (status) {
-      case "connected":
-        return (
-          <div className="h-2 w-2 bg-accent-emerald-foreground rounded-full" />
-        );
-      case "connecting":
-        return (
-          <div className="h-2 w-2 bg-accent-amber-foreground rounded-full" />
-        );
-      case "error":
-        return (
-          <div className="h-2 w-2 bg-accent-red-foreground rounded-full" />
-        );
-      default:
-        return <div className="h-2 w-2 bg-muted rounded-full" />;
-    }
-  };
-
   const navigateToKnowledgePage = (connector: Connector) => {
     const provider = connector.type.replace(/-/g, "_");
     router.push(`/upload/${provider}`);
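`navigateToKnowledgePage` above derives the upload route by swapping hyphens for underscores in the connector type. A tiny runnable sketch of that mapping; the connector names are examples, not taken from the PR:

```typescript
// Example sketch of the connector-type → provider-id mapping used above.
// The /g flag matters: every hyphen is replaced, not just the first.
const toProviderId = (connectorType: string): string =>
  connectorType.replace(/-/g, "_");

console.log(toProviderId("google-drive")); // "google_drive"
console.log(toProviderId("sharepoint"));   // "sharepoint" (no hyphens, unchanged)
```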
@@ -693,7 +704,7 @@ function KnowledgeSourcesPage() {
           {/* Conditional Sync Settings or No-Auth Message */}
           {
             isNoAuthMode ? (
-              <Card className="border-yellow-500">
+              <Card className="border-accent-amber-foreground">
                 <CardHeader>
                   <CardTitle className="text-lg">
                     Cloud connectors require authentication
@@ -798,26 +809,28 @@ function KnowledgeSourcesPage() {
             {connectors.map((connector) => {
               return (
                 <Card key={connector.id} className="relative flex flex-col">
-                  <CardHeader>
+                  <CardHeader className="pb-2">
                     <div className="flex flex-col items-start justify-between">
                       <div className="flex flex-col gap-3">
                         <div className="mb-1">
                           <div
-                            className={`w-8 h-8 ${
-                              connector ? "bg-white" : "bg-muted grayscale"
-                            } rounded flex items-center justify-center border`}
+                            className={cn(
+                              "w-8 h-8 rounded flex items-center justify-center border",
+                              connector?.available
+                                ? "bg-white"
+                                : "bg-muted grayscale",
+                            )}
                           >
                             {connector.icon}
                           </div>
                         </div>
                         <CardTitle className="flex flex-row items-center gap-2">
                           {connector.name}
-                          {connector && getStatusBadge(connector.status)}
                         </CardTitle>
-                        <CardDescription className="text-[13px]">
-                          {connector?.description
+                        <CardDescription className="text-sm">
+                          {connector?.available
                             ? `${connector.name} is configured.`
-                            : connector.description}
+                            : "Not configured."}
                         </CardDescription>
                       </div>
                     </div>
@@ -876,12 +889,14 @@ function KnowledgeSourcesPage() {
               )}
             </div>
           ) : (
-            <div className="text-[13px] text-muted-foreground">
+            <div className="text-sm text-muted-foreground">
               <p>
                 See our{" "}
                 <Link
                   className="text-accent-pink-foreground"
-                  href="https://github.com/langflow-ai/openrag/pull/96/files#diff-06889aa94ccf8dac64e70c8cc30a2ceed32cc3c0c2c14a6ff0336fe882a9c2ccR41"
+                  href="https://docs.openr.ag/knowledge#oauth-ingestion"
+                  target="_blank"
+                  rel="noopener noreferrer"
                 >
                   Cloud Connectors installation guide
                 </Link>{" "}
@@ -907,7 +922,7 @@ function KnowledgeSourcesPage() {
           </div>

           {/* Agent Behavior Section */}
-          <Card>
+          <Card id="agent-card">
             <CardHeader>
               <div className="flex items-center justify-between mb-3">
                 <CardTitle className="text-lg">Agent</CardTitle>
@@ -996,6 +1011,7 @@ function KnowledgeSourcesPage() {
                     }
                     value={settings.agent?.llm_model || ""}
                     onValueChange={handleModelChange}
+                    defaultOpen={openLlmSelector}
                   />
                 </LabelWrapper>
               </div>


@@ -4,7 +4,7 @@
 export const DEFAULT_AGENT_SETTINGS = {
   llm_model: "gpt-4o-mini",
   system_prompt:
-    'You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- "Read this link"\n- "Summarize this webpage"\n- "What does this site say?"\n- "Ingest this URL"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: "What is OpenRAG", answer the following:\n"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.langflow.org/)\n**OpenSearch** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://opensearch.org/)\n**Docling** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.docling.ai/)"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: "No relevant supporting sources were found for that request."\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought.',
+    'You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers. [Read more](https://www.langflow.org/)\n**OpenSearch** OpenSearch is an open source, search and observability suite that brings order to unstructured data at scale. [Read more](https://opensearch.org/)\n**Docling** Docling simplifies document processing with advanced PDF understanding, OCR support, and seamless AI integrations. Parse PDFs, DOCX, PPTX, images & more. [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought.',
 } as const;

 /**
/** /**


@ -34,7 +34,7 @@ def get_conversation_thread(user_id: str, previous_response_id: str = None):
"messages": [ "messages": [
{ {
"role": "system", "role": "system",
-               "content": "You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.langflow.org/)\n**OpenSearch** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://opensearch.org/)\n**Docling** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought.",
+               "content": "You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers. [Read more](https://www.langflow.org/)\n**OpenSearch** OpenSearch is an open source, search and observability suite that brings order to unstructured data at scale. [Read more](https://opensearch.org/)\n**Docling** Docling simplifies document processing with advanced PDF understanding, OCR support, and seamless AI integrations. Parse PDFs, DOCX, PPTX, images & more. [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought.",
            }
        ],
        "previous_response_id": previous_response_id,  # Parent response_id for branching
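The hunk above shows `get_conversation_thread` attaching a `previous_response_id` so that each turn can point at a parent response, which is what makes conversation branching possible. Below is a minimal, hypothetical sketch of that payload shape; the `SYSTEM_PROMPT` constant and the example response IDs are illustrative only and not part of the diff.

```python
# Illustrative prompt text only; the real prompt is the long string in the diff above.
SYSTEM_PROMPT = "You are the OpenRAG Agent."

def get_conversation_thread(user_id: str, previous_response_id: str = None):
    """Build the request payload for one conversation turn.

    previous_response_id identifies the parent turn; two payloads that share
    the same parent fork the conversation into separate branches.
    """
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
        ],
        "previous_response_id": previous_response_id,  # Parent response_id for branching
    }

# Linear continuation: each new turn points at the previous response.
turn_1 = get_conversation_thread("user-123")              # root turn, no parent
turn_2 = get_conversation_thread("user-123", "resp_abc")  # child of resp_abc

# Branching: two turns with the same parent fork the thread.
branch_a = get_conversation_thread("user-123", "resp_abc")
branch_b = get_conversation_thread("user-123", "resp_abc")
```

Under this reading, the server can reconstruct any branch of the conversation tree by walking `previous_response_id` links back to the root.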