Compare commits


5 commits

10 changed files with 1835 additions and 1781 deletions

View file

@@ -2817,7 +2817,7 @@
           "trace_as_input": true,
           "trace_as_metadata": true,
           "type": "str",
-          "value": "You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.langflow.org/)\n**OpenSearch** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://opensearch.org/)\n**Docling** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought."
+          "value": "You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers. [Read more](https://www.langflow.org/)\n**OpenSearch** OpenSearch is an open source, search and observability suite that brings order to unstructured data at scale. [Read more](https://opensearch.org/)\n**Docling** Docling simplifies document processing with advanced PDF understanding, OCR support, and seamless AI integrations. Parse PDFs, DOCX, PPTX, images & more. [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought."
         },
         "temperature": {
           "_input_type": "SliderInput",

View file

@@ -41,6 +41,7 @@ export function ModelSelector({
   noOptionsPlaceholder = "No models available",
   custom = false,
   hasError = false,
+  defaultOpen = false,
 }: {
   options?: ModelOption[];
   groupedOptions?: GroupedModelOption[];
@@ -52,8 +53,9 @@ export function ModelSelector({
   custom?: boolean;
   onValueChange: (value: string, provider?: string) => void;
   hasError?: boolean;
+  defaultOpen?: boolean;
 }) {
-  const [open, setOpen] = useState(false);
+  const [open, setOpen] = useState(defaultOpen);
   const [searchValue, setSearchValue] = useState("");

   // Flatten grouped options or use regular options
@@ -77,6 +79,13 @@ export function ModelSelector({
     }
   }, [allOptions, value, custom, onValueChange]);

+  // Update open state when defaultOpen changes
+  useEffect(() => {
+    if (defaultOpen) {
+      setOpen(true);
+    }
+  }, [defaultOpen]);
+
   return (
     <Popover open={open} onOpenChange={setOpen} modal={false}>
       <PopoverTrigger asChild>
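The new `defaultOpen` prop does two things: it seeds the initial `open` state, and the effect re-opens the popover whenever the prop later flips to `true`. A minimal sketch of that logic as pure functions (the function names are illustrative, not from the codebase; `initialOpen` models `useState(defaultOpen)` and `applyDefaultOpenEffect` models the effect body):

```typescript
// initialOpen models useState(defaultOpen): the prop is read once on mount.
function initialOpen(defaultOpen: boolean): boolean {
  return defaultOpen;
}

// applyDefaultOpenEffect models the useEffect body: it only ever forces the
// popover open, never closed, so a user who dismissed the popover is not
// fought by a later render where defaultOpen is false.
function applyDefaultOpenEffect(current: boolean, defaultOpen: boolean): boolean {
  return defaultOpen ? true : current;
}
```

The asymmetry matters: because the effect never writes `false`, `defaultOpen` behaves as an "open on demand" signal rather than a controlled value.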

View file

@@ -9,150 +9,161 @@ import type { ProviderHealthResponse } from "@/app/api/queries/useProviderHealth
 import AnthropicLogo from "@/components/icons/anthropic-logo";
 import { Button } from "@/components/ui/button";
 import {
   Dialog,
   DialogContent,
   DialogFooter,
   DialogHeader,
   DialogTitle,
 } from "@/components/ui/dialog";
 import {
   AnthropicSettingsForm,
   type AnthropicSettingsFormData,
 } from "./anthropic-settings-form";
+import { useRouter } from "next/navigation";

 const AnthropicSettingsDialog = ({
   open,
   setOpen,
 }: {
   open: boolean;
   setOpen: (open: boolean) => void;
 }) => {
   const queryClient = useQueryClient();
   const [isValidating, setIsValidating] = useState(false);
   const [validationError, setValidationError] = useState<Error | null>(null);
+  const router = useRouter();

   const methods = useForm<AnthropicSettingsFormData>({
     mode: "onSubmit",
     defaultValues: {
       apiKey: "",
     },
   });

   const { handleSubmit, watch } = methods;
   const apiKey = watch("apiKey");

   const { refetch: validateCredentials } = useGetAnthropicModelsQuery(
     {
       apiKey: apiKey,
     },
     {
       enabled: false,
     },
   );

   const settingsMutation = useUpdateSettingsMutation({
     onSuccess: () => {
       // Update provider health cache to healthy since backend validated the setup
       const healthData: ProviderHealthResponse = {
         status: "healthy",
         message: "Provider is configured and working correctly",
         provider: "anthropic",
       };
       queryClient.setQueryData(["provider", "health"], healthData);
-      toast.success(
-        "Anthropic credentials saved. Configure models in the Settings page.",
-      );
+      toast.message("Anthropic successfully configured", {
+        description: "You can now access the provided language models.",
+        duration: Infinity,
+        closeButton: true,
+        icon: <AnthropicLogo className="w-4 h-4 text-[#D97757]" />,
+        action: {
+          label: "Settings",
+          onClick: () => {
+            router.push("/settings?focusLlmModel=true");
+          },
+        },
+      });
       setOpen(false);
     },
   });

   const onSubmit = async (data: AnthropicSettingsFormData) => {
     // Clear any previous validation errors
     setValidationError(null);

     // Only validate if a new API key was entered
     if (data.apiKey) {
       setIsValidating(true);
       const result = await validateCredentials();
       setIsValidating(false);
       if (result.isError) {
         setValidationError(result.error);
         return;
       }
     }

     const payload: {
       anthropic_api_key?: string;
     } = {};

     // Only include api_key if a value was entered
     if (data.apiKey) {
       payload.anthropic_api_key = data.apiKey;
     }

     // Submit the update
     settingsMutation.mutate(payload);
   };

   return (
     <Dialog open={open} onOpenChange={setOpen}>
       <DialogContent className="max-w-2xl">
         <FormProvider {...methods}>
           <form onSubmit={handleSubmit(onSubmit)} className="grid gap-4">
             <DialogHeader className="mb-2">
               <DialogTitle className="flex items-center gap-3">
                 <div className="w-8 h-8 rounded flex items-center justify-center bg-white border">
                   <AnthropicLogo className="text-black" />
                 </div>
                 Anthropic Setup
               </DialogTitle>
             </DialogHeader>
             <AnthropicSettingsForm
               modelsError={validationError}
               isLoadingModels={isValidating}
             />
             <AnimatePresence mode="wait">
               {settingsMutation.isError && (
                 <motion.div
                   key="error"
                   initial={{ opacity: 0, y: 10 }}
                   animate={{ opacity: 1, y: 0 }}
                   exit={{ opacity: 0, y: -10 }}
                 >
                   <p className="rounded-lg border border-destructive p-4">
                     {settingsMutation.error?.message}
                   </p>
                 </motion.div>
               )}
             </AnimatePresence>
             <DialogFooter className="mt-4">
               <Button
                 variant="outline"
                 type="button"
                 onClick={() => setOpen(false)}
               >
                 Cancel
               </Button>
               <Button
                 type="submit"
                 disabled={settingsMutation.isPending || isValidating}
               >
                 {settingsMutation.isPending
                   ? "Saving..."
                   : isValidating
                     ? "Validating..."
                     : "Save"}
               </Button>
             </DialogFooter>
           </form>
         </FormProvider>
       </DialogContent>
     </Dialog>
   );
 };

 export default AnthropicSettingsDialog;
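The replacement toast relies on sonner's `toast.message` options: a description, a persistent duration, an explicit close button, and a "Settings" action that navigates away. A sketch of that option shape as a pure builder (the builder function is hypothetical; the real code passes the options inline to `toast.message`, and the navigation callback wraps `router.push("/settings?focusLlmModel=true")`):

```typescript
// Hypothetical builder for the sonner toast options shown after a provider
// is configured successfully.
function buildConfiguredToastOptions(goToSettings: () => void) {
  return {
    description: "You can now access the provided language models.",
    duration: Infinity, // toast persists until the user dismisses it
    closeButton: true,  // give the user an explicit way to dismiss
    action: {
      label: "Settings",
      onClick: goToSettings, // in the diff: router.push("/settings?focusLlmModel=true")
    },
  };
}
```

`duration: Infinity` plus `closeButton: true` is the key design choice here: the old `toast.success` auto-dismissed, while the new toast stays until acted on, so the follow-up step (picking a model in Settings) is harder to miss.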

View file

@@ -96,20 +96,10 @@ export const ModelProviders = () => {
   const currentEmbeddingProvider =
     (settings.knowledge?.embedding_provider as ModelProvider) || "openai";

-  // Get all provider keys with active providers first
-  const activeProviders = new Set([
-    currentLlmProvider,
-    currentEmbeddingProvider,
-  ]);
-  const sortedProviderKeys = [
-    ...Array.from(activeProviders),
-    ...allProviderKeys.filter((key) => !activeProviders.has(key)),
-  ];
-
   return (
     <>
       <div className="grid gap-6 xs:grid-cols-1 md:grid-cols-2 lg:grid-cols-4">
-        {sortedProviderKeys.map((providerKey) => {
+        {allProviderKeys.map((providerKey) => {
           const {
             name,
             logo: Logo,
@@ -118,7 +108,6 @@ export const ModelProviders = () => {
           } = modelProvidersMap[providerKey];
           const isLlmProvider = providerKey === currentLlmProvider;
           const isEmbeddingProvider = providerKey === currentEmbeddingProvider;
-          const isCurrentProvider = isLlmProvider || isEmbeddingProvider;

           // Check if this specific provider is unhealthy
           const hasLlmError = isLlmProvider && health?.llm_error;
@@ -161,16 +150,8 @@ export const ModelProviders = () => {
             </div>
             <CardTitle className="flex flex-row items-center gap-2">
               {name}
-              {isCurrentProvider && (
-                <span
-                  className={cn(
-                    "h-2 w-2 rounded-full",
-                    isProviderUnhealthy
-                      ? "bg-destructive"
-                      : "bg-accent-emerald-foreground",
-                  )}
-                  aria-label={isProviderUnhealthy ? "Error" : "Active"}
-                />
-              )}
+              {isProviderUnhealthy && (
+                <span className="h-2 w-2 rounded-full bg-destructive" />
+              )}
             </CardTitle>
           </div>
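The removed block ordered provider cards with the active LLM and embedding providers first; after this change, `allProviderKeys` renders in its natural order and only an unhealthy provider gets a status dot. For reference, a sketch of the old active-first ordering as a standalone helper (the helper name is illustrative; the original built the `Set` and spread inline):

```typescript
// Illustrative reimplementation of the removed ordering: the active LLM and
// embedding providers (deduplicated via Set) come first, and the remaining
// keys keep their original relative order.
function sortActiveFirst(
  allKeys: string[],
  llmProvider: string,
  embeddingProvider: string,
): string[] {
  const active = new Set([llmProvider, embeddingProvider]);
  return [
    ...Array.from(active),
    ...allKeys.filter((key) => !active.has(key)),
  ];
}
```

Note one quirk of the original approach: when the active provider is not in `allKeys` at all, `Array.from(active)` still emits it, which is one reason a stable, unsorted `allProviderKeys.map(...)` is the simpler render.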

View file

@@ -10,150 +10,162 @@ import type { ProviderHealthResponse } from "@/app/api/queries/useProviderHealth
 import OllamaLogo from "@/components/icons/ollama-logo";
 import { Button } from "@/components/ui/button";
 import {
   Dialog,
   DialogContent,
   DialogFooter,
   DialogHeader,
   DialogTitle,
 } from "@/components/ui/dialog";
 import { useAuth } from "@/contexts/auth-context";
 import {
   OllamaSettingsForm,
   type OllamaSettingsFormData,
 } from "./ollama-settings-form";
+import { useRouter } from "next/navigation";

 const OllamaSettingsDialog = ({
   open,
   setOpen,
 }: {
   open: boolean;
   setOpen: (open: boolean) => void;
 }) => {
   const { isAuthenticated, isNoAuthMode } = useAuth();
   const queryClient = useQueryClient();
   const [isValidating, setIsValidating] = useState(false);
   const [validationError, setValidationError] = useState<Error | null>(null);
+  const router = useRouter();

   const { data: settings = {} } = useGetSettingsQuery({
     enabled: isAuthenticated || isNoAuthMode,
   });
   const isOllamaConfigured = settings.providers?.ollama?.configured === true;

   const methods = useForm<OllamaSettingsFormData>({
     mode: "onSubmit",
     defaultValues: {
       endpoint: isOllamaConfigured
         ? settings.providers?.ollama?.endpoint
         : "http://localhost:11434",
     },
   });

   const { handleSubmit, watch } = methods;
   const endpoint = watch("endpoint");

   const { refetch: validateCredentials } = useGetOllamaModelsQuery(
     {
       endpoint: endpoint,
     },
     {
       enabled: false,
     },
   );

   const settingsMutation = useUpdateSettingsMutation({
     onSuccess: () => {
       // Update provider health cache to healthy since backend validated the setup
       const healthData: ProviderHealthResponse = {
         status: "healthy",
         message: "Provider is configured and working correctly",
         provider: "ollama",
       };
       queryClient.setQueryData(["provider", "health"], healthData);
-      toast.success(
-        "Ollama endpoint saved. Configure models in the Settings page.",
-      );
+      toast.message("Ollama successfully configured", {
+        description:
+          "You can now access the provided language and embedding models.",
+        duration: Infinity,
+        closeButton: true,
+        icon: <OllamaLogo className="w-4 h-4" />,
+        action: {
+          label: "Settings",
+          onClick: () => {
+            router.push("/settings?focusLlmModel=true");
+          },
+        },
+      });
       setOpen(false);
     },
   });

   const onSubmit = async (data: OllamaSettingsFormData) => {
     // Clear any previous validation errors
     setValidationError(null);

     // Validate endpoint by fetching models
     setIsValidating(true);
     const result = await validateCredentials();
     setIsValidating(false);
     if (result.isError) {
       setValidationError(result.error);
       return;
     }

     settingsMutation.mutate({
       ollama_endpoint: data.endpoint,
     });
   };

   return (
     <Dialog open={open} onOpenChange={setOpen}>
       <DialogContent className="max-w-2xl">
         <FormProvider {...methods}>
           <form onSubmit={handleSubmit(onSubmit)} className="grid gap-4">
             <DialogHeader className="mb-2">
               <DialogTitle className="flex items-center gap-3">
                 <div className="w-8 h-8 rounded flex items-center justify-center bg-white border">
                   <OllamaLogo className="text-black" />
                 </div>
                 Ollama Setup
               </DialogTitle>
             </DialogHeader>
             <OllamaSettingsForm
               modelsError={validationError}
               isLoadingModels={isValidating}
             />
             <AnimatePresence mode="wait">
               {settingsMutation.isError && (
                 <motion.div
                   key="error"
                   initial={{ opacity: 0, y: 10 }}
                   animate={{ opacity: 1, y: 0 }}
                   exit={{ opacity: 0, y: -10 }}
                 >
                   <p className="rounded-lg border border-destructive p-4">
                     {settingsMutation.error?.message}
                   </p>
                 </motion.div>
               )}
             </AnimatePresence>
             <DialogFooter className="mt-4">
               <Button
                 variant="outline"
                 type="button"
                 onClick={() => setOpen(false)}
               >
                 Cancel
               </Button>
               <Button
                 type="submit"
                 disabled={settingsMutation.isPending || isValidating}
               >
                 {settingsMutation.isPending
                   ? "Saving..."
                   : isValidating
                     ? "Validating..."
                     : "Save"}
               </Button>
             </DialogFooter>
           </form>
         </FormProvider>
       </DialogContent>
     </Dialog>
   );
 };

 export default OllamaSettingsDialog;

View file

@@ -9,150 +9,162 @@ import type { ProviderHealthResponse } from "@/app/api/queries/useProviderHealth
 import OpenAILogo from "@/components/icons/openai-logo";
 import { Button } from "@/components/ui/button";
 import {
   Dialog,
   DialogContent,
   DialogFooter,
   DialogHeader,
   DialogTitle,
 } from "@/components/ui/dialog";
 import {
   OpenAISettingsForm,
   type OpenAISettingsFormData,
 } from "./openai-settings-form";
+import { useRouter } from "next/navigation";

 const OpenAISettingsDialog = ({
   open,
   setOpen,
 }: {
   open: boolean;
   setOpen: (open: boolean) => void;
 }) => {
   const queryClient = useQueryClient();
   const [isValidating, setIsValidating] = useState(false);
   const [validationError, setValidationError] = useState<Error | null>(null);
+  const router = useRouter();

   const methods = useForm<OpenAISettingsFormData>({
     mode: "onSubmit",
     defaultValues: {
       apiKey: "",
     },
   });

   const { handleSubmit, watch } = methods;
   const apiKey = watch("apiKey");

   const { refetch: validateCredentials } = useGetOpenAIModelsQuery(
     {
       apiKey: apiKey,
     },
     {
       enabled: false,
     },
   );

   const settingsMutation = useUpdateSettingsMutation({
     onSuccess: () => {
       // Update provider health cache to healthy since backend validated the setup
       const healthData: ProviderHealthResponse = {
         status: "healthy",
         message: "Provider is configured and working correctly",
         provider: "openai",
       };
       queryClient.setQueryData(["provider", "health"], healthData);
-      toast.success(
-        "OpenAI credentials saved. Configure models in the Settings page.",
-      );
+      toast.message("OpenAI successfully configured", {
+        description:
+          "You can now access the provided language and embedding models.",
+        duration: Infinity,
+        closeButton: true,
+        icon: <OpenAILogo className="w-4 h-4" />,
+        action: {
+          label: "Settings",
+          onClick: () => {
+            router.push("/settings?focusLlmModel=true");
+          },
+        },
+      });
       setOpen(false);
     },
   });

   const onSubmit = async (data: OpenAISettingsFormData) => {
     // Clear any previous validation errors
     setValidationError(null);

     // Only validate if a new API key was entered
     if (data.apiKey) {
       setIsValidating(true);
       const result = await validateCredentials();
       setIsValidating(false);
       if (result.isError) {
         setValidationError(result.error);
         return;
       }
     }

     const payload: {
       openai_api_key?: string;
     } = {};

     // Only include api_key if a value was entered
     if (data.apiKey) {
       payload.openai_api_key = data.apiKey;
     }

     // Submit the update
     settingsMutation.mutate(payload);
   };

   return (
     <Dialog open={open} onOpenChange={setOpen}>
       <DialogContent className="max-w-2xl">
         <FormProvider {...methods}>
           <form onSubmit={handleSubmit(onSubmit)} className="grid gap-4">
             <DialogHeader className="mb-2">
               <DialogTitle className="flex items-center gap-3">
                 <div className="w-8 h-8 rounded flex items-center justify-center bg-white border">
                   <OpenAILogo className="text-black" />
                 </div>
                 OpenAI Setup
               </DialogTitle>
             </DialogHeader>
             <OpenAISettingsForm
               modelsError={validationError}
               isLoadingModels={isValidating}
             />
             <AnimatePresence mode="wait">
               {settingsMutation.isError && (
                 <motion.div
                   key="error"
                   initial={{ opacity: 0, y: 10 }}
                   animate={{ opacity: 1, y: 0 }}
                   exit={{ opacity: 0, y: -10 }}
                 >
                   <p className="rounded-lg border border-destructive p-4">
                     {settingsMutation.error?.message}
                   </p>
                 </motion.div>
               )}
             </AnimatePresence>
             <DialogFooter className="mt-4">
               <Button
                 variant="outline"
                 type="button"
                 onClick={() => setOpen(false)}
               >
                 Cancel
               </Button>
               <Button
                 type="submit"
                 disabled={settingsMutation.isPending || isValidating}
               >
                 {settingsMutation.isPending
                   ? "Saving..."
                   : isValidating
                     ? "Validating..."
                     : "Save"}
               </Button>
             </DialogFooter>
           </form>
         </FormProvider>
       </DialogContent>
     </Dialog>
   );
 };

 export default OpenAISettingsDialog;
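The API-key dialogs (Anthropic and OpenAI) build their settings payload the same way: the key field is included only when the user actually typed one, so saving the form with an empty field does not overwrite a previously stored key. A sketch of that guard, using the OpenAI field name from the diff (the standalone function is illustrative; the original builds the payload inline in `onSubmit`):

```typescript
// Include openai_api_key only when a non-empty key was entered, matching the
// dialogs' "Only include api_key if a value was entered" comment. An empty
// string yields an empty payload, leaving the stored key untouched.
function buildSettingsPayload(apiKey: string): { openai_api_key?: string } {
  const payload: { openai_api_key?: string } = {};
  if (apiKey) {
    payload.openai_api_key = apiKey;
  }
  return payload;
}
```

This is why the key is declared optional in the payload type: "field absent" means "leave as-is", while "field present" means "replace".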

View file

@ -9,158 +9,171 @@ import type { ProviderHealthResponse } from "@/app/api/queries/useProviderHealth
import IBMLogo from "@/components/icons/ibm-logo"; import IBMLogo from "@/components/icons/ibm-logo";
import { Button } from "@/components/ui/button"; import { Button } from "@/components/ui/button";
import { import {
Dialog, Dialog,
DialogContent, DialogContent,
DialogFooter, DialogFooter,
DialogHeader, DialogHeader,
DialogTitle, DialogTitle,
} from "@/components/ui/dialog"; } from "@/components/ui/dialog";
import { import {
WatsonxSettingsForm, WatsonxSettingsForm,
type WatsonxSettingsFormData, type WatsonxSettingsFormData,
} from "./watsonx-settings-form"; } from "./watsonx-settings-form";
+import { useRouter } from "next/navigation";

const WatsonxSettingsDialog = ({
  open,
  setOpen,
}: {
  open: boolean;
  setOpen: (open: boolean) => void;
}) => {
  const queryClient = useQueryClient();
  const [isValidating, setIsValidating] = useState(false);
  const [validationError, setValidationError] = useState<Error | null>(null);
+  const router = useRouter();
  const methods = useForm<WatsonxSettingsFormData>({
    mode: "onSubmit",
    defaultValues: {
      endpoint: "https://us-south.ml.cloud.ibm.com",
      apiKey: "",
      projectId: "",
    },
  });

  const { handleSubmit, watch } = methods;
  const endpoint = watch("endpoint");
  const apiKey = watch("apiKey");
  const projectId = watch("projectId");

  const { refetch: validateCredentials } = useGetIBMModelsQuery(
    {
      endpoint: endpoint,
      apiKey: apiKey,
      projectId: projectId,
    },
    {
      enabled: false,
    },
  );
  const settingsMutation = useUpdateSettingsMutation({
    onSuccess: () => {
      // Update provider health cache to healthy since backend validated the setup
      const healthData: ProviderHealthResponse = {
        status: "healthy",
        message: "Provider is configured and working correctly",
        provider: "watsonx",
      };
      queryClient.setQueryData(["provider", "health"], healthData);

-      toast.success(
-        "watsonx credentials saved. Configure models in the Settings page.",
-      );
+      toast.message("IBM watsonx.ai successfully configured", {
+        description:
+          "You can now access the provided language and embedding models.",
+        duration: Infinity,
+        closeButton: true,
+        icon: <IBMLogo className="w-4 h-4 text-[#1063FE]" />,
+        action: {
+          label: "Settings",
+          onClick: () => {
+            router.push("/settings?focusLlmModel=true");
+          },
+        },
+      });
      setOpen(false);
    },
  });

  const onSubmit = async (data: WatsonxSettingsFormData) => {
    // Clear any previous validation errors
    setValidationError(null);

    // Validate credentials by fetching models
    setIsValidating(true);
    const result = await validateCredentials();
    setIsValidating(false);

    if (result.isError) {
      setValidationError(result.error);
      return;
    }

    const payload: {
      watsonx_endpoint: string;
      watsonx_api_key?: string;
      watsonx_project_id: string;
    } = {
      watsonx_endpoint: data.endpoint,
      watsonx_project_id: data.projectId,
    };

    // Only include api_key if a value was entered
    if (data.apiKey) {
      payload.watsonx_api_key = data.apiKey;
    }

    // Submit the update
    settingsMutation.mutate(payload);
  };

  return (
    <Dialog open={open} onOpenChange={setOpen}>
      <DialogContent autoFocus={false} className="max-w-2xl">
        <FormProvider {...methods}>
          <form onSubmit={handleSubmit(onSubmit)} className="grid gap-4">
            <DialogHeader className="mb-2">
              <DialogTitle className="flex items-center gap-3">
                <div className="w-8 h-8 rounded flex items-center justify-center bg-white border">
                  <IBMLogo className="text-black" />
                </div>
                IBM watsonx.ai Setup
              </DialogTitle>
            </DialogHeader>
            <WatsonxSettingsForm
              modelsError={validationError}
              isLoadingModels={isValidating}
            />
            <AnimatePresence mode="wait">
              {settingsMutation.isError && (
                <motion.div
                  key="error"
                  initial={{ opacity: 0, y: 10 }}
                  animate={{ opacity: 1, y: 0 }}
                  exit={{ opacity: 0, y: -10 }}
                >
                  <p className="rounded-lg border border-destructive p-4">
                    {settingsMutation.error?.message}
                  </p>
                </motion.div>
              )}
            </AnimatePresence>
            <DialogFooter className="mt-4">
              <Button
                variant="outline"
                type="button"
                onClick={() => setOpen(false)}
              >
                Cancel
              </Button>
              <Button
                type="submit"
                disabled={settingsMutation.isPending || isValidating}
              >
                {settingsMutation.isPending
                  ? "Saving..."
                  : isValidating
                    ? "Validating..."
                    : "Save"}
              </Button>
            </DialogFooter>
          </form>
        </FormProvider>
      </DialogContent>
    </Dialog>
  );
};

export default WatsonxSettingsDialog;
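The `onSubmit` handler above builds its request body conditionally, including `watsonx_api_key` only when the user actually entered a value. As a minimal, self-contained sketch of that pattern (the helper name `buildWatsonxPayload` is an assumption introduced here for illustration, not part of the codebase):

```typescript
// Hypothetical helper mirroring the payload construction in onSubmit:
// the optional key is omitted entirely (not sent as "") when left blank,
// so the backend can keep the previously stored credential.
interface WatsonxPayload {
  watsonx_endpoint: string;
  watsonx_api_key?: string;
  watsonx_project_id: string;
}

function buildWatsonxPayload(data: {
  endpoint: string;
  apiKey: string;
  projectId: string;
}): WatsonxPayload {
  const payload: WatsonxPayload = {
    watsonx_endpoint: data.endpoint,
    watsonx_project_id: data.projectId,
  };
  // Only include api_key if a value was entered
  if (data.apiKey) {
    payload.watsonx_api_key = data.apiKey;
  }
  return payload;
}
```

Omitting the property, rather than sending an empty string, is what lets a partial update leave an existing stored key untouched.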

File diff suppressed because it is too large


@@ -4,7 +4,7 @@
export const DEFAULT_AGENT_SETTINGS = {
  llm_model: "gpt-4o-mini",
  system_prompt:
-    'You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- "Read this link"\n- "Summarize this webpage"\n- "What does this site say?"\n- "Ingest this URL"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: "What is OpenRAG", answer the following:\n"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.langflow.org/)\n**OpenSearch** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://opensearch.org/)\n**Docling** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.docling.ai/)"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: "No relevant supporting sources were found for that request."\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought.',
+    'You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- "Read this link"\n- "Summarize this webpage"\n- "What does this site say?"\n- "Ingest this URL"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: "What is OpenRAG", answer the following:\n"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers. [Read more](https://www.langflow.org/)\n**OpenSearch** OpenSearch is an open source, search and observability suite that brings order to unstructured data at scale. [Read more](https://opensearch.org/)\n**Docling** Docling simplifies document processing with advanced PDF understanding, OCR support, and seamless AI integrations. Parse PDFs, DOCX, PPTX, images & more. [Read more](https://www.docling.ai/)"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: "No relevant supporting sources were found for that request."\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought.',
} as const;

/**


@@ -34,7 +34,7 @@ def get_conversation_thread(user_id: str, previous_response_id: str = None):
        "messages": [
            {
                "role": "system",
-                "content": "You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.langflow.org/)\n**OpenSearch** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://opensearch.org/)\n**Docling** Langflow is a powerful tool to build and deploy AI agents and MCP servers [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought.",
+                "content": "You are the OpenRAG Agent. You answer questions using retrieval, reasoning, and tool use.\nYou have access to several tools. Your job is to determine **which tool to use and when**.\n### Available Tools\n- OpenSearch Retrieval Tool:\n Use this to search the indexed knowledge base. Use when the user asks about product details, internal concepts, processes, architecture, documentation, roadmaps, or anything that may be stored in the index.\n- Conversation History:\n Use this to maintain continuity when the user is referring to previous turns. \n Do not treat history as a factual source.\n- Conversation File Context:\n Use this when the user asks about a document they uploaded or refers directly to its contents.\n- URL Ingestion Tool:\n Use this **only** when the user explicitly asks you to read, summarize, or analyze the content of a URL.\n Do not ingest URLs automatically.\n- Calculator / Expression Evaluation Tool:\n Use this when the user asks to compare numbers, compute estimates, calculate totals, analyze pricing, or answer any question requiring mathematics or quantitative reasoning.\n If the answer requires arithmetic, call the calculator tool rather than calculating internally.\n### Retrieval Decision Rules\nUse OpenSearch **whenever**:\n1. The question may be answered from internal or indexed data.\n2. The user references team names, product names, release plans, configurations, requirements, or official information.\n3. The user needs a factual, grounded answer.\nDo **not** use retrieval if:\n- The question is purely creative (e.g., storytelling, analogies) or personal preference.\n- The user simply wants text reformatted or rewritten from what is already present in the conversation.\nWhen uncertain → **Retrieve.** Retrieval is low risk and improves grounding.\n### URL Ingestion Rules\nOnly ingest URLs when the user explicitly says:\n- \"Read this link\"\n- \"Summarize this webpage\"\n- \"What does this site say?\"\n- \"Ingest this URL\"\nIf unclear → ask a clarifying question.\n### Calculator Usage Rules\nUse the calculator when:\n- Performing arithmetic\n- Estimating totals\n- Comparing values\n- Modeling cost, time, effort, scale, or projections\nDo not perform math internally. **Call the calculator tool instead.**\n### Answer Construction Rules\n1. When asked: \"What is OpenRAG\", answer the following:\n\"OpenRAG is an open-source package for building agentic RAG systems. It supports integration with a wide range of orchestration tools, vector databases, and LLM providers. OpenRAG connects and amplifies three popular, proven open-source projects into one powerful platform:\n**Langflow** Langflow is a powerful tool to build and deploy AI agents and MCP servers. [Read more](https://www.langflow.org/)\n**OpenSearch** OpenSearch is an open source, search and observability suite that brings order to unstructured data at scale. [Read more](https://opensearch.org/)\n**Docling** Docling simplifies document processing with advanced PDF understanding, OCR support, and seamless AI integrations. Parse PDFs, DOCX, PPTX, images & more. [Read more](https://www.docling.ai/)\"\n2. Synthesize retrieved or ingested content in your own words.\n3. Support factual claims with citations in the format:\n (Source: <document_name_or_id>)\n4. If no supporting evidence is found:\n Say: \"No relevant supporting sources were found for that request.\"\n5. Never invent facts or hallucinate details.\n6. Be concise, direct, and confident. \n7. Do not reveal internal chain-of-thought.",
            }
        ],
        "previous_response_id": previous_response_id,  # Parent response_id for branching
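The hunk above changes only the system prompt inside the thread payload that `get_conversation_thread` builds. As a rough, self-contained sketch of that structure (the full prompt is replaced by a short `SYSTEM_PROMPT` stand-in, and the `"user_id"` key is an assumption; only `get_conversation_thread`, the `messages` shape, and `previous_response_id` appear in the diff):

```python
# Sketch of the conversation-thread payload assembled above.
# SYSTEM_PROMPT stands in for the full OpenRAG Agent prompt.
SYSTEM_PROMPT = "You are the OpenRAG Agent."


def get_conversation_thread(user_id: str, previous_response_id: str = None) -> dict:
    return {
        "user_id": user_id,  # assumed field; not shown in the hunk
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
        ],
        "previous_response_id": previous_response_id,  # Parent response_id for branching
    }
```

Carrying `previous_response_id` alongside the messages is what lets a new turn attach to an earlier response as its parent, so a conversation can branch rather than grow strictly linearly.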