From 845bfbf384473a55d1ec93aa0b1b40b067a7aed5 Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Wed, 12 Nov 2025 12:03:10 -0500
Subject: [PATCH] docs: add anthropic provider and new onboarding (#381)

* add-to-onboarding-partial

* anthropic-no-embeddings

* new-onboarding-card

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* remove-preload-files-step

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>
---
 docs/docs/_partial-onboarding.mdx | 49 +++++++++++++++++++------------
 1 file changed, 30 insertions(+), 19 deletions(-)

diff --git a/docs/docs/_partial-onboarding.mdx b/docs/docs/_partial-onboarding.mdx
index 54e5d3ce..fda43f6e 100644
--- a/docs/docs/_partial-onboarding.mdx
+++ b/docs/docs/_partial-onboarding.mdx
@@ -12,35 +12,47 @@ Most values from onboarding can be changed later in the OpenRAG **Settings** pag
 The **language model provider** and **embeddings model provider** can only be selected at onboarding.
 To change your provider selection later, you must [reinstall OpenRAG](/install#reinstall).
-You must use the same provider for your language model and embedding model, unless you're using Ollama.
+You can use different providers for your language model and embedding model, such as Anthropic for the language model and OpenAI for the embeddings model.
 :::
-Choose one LLM provider and complete only those steps:
+Choose one LLM provider and complete these steps:
 1. Enable **Get API key from environment variable** to automatically enter your key from the TUI-generated `.env` file.
 Alternatively, paste an OpenAI API key into the field.
- 2. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
- 3. To load 2 sample PDFs, enable **Sample dataset**.
- This is recommended, but not required.
- 4. Click **Complete**.
+ 2. Under **Advanced settings**, select your **Language Model**.
+ 3. Click **Complete**.
+ 4. In the second onboarding panel, select a provider for embeddings and select your **Embedding Model**.
 5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
 Alternatively, click
 1. Complete the fields for **watsonx.ai API Endpoint**, **IBM Project ID**, and **IBM API key**.
 These values are found in your IBM watsonx deployment.
- 2. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
- 3. To load 2 sample PDFs, enable **Sample dataset**.
- This is recommended, but not required.
- 4. Click **Complete**.
+ 2. Under **Advanced settings**, select your **Language Model**.
+ 3. Click **Complete**.
+ 4. In the second onboarding panel, select a provider for embeddings and select your **Embedding Model**.
 5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
 Alternatively, click
+
+ :::info
+ Anthropic does not provide embedding models. If you select Anthropic for your language model, you must then select a different provider for embeddings.
+ :::
+ 1. Enable **Use environment Anthropic API key** to automatically use your key from the TUI-generated `.env` file.
+ Alternatively, paste an Anthropic API key into the field.
+ 2. Under **Advanced settings**, select your **Language Model**.
+ 3. Click **Complete**.
+ 4. In the second onboarding panel, select a provider for embeddings and select your **Embedding Model**.
+ 5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
+ Alternatively, click
 :::tip
@@ -49,13 +61,12 @@ Choose one LLM provider and complete only those steps:
 1. Enter your Ollama server's base URL address.
 The default Ollama server address is `http://localhost:11434`.
 OpenRAG automatically transforms `localhost` to access services outside of the container, and sends a test connection to your Ollama server to confirm connectivity.
- 2. Select the **Embedding Model** and **Language Model** your Ollama server is running.
- OpenRAG retrieves the available models from your Ollama server.
- 3. To load 2 sample PDFs, enable **Sample dataset**.
- This is recommended, but not required.
- 4. Click **Complete**.
+ 2. Under **Advanced settings**, select your **Language Model** from the models available on your Ollama server.
+ 3. Click **Complete**.
+ 4. In the second onboarding panel, select your **Embedding Model** from the models available on your Ollama server.
 5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
 Alternatively, click
\ No newline at end of file
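A quick sanity check for the OpenAI and Anthropic steps in the patched docs, which read an API key from the TUI-generated `.env` file: the snippet below is only a sketch that assumes the vendors' conventional variable names (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`); the names the TUI actually writes may differ, so check your generated file.

```shell
# Sketch: confirm an API key landed in the TUI-generated .env before onboarding.
# OPENAI_API_KEY and ANTHROPIC_API_KEY are assumptions based on the vendors'
# conventional names; substitute whatever names your generated .env actually uses.
grep -E '^(OPENAI_API_KEY|ANTHROPIC_API_KEY)=' .env
```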
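For the Ollama tab, the test connection described in the docs can also be reproduced by hand. The sketch below uses Ollama's standard REST endpoint `GET /api/tags`, which lists the models the server can serve, on the default port `11434`; the `host.docker.internal` hostname is only an assumption about how a containerized deployment might reach a host-local Ollama server, not necessarily how OpenRAG rewrites `localhost`.

```shell
# Sketch: check that the Ollama server is reachable and list the models it offers.
# /api/tags and port 11434 are standard Ollama defaults.
curl http://localhost:11434/api/tags

# From inside a container, localhost points at the container itself; a common
# workaround (an assumption here, not confirmed OpenRAG behavior) is Docker's
# host gateway alias:
curl http://host.docker.internal:11434/api/tags
```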