From a7d152eca23fc5d88492a807b50644ed01e1e258 Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Fri, 26 Sep 2025 15:59:22 -0400
Subject: [PATCH 1/5] model-selection

---
 docs/docs/get-started/install.mdx | 38 +++++++++++++++++++++++++++++--
 1 file changed, 36 insertions(+), 2 deletions(-)

diff --git a/docs/docs/get-started/install.mdx b/docs/docs/get-started/install.mdx
index 67f4ae89..1b8f625e 100644
--- a/docs/docs/get-started/install.mdx
+++ b/docs/docs/get-started/install.mdx
@@ -79,8 +79,42 @@ For more information on virtual environments, see [uv](https://docs.astral.sh/uv
    Command completed successfully
    ```
 
-7. To open the OpenRAG application, click **Open App** or press 6.
-8. Continue with the Quickstart.
+7. To open the OpenRAG application, click **Open App**, press 6, or navigate to `http://localhost:3000`.
+   The application opens.
+8. Select your language model and embedding model provider, and complete the required fields.
+   **Your provider can only be selected once, and you must use the same provider for your language model and embedding model.
+   To change your selection, you must restart OpenRAG.**
+
+
+
+   9. You already entered a value for `OPENAI_API_KEY` in the TUI in Step 5, so enable **Get API key from environment variable**.
+   10. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
+   11. To load 2 sample PDFs, enable **Sample dataset**.
+       This is recommended, but not required.
+   12. Click **Complete**.
+
+
+
+   9. Complete the fields for **watsonx.ai API Endpoint**, **IBM API key**, and **IBM Project ID**.
+      These values are found in your IBM watsonx deployment.
+   10. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
+   11. To load 2 sample PDFs, enable **Sample dataset**.
+       This is recommended, but not required.
+   12. Click **Complete**.
+
+
+
+   9. Enter your Ollama server's base URL address.
+      The default Ollama server address is `http://localhost:11434`.
+   10. Select the **Embedding Model** and **Language Model** your Ollama server is running.
+   11. To load 2 sample PDFs, enable **Sample dataset**.
+       This is recommended, but not required.
+   12. Click **Complete**.
+
+
+
+
+13. Continue with the [Quickstart](/quickstart).
 
 ### Advanced Setup {#advanced-setup}
 

From bc9f181abd7a0bc75245cb807bc64d6500645d75 Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Fri, 26 Sep 2025 16:31:35 -0400
Subject: [PATCH 2/5] use-local-model

---
 docs/docs/get-started/install.mdx    |  3 +++
 docs/docs/get-started/quickstart.mdx | 24 ++++++++++--------------
 2 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/docs/docs/get-started/install.mdx b/docs/docs/get-started/install.mdx
index 1b8f625e..68a935fa 100644
--- a/docs/docs/get-started/install.mdx
+++ b/docs/docs/get-started/install.mdx
@@ -106,7 +106,10 @@ For more information on virtual environments, see [uv](https://docs.astral.sh/uv
    9. Enter your Ollama server's base URL address.
       The default Ollama server address is `http://localhost:11434`.
+      Since OpenRAG is running in a container, you may need to change `localhost` to access services outside of the container. For example, change `http://localhost:11434` to `http://host.docker.internal:11434` to connect to Ollama.
+      OpenRAG automatically sends a test connection to your Ollama server to confirm connectivity.
    10. Select the **Embedding Model** and **Language Model** your Ollama server is running.
+       OpenRAG automatically lists the available models from your Ollama server.
    11. To load 2 sample PDFs, enable **Sample dataset**.
       This is recommended, but not required.
    12. Click **Complete**.
diff --git a/docs/docs/get-started/quickstart.mdx b/docs/docs/get-started/quickstart.mdx
index 68d15aef..27361200 100644
--- a/docs/docs/get-started/quickstart.mdx
+++ b/docs/docs/get-started/quickstart.mdx
@@ -11,17 +11,21 @@ Get started with OpenRAG by loading your knowledge, swapping out your language m
 
 ## Prerequisites
 
-- Install and start OpenRAG
+- [Install and start OpenRAG](/install)
+- [Langflow API key](/)
 
 ## Find your way around
 
 1. In OpenRAG, click
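The container-networking note added in PATCH 2/5 (rewriting `localhost` to `host.docker.internal` so a containerized OpenRAG can reach a host-side Ollama server) can be sketched as below. The helper name `resolve_ollama_url` is hypothetical, for illustration only, and is not OpenRAG's actual connection code:

```python
def resolve_ollama_url(base_url: str, in_container: bool) -> str:
    """Rewrite a localhost Ollama URL so it resolves from inside a Docker container.

    Hypothetical helper: OpenRAG's real connection logic may differ.
    """
    if not in_container:
        return base_url
    # Inside a container, "localhost" is the container itself, not the host
    # machine running Ollama, so point at Docker's host alias instead.
    for host in ("localhost", "127.0.0.1"):
        base_url = base_url.replace(host, "host.docker.internal")
    return base_url


print(resolve_ollama_url("http://localhost:11434", in_container=True))
# → http://host.docker.internal:11434
```

To confirm connectivity manually, a GET request to Ollama's `/api/tags` endpoint returns the models installed on the server, which matches the "automatically lists the available models" behavior the patch describes.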