import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

## Application onboarding

The first time you start OpenRAG, whether you configure it with the TUI or a `.env` file, it's recommended that you complete application onboarding.
To skip onboarding, click **Skip onboarding**.

:::warning
Most values from onboarding can be changed later on the OpenRAG **Settings** page, but there are important restrictions.

The **language model provider** and **embedding model provider** can only be selected during onboarding.
To change your provider selection later, you must [reinstall OpenRAG](/install#reinstall).

You must use the same provider for your language model and embedding model, unless you're using Ollama.
:::

Choose one LLM provider, and then complete only the steps for that provider:

<Tabs groupId="Provider">
<TabItem value="OpenAI" label="OpenAI" default>
1. Enable **Get API key from environment variable** to automatically fill in your key from the TUI-generated `.env` file.
Alternatively, paste an OpenAI API key into the field.
For an example of the relevant `.env` entry, see the sketch after these steps.
2. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
3. To load two sample PDFs, enable **Sample dataset**.
This is recommended but not required.
4. Click **Complete**.
5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
6. Continue with the [Quickstart](/quickstart).
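
A minimal `.env` sketch for the environment-variable option might look like the following.
`OPENAI_API_KEY` is the conventional variable name used by OpenAI tooling; confirm the exact key name in the `.env` file that the TUI generates for you, since it may differ.

```bash
# Hypothetical .env entry: confirm the exact variable name in your TUI-generated .env file.
OPENAI_API_KEY=sk-...
```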
</TabItem>
<TabItem value="IBM watsonx.ai" label="IBM watsonx.ai">
1. Complete the **watsonx.ai API Endpoint**, **IBM Project ID**, and **IBM API key** fields.
You can find these values in your IBM watsonx.ai deployment.
For an illustration of what these values look like, see the example after these steps.
2. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
3. To load two sample PDFs, enable **Sample dataset**.
This is recommended but not required.
4. Click **Complete**.
5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
6. Continue with the [Quickstart](/quickstart).
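
The exact values come from your own watsonx.ai project, but as a rough, hypothetical illustration they typically take this shape: the endpoint is a regional watsonx.ai URL, the project ID is a GUID, and the API key is issued by IBM Cloud IAM.

```bash
# Illustrative placeholders only; the variable names here are hypothetical and the
# real values come from your watsonx.ai project and IBM Cloud account.
WATSONX_API_ENDPOINT="https://us-south.ml.cloud.ibm.com"
WATSONX_PROJECT_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
WATSONX_API_KEY="..."
```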
</TabItem>
<TabItem value="Ollama" label="Ollama">
:::tip
Ollama is not included with OpenRAG. To install Ollama, see the [Ollama documentation](https://docs.ollama.com/).
:::
1. Enter your Ollama server's base URL.
The default Ollama server address is `http://localhost:11434`.
OpenRAG automatically rewrites `localhost` so that it can reach services outside of its container, and sends a test request to your Ollama server to confirm connectivity.
To check the server yourself, see the example commands after these steps.
2. Select the **Embedding Model** and **Language Model** your Ollama server is running.
OpenRAG retrieves the available models from your Ollama server.
3. To load two sample PDFs, enable **Sample dataset**.
This is recommended but not required.
4. Click **Complete**.
5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**.
6. Continue with the [Quickstart](/quickstart).
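
To confirm the connection yourself, you can query the Ollama API directly.
The commands below are a sketch; `nomic-embed-text` and `llama3.1` are example models, not requirements.

```bash
# List the models your Ollama server currently serves.
curl http://localhost:11434/api/tags

# If no embedding or language models are listed, pull one of each first.
# These model names are examples; use any Ollama embedding and chat models you prefer.
ollama pull nomic-embed-text
ollama pull llama3.1
```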
</TabItem>
</Tabs>