import Icon from "@site/src/components/icon/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

## Application onboarding

The first time you start OpenRAG, whether you use the TUI or a `.env` file, you must complete application onboarding.

:::warning
Most values from onboarding can be changed later on the OpenRAG **Settings** page, but there are important restrictions.

The **language model provider** and **embeddings model provider** can be selected only at onboarding. To change your provider selection later, you must [reinstall OpenRAG](/install#reinstall).

You must use the same provider for your language model and embedding model, unless you're using Ollama.
:::

Choose one LLM provider, and then complete only the steps for that provider:

<Tabs>
<TabItem value="openai" label="OpenAI">

1. Enable **Get API key from environment variable** to automatically enter your key from the TUI-generated `.env` file. Alternatively, paste an OpenAI API key into the field.
2. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
3. To load two sample PDFs, enable **Sample dataset**. This is recommended, but not required.
4. Click **Complete**.
5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**. Alternatively, click

</TabItem>
<TabItem value="watsonx" label="IBM watsonx.ai">

1. Complete the **watsonx.ai API Endpoint**, **IBM Project ID**, and **IBM API key** fields. You can find these values in your IBM watsonx deployment.
2. Under **Advanced settings**, select your **Embedding Model** and **Language Model**.
3. To load two sample PDFs, enable **Sample dataset**. This is recommended, but not required.
4. Click **Complete**.
5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**. Alternatively, click

</TabItem>
<TabItem value="ollama" label="Ollama">

:::tip
Ollama is not included with OpenRAG. To install Ollama, see the [Ollama documentation](https://docs.ollama.com/).
:::

1. Enter your Ollama server's base URL. The default Ollama server address is `http://localhost:11434`.

   OpenRAG automatically transforms `localhost` so that it can reach services running outside of the container, and it sends a test request to your Ollama server to confirm connectivity.
2. Select the **Embedding Model** and **Language Model** that your Ollama server is running. OpenRAG retrieves the available models from your Ollama server.
3. To load two sample PDFs, enable **Sample dataset**. This is recommended, but not required.
4. Click **Complete**.
5. To complete the onboarding tasks, click **What is OpenRAG**, and then click **Add a Document**. Alternatively, click

</TabItem>
</Tabs>
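To illustrate the `localhost` transformation described for the Ollama base URL, here is a minimal sketch. It assumes Docker's conventional `host.docker.internal` alias for reaching the host from inside a container; the function name and the exact hostname OpenRAG substitutes are illustrative, not OpenRAG's actual implementation.

```python
from urllib.parse import urlparse, urlunparse


def rewrite_for_container(base_url: str, host_alias: str = "host.docker.internal") -> str:
    """Rewrite a localhost URL so a containerized app can reach the host machine.

    ``host.docker.internal`` is Docker's conventional host alias; the hostname
    OpenRAG actually substitutes may differ.
    """
    parts = urlparse(base_url)
    if parts.hostname in ("localhost", "127.0.0.1"):
        # Keep the original port (Ollama defaults to 11434) while swapping the host.
        netloc = f"{host_alias}:{parts.port}" if parts.port else host_alias
        parts = parts._replace(netloc=netloc)
    return urlunparse(parts)


# The default Ollama address becomes reachable from inside the container:
# rewrite_for_container("http://localhost:11434") -> "http://host.docker.internal:11434"
```

A connectivity test like the one OpenRAG performs could then be a simple HTTP request against the rewritten URL (Ollama's `GET /api/tags` endpoint, for example, returns the models the server has available).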