diff --git a/docs/docs/_partial-docker-remove-and-cleanup-steps.mdx b/docs/docs/_partial-docker-remove-and-cleanup-steps.mdx
index ccc91713..0114ea46 100644
--- a/docs/docs/_partial-docker-remove-and-cleanup-steps.mdx
+++ b/docs/docs/_partial-docker-remove-and-cleanup-steps.mdx
@@ -1,4 +1,4 @@
-2. Remove all containers, including stopped containers:
+1. Remove all containers, including stopped containers:
 
    ```bash title="Docker"
    docker rm --force $(docker ps -aq)
@@ -8,7 +8,7 @@
    podman rm --all --force
    ```
 
-3. Remove all images:
+2. Remove all images:
 
    ```bash title="Docker"
    docker rmi --force $(docker images -q)
@@ -18,7 +18,7 @@
    podman rmi --all --force
    ```
 
-4. Remove all volumes:
+3. Remove all volumes:
 
    ```bash title="Docker"
    docker volume prune --force
@@ -28,7 +28,7 @@
    podman volume prune --force
    ```
 
-5. Remove all networks except the default network:
+4. Remove all networks except the default network:
 
    ```bash title="Docker"
    docker network prune --force
@@ -38,7 +38,7 @@
    podman network prune --force
    ```
 
-6. Clean up any leftover data:
+5. Clean up any leftover data:
 
    ```bash title="Docker"
    docker system prune --all --force --volumes
diff --git a/docs/docs/_partial-onboarding.mdx b/docs/docs/_partial-onboarding.mdx
index c4eee73e..6ffa32c2 100644
--- a/docs/docs/_partial-onboarding.mdx
+++ b/docs/docs/_partial-onboarding.mdx
@@ -23,9 +23,7 @@ You only need to complete onboarding for your preferred providers.
 Anthropic doesn't provide embedding models. If you select Anthropic for your language model, you must select a different provider for the embedding model.
 :::
 
-1. Enter your Anthropic API key, or enable **Get API key from environment variable** to pull the key from your [OpenRAG `.env` file](/reference/configuration).
-
-   If you set `ANTHROPIC_API_KEY` in your OpenRAG `.env` file, this value can be populated automatically.
+1. Enter your Anthropic API key, or enable **Use environment API key** to pull the key from your [OpenRAG `.env` file](/reference/configuration).
 
 2. Under **Advanced settings**, select the language model that you want to use.
 
@@ -46,24 +44,26 @@ The overview demonstrates some basic functionality that is covered in the [quick
-1. Use the values from your IBM watsonx deployment for the **watsonx.ai API Endpoint**, **IBM Project ID**, and **IBM API key** fields.
+1. For **watsonx.ai API Endpoint**, select the base URL for your watsonx.ai model deployment.
 
-   If you set `WATSONX_API_KEY`, `WATSONX_API_URL`, or `WATSONX_PROJECT_ID` in your [OpenRAG `.env` file](/reference/configuration), these values can be populated automatically.
+2. Enter your watsonx.ai deployment's project ID and API key.
 
-2. Under **Advanced settings**, select the language model that you want to use.
+   You can enable **Use environment API key** to pull the key from your [OpenRAG `.env` file](/reference/configuration).
 
-3. Click **Complete**.
+3. Under **Advanced settings**, select the language model that you want to use.
 
-4. Select a provider for embeddings, provide the required information, and then select the embedding model you want to use.
+4. Click **Complete**.
+
+5. Select a provider for embeddings, provide the required information, and then select the embedding model you want to use.
 
    For information about another provider's credentials and settings, see the instructions for that provider.
 
-5. Click **Complete**.
+6. Click **Complete**.
 
    After you configure the embedding model, OpenRAG uses your credentials and models to ingest some [initial documents](/knowledge#default-documents). This tests the connection, and it allows you to ask OpenRAG about itself in the [**Chat**](/chat).
 
   If there is a problem with the model configuration, an error occurs and you are redirected back to the application onboarding screen. Verify that the credentials are valid and have access to the selected model, and then click **Complete** to retry ingestion.
 
-6. Continue through the overview slides for a brief introduction to OpenRAG, or click
@@ -81,15 +81,15 @@ You must deploy Ollama separately if you want to use Ollama as a model provider.
 
    If you are running a remote server, it must be accessible from your OpenRAG deployment.
 
-2. In OpenRAG onboarding, connect to your Ollama server:
+2. In the OpenRAG onboarding dialog, enter your Ollama server's base URL:
 
    * **Local Ollama server**: Enter your Ollama server's base URL and port. The default Ollama server address is `http://localhost:11434`.
    * **Ollama Cloud**: Because Ollama Cloud models run at the same address as a local Ollama server and automatically offload to Ollama's cloud service, you can use the same base URL and port as you would for a local Ollama server. The default address is `http://localhost:11434`.
   * **Remote server**: Enter your remote Ollama server's base URL and port, such as `http://your-remote-server:11434`.
 
-   If the connection succeeds, OpenRAG populates the model lists with the server's available models.
+3. Select the language model that your Ollama server is running.
 
-3. Select the model that your Ollama server is running.
+   If your server isn't running any language models, you must either deploy a language model on your Ollama server, or use another provider for the language model.
 
    Language model and embedding model selections are independent. You can use the same or different servers for each model.
 
@@ -98,20 +98,23 @@ You must deploy Ollama separately if you want to use Ollama as a model provider.
 4. Click **Complete**.
 
-   After you configure the embedding model, OpenRAG uses the address and models to ingest some [initial documents](/knowledge#default-documents). This tests the connection, and it allows you to ask OpenRAG about itself in the [**Chat**](/chat).
+5. Select a provider for embeddings, provide the required information, and then select the embedding model you want to use.
+
+   For information about another provider's credentials and settings, see the instructions for that provider.
+
+6. Click **Complete**.
+
+   After you configure the embedding model, OpenRAG uses your credentials and models to ingest some [initial documents](/knowledge#default-documents). This tests the connection, and it allows you to ask OpenRAG about itself in the [**Chat**](/chat).
 
    If there is a problem with the model configuration, an error occurs and you are redirected back to the application onboarding screen. Verify that the server address is valid, and that the selected model is running on the server. Then, click **Complete** to retry ingestion.
 
-5. Continue through the overview slides for a brief introduction to OpenRAG, or click
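
The onboarding changes above replace inline mentions of specific environment variables with the **Use environment API key** toggle, which reads from the [OpenRAG `.env` file](/reference/configuration). As a hedged sketch, the relevant entries might look like the following; the variable names are the ones referenced in the patch, and all values are placeholders:

```shell
# Hypothetical OpenRAG .env excerpt (placeholder values).
ANTHROPIC_API_KEY=your-anthropic-api-key
WATSONX_API_KEY=your-watsonx-api-key
WATSONX_API_URL=https://your-watsonx-endpoint
WATSONX_PROJECT_ID=your-watsonx-project-id
```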
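
For review purposes, the renumbered cleanup steps in `_partial-docker-remove-and-cleanup-steps.mdx` can be sketched as a single sequence. This is an illustrative sketch, not part of the patch: the `ENGINE` and `DRY_RUN` variables are hypothetical conveniences, and the final Podman prune is an assumed counterpart, since the patch excerpt ends at the Docker variant.

```shell
#!/usr/bin/env sh
# Sketch of the doc's cleanup sequence. ENGINE selects docker or podman;
# DRY_RUN=1 (the default here, for safety) records and prints each command
# instead of executing it. Both variables are illustrative, not from the doc.
ENGINE="${ENGINE:-docker}"
DRY_RUN="${DRY_RUN:-1}"
PLAN=""

run() {
  PLAN="$PLAN$*; "
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    sh -c "$*"
  fi
}

if [ "$ENGINE" = "docker" ]; then
  run 'docker rm --force $(docker ps -aq)'           # 1. containers, incl. stopped
  run 'docker rmi --force $(docker images -q)'       # 2. images
  run 'docker volume prune --force'                  # 3. volumes
  run 'docker network prune --force'                 # 4. non-default networks
  run 'docker system prune --all --force --volumes'  # 5. leftover data
else
  run 'podman rm --all --force'
  run 'podman rmi --all --force'
  run 'podman volume prune --force'
  run 'podman network prune --force'
  run 'podman system prune --all --force --volumes'  # assumed Podman counterpart
fi
```

Set `DRY_RUN=0` to actually execute the commands; the dry-run default makes the destructive sequence safe to inspect first.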