edit instructions while testing and update TUI img

April M 2025-12-18 18:00:06 -08:00
parent 37a914f162
commit 6e66b6f5fd
10 changed files with 92 additions and 80 deletions


@@ -22,9 +22,7 @@ You only need to complete onboarding for your preferred providers.
Anthropic doesn't provide embedding models. If you select Anthropic for your language model, you must select a different provider for the embedding model.
:::
1. Enter your Anthropic API key, or enable **Get API key from environment variable** to pull the key from your [OpenRAG `.env` file](/reference/configuration).
If you set `ANTHROPIC_API_KEY` in your OpenRAG `.env` file, this value can be populated automatically.
1. Enter your Anthropic API key, or enable **Use environment API key** to pull the key from your [OpenRAG `.env` file](/reference/configuration).
2. Under **Advanced settings**, select the language model that you want to use.
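If you prefer the environment-variable route, the corresponding entry in your OpenRAG `.env` file is a single line; the value shown here is a placeholder, not a real credential:

```
ANTHROPIC_API_KEY=your-anthropic-api-key
```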
@@ -45,24 +43,26 @@ The overview demonstrates some basic functionality that is covered in the [quick
</TabItem>
<TabItem value="IBM watsonx.ai" label="IBM watsonx.ai">
1. Use the values from your IBM watsonx deployment for the **watsonx.ai API Endpoint**, **IBM Project ID**, and **IBM API key** fields.
1. For **watsonx.ai API Endpoint**, select the base URL for your watsonx.ai model deployment.
If you set `WATSONX_API_KEY`, `WATSONX_API_URL`, or `WATSONX_PROJECT_ID` in your [OpenRAG `.env` file](/reference/configuration), these values can be populated automatically.
2. Enter your watsonx.ai deployment's project ID and API key.
2. Under **Advanced settings**, select the language model that you want to use.
You can enable **Use environment API key** to pull the key from your [OpenRAG `.env` file](/reference/configuration).
3. Click **Complete**.
3. Under **Advanced settings**, select the language model that you want to use.
4. Select a provider for embeddings, provide the required information, and then select the embedding model you want to use.
4. Click **Complete**.
5. Select a provider for embeddings, provide the required information, and then select the embedding model you want to use.
For information about another provider's credentials and settings, see the instructions for that provider.
5. Click **Complete**.
6. Click **Complete**.
After you configure the embedding model, OpenRAG uses your credentials and models to ingest some [initial documents](/knowledge#default-documents). This tests the connection, and it allows you to ask OpenRAG about itself in the [**Chat**](/chat).
If there is a problem with the model configuration, an error occurs and you are redirected back to the application onboarding screen.
Verify that the credentials are valid and have access to the selected model, and then click **Complete** to retry ingestion.
6. Continue through the overview slides for a brief introduction to OpenRAG, or click <Icon name="ArrowRight" aria-hidden="true"/> **Skip overview**.
7. Continue through the overview slides for a brief introduction to OpenRAG, or click <Icon name="ArrowRight" aria-hidden="true"/> **Skip overview**.
The overview demonstrates some basic functionality that is covered in the [quickstart](/quickstart#chat-with-documents) and in other parts of the OpenRAG documentation.
</TabItem>
@@ -75,22 +75,22 @@ Ollama isn't installed with OpenRAG. You must install it separately if you want
Using Ollama as your language and embedding model provider offers greater flexibility and configuration options for hosting models, but it can be too advanced for new users.
The recommendations given here are a reasonable starting point for users with at least one GPU and experience running LLMs locally.
The OpenRAG team recommends the OpenAI `gpt-oss:20b` lanuage model and the [`nomic-embed-text`](https://ollama.com/library/nomic-embed-text) embedding model.
The OpenRAG team recommends the OpenAI `gpt-oss:20b` language model and the [`nomic-embed-text`](https://ollama.com/library/nomic-embed-text) embedding model.
However, `gpt-oss:20b` uses 16GB of RAM, so consider using Ollama Cloud or running Ollama on a remote machine.
1. [Install Ollama locally or on a remote server](https://docs.ollama.com/), or [run models in Ollama Cloud](https://docs.ollama.com/cloud).
If you are running a remote server, it must be accessible from your OpenRAG deployment.
2. In OpenRAG onboarding, connect to your Ollama server:
2. In the OpenRAG onboarding dialog, enter your Ollama server's base URL:
* **Local Ollama server**: Enter your Ollama server's base URL and port. The default Ollama server address is `http://localhost:11434`.
* **Ollama Cloud**: Because Ollama Cloud models run at the same address as a local Ollama server and automatically offload to Ollama's cloud service, you can use the same base URL and port as you would for a local Ollama server. The default address is `http://localhost:11434`.
* **Remote server**: Enter your remote Ollama server's base URL and port, such as `http://your-remote-server:11434`.
If the connection succeeds, OpenRAG populates the model lists with the server's available models.
3. Select the language model that your Ollama server is running.
3. Select the model that your Ollama server is running.
If your server isn't running any language models, you must either deploy a language model on your Ollama server, or use another provider for the language model.
Language model and embedding model selections are independent.
You can use the same or different servers for each model.
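As a rough pre-flight check before starting onboarding, you can confirm that the server responds; this sketch assumes the default port, and `/api/tags` is the Ollama endpoint that lists installed models:

```shell
# Check that an Ollama server is reachable; adjust OLLAMA_URL as needed.
OLLAMA_URL="${OLLAMA_URL:-http://localhost:11434}"
if curl -fsS "$OLLAMA_URL/api/tags" >/dev/null 2>&1; then
  echo "Ollama reachable at $OLLAMA_URL"
else
  echo "Cannot reach Ollama at $OLLAMA_URL" >&2
fi
```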
@@ -99,18 +99,23 @@ However, `gpt-oss:20b` uses 16GB of RAM, so consider using Ollama Cloud or runni
4. Click **Complete**.
After you configure the embedding model, OpenRAG uses the address and models to ingest some [initial documents](/knowledge#default-documents). This tests the connection, and it allows you to ask OpenRAG about itself in the [**Chat**](/chat).
5. Select a provider for embeddings, provide the required information, and then select the embedding model you want to use.
For information about another provider's credentials and settings, see the instructions for that provider.
6. Click **Complete**.
After you configure the embedding model, OpenRAG uses your credentials and models to ingest some [initial documents](/knowledge#default-documents). This tests the connection, and it allows you to ask OpenRAG about itself in the [**Chat**](/chat).
If there is a problem with the model configuration, an error occurs and you are redirected back to the application onboarding screen.
Verify that the server address is valid, and that the selected model is running on the server.
Then, click **Complete** to retry ingestion.
5. Continue through the overview slides for a brief introduction to OpenRAG, or click <Icon name="ArrowRight" aria-hidden="true"/> **Skip overview**.
7. Continue through the overview slides for a brief introduction to OpenRAG, or click <Icon name="ArrowRight" aria-hidden="true"/> **Skip overview**.
The overview demonstrates some basic functionality that is covered in the [quickstart](/quickstart#chat-with-documents) and in other parts of the OpenRAG documentation.
</TabItem>
<TabItem value="OpenAI" label="OpenAI (default)">
1. Enter your OpenAI API key, or enable **Get API key from environment variable** to pull the key from your [OpenRAG `.env` file](/reference/configuration).
1. Enter your OpenAI API key, or enable **Use environment API key** to pull the key from your [OpenRAG `.env` file](/reference/configuration).
If you set `OPENAI_API_KEY` in your OpenRAG `.env` file, this value can be populated automatically.


@@ -30,7 +30,7 @@ If OpenRAG detects OAuth credentials during setup, it recommends **Advanced Setu
3. Optional: Under **API Keys**, enter your model provider credentials, or leave these fields empty if you want to configure model provider credentials during the application onboarding process.
There is no material difference between providing these values now and providing them during the [application onboarding process](#application-onboarding).
If you provide a credential now, it can be populated automatically during the application onboarding process if you enable the **Get API key from environment variable** option.
If you provide a credential now, it can be populated automatically during the application onboarding process if you enable the **Use environment API key** option.
OpenRAG's core functionality requires access to language and embedding models.
By default, OpenRAG uses OpenAI models.
@@ -46,7 +46,7 @@ If OpenRAG detects OAuth credentials during setup, it recommends **Advanced Setu
Your passwords and API keys, if provided, are stored in the [OpenRAG `.env` file](/reference/configuration) at `~/.openrag/tui`.
If you modified any credentials that were pulled from an existing `.env` file, those values are updated in the `.env` file.
6. Click **Start OpenRAG** to start the OpenRAG container services.
6. Click **Start OpenRAG** to start the OpenRAG services.
This process can take some time while OpenRAG pulls and runs the container images.
If all services start successfully, the TUI prints a confirmation message:
@@ -56,12 +56,9 @@ If OpenRAG detects OAuth credentials during setup, it recommends **Advanced Setu
Command completed successfully
```
8. Launch the OpenRAG application:
7. Click **Close**, and then click **Launch OpenRAG** or navigate to `localhost:3000` in your browser.
* From the TUI main menu, click **Open App**.
* In your browser, navigate to `localhost:3000`.
9. Continue with the [application onboarding process](#application-onboarding).
8. Continue with the [application onboarding process](#application-onboarding).
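Before continuing, a quick way to confirm the frontend is serving is a request against the default port (adjust `OPENRAG_URL` if you changed it):

```shell
# Prints a success message once the OpenRAG frontend answers on port 3000.
URL="${OPENRAG_URL:-http://localhost:3000}"
if curl -fsS -o /dev/null "$URL"; then
  echo "OpenRAG frontend is up at $URL"
else
  echo "OpenRAG frontend is not responding at $URL yet" >&2
fi
```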
</TabItem>
<TabItem value="Advanced setup" label="Advanced setup">
@@ -80,7 +77,7 @@ If OpenRAG detects OAuth credentials during setup, it recommends **Advanced Setu
3. Optional: Under **API Keys**, enter your model provider credentials, or leave the **OpenAI**, **Anthropic**, **Ollama**, and **IBM watsonx.ai** fields empty if you want to configure model provider credentials during the application onboarding process.
There is no material difference between providing these values now and providing them during the [application onboarding process](#application-onboarding).
If you provide a credential now, it can be populated automatically during the application onboarding process if you enable the **Get API key from environment variable** option.
If you provide a credential now, it can be populated automatically during the application onboarding process if you enable the **Use environment API key** option.
OpenRAG's core functionality requires access to language and embedding models.
By default, OpenRAG uses OpenAI models.
@@ -109,7 +106,7 @@ These are the URLs your OAuth provider will use to redirect users back to OpenRA
Your passwords, API key, and OAuth credentials, if provided, are stored in the [OpenRAG `.env` file](/reference/configuration) at `~/.openrag/tui`.
If you modified any credentials that were pulled from an existing `.env` file, those values are updated in the `.env` file.
8. Click **Start OpenRAG** to start the OpenRAG container services.
8. Click **Start OpenRAG** to start the OpenRAG services.
This process can take some time while OpenRAG pulls and runs the container images.
If all services start successfully, the TUI prints a confirmation message:
@@ -119,10 +116,7 @@ These are the URLs your OAuth provider will use to redirect users back to OpenRA
Command completed successfully
```
9. Launch the OpenRAG application:
* From the TUI main menu, click **Open App**.
* In your browser, navigate to `localhost:3000`.
9. Click **Close**, and then click **Launch OpenRAG** or navigate to `localhost:3000` in your browser.
10. If you enabled OAuth connectors, you must sign in to your OAuth provider before being redirected to your OpenRAG instance.


@@ -66,19 +66,19 @@ To enable multiple connectors, you must register an app and generate credentials
<Tabs>
<TabItem value="TUI" label="TUI-managed services" default>
If you use the [Terminal User Interface (TUI)](/tui) to manage your OpenRAG services, enter OAuth credentials in the **Advanced Setup** menu.
If you use the [Terminal User Interface (TUI)](/tui) to manage your OpenRAG services, enter OAuth credentials on the **Advanced Setup** page.
You can do this during [installation](/install#setup), or you can add the credentials afterwards:
1. If OpenRAG is running, open the TUI's **Status** menu, and then click **Stop Services**.
1. If OpenRAG is running, click **Stop All Services** in the TUI.
2. Open the **Advanced Setup** menu, and then add the OAuth credentials for the cloud storage providers that you want to use under **API Keys**:
2. Open the **Advanced Setup** page, and then add the OAuth credentials for the cloud storage providers that you want to use under **API Keys**:
* **Google**: Provide your Google OAuth Client ID and Google OAuth Client Secret. You can generate these in the [Google Cloud Console](https://console.cloud.google.com/apis/credentials). For more information, see the [Google OAuth client documentation](https://developers.google.com/identity/protocols/oauth2).
* **Microsoft**: For the Microsoft OAuth Client ID and Microsoft OAuth Client Secret, provide [Azure application registration credentials for SharePoint and OneDrive](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/app-registration?view=odsp-graph-online). For more information, see the [Microsoft Graph OAuth client documentation](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/graph-oauth).
* **Amazon**: Provide your AWS Access Key ID and AWS Secret Access Key with access to your S3 instance. For more information, see the AWS documentation on [Configuring access to AWS applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-applications.html).
3. The TUI presents redirect URIs for your OAuth app that you must register with your OAuth provider.
These are the URLs your OAuth provider will redirect back to after users authenticate and grant access to their cloud storage.
3. Register the redirect URIs shown in the TUI in your OAuth provider.
These are the URLs your OAuth provider will use to redirect users back to OpenRAG after they sign in.
4. Click **Save Configuration** to add the OAuth credentials to your [OpenRAG `.env` file](/reference/configuration).
@@ -126,6 +126,9 @@ You can do this during [initial set up](/docker#setup), or you can add the crede
<PartialDockerComposeUp />
5. Access the OpenRAG frontend at `http://localhost:3000`.
You should be prompted to sign in to your OAuth provider before being redirected to your OpenRAG instance.
</TabItem>
</Tabs>


@@ -19,25 +19,23 @@ If you [installed OpenRAG](/install-options) with the automated installer script
For [self-managed deployments](/docker), run Docker or Podman commands to manage your OpenRAG services.
## Monitor services
## Monitor services and view logs
<Tabs>
<TabItem value="TUI" label="TUI-managed services" default>
* **TUI Status menu**: In the **Status** menu, you can access streaming logs for all OpenRAG services.
Select the service you want to view, and then press <kbd>l</kbd>.
To copy the logs, click **Copy to Clipboard**.
In the TUI, click **Status** to access diagnostics and controls for all OpenRAG services, including container health, ports, and image versions.
* **TUI Diagnostics menu**: The TUI's **Diagnostics** menu provides health monitoring for your container runtimes and monitoring of your OpenSearch instance.
To view streaming logs, click the name of a service, and then press <kbd>l</kbd>.
* **Docling**: See [Stop, start, and inspect native services](#start-native-services).
For the Docling native service, see [Stop, start, and inspect native services](#start-native-services).
</TabItem>
<TabItem value="env" label="Self-managed services">
* **Containers**: Get container logs with [`docker compose logs`](https://docs.docker.com/reference/cli/docker/compose/logs/) or [`podman logs`](https://docs.podman.io/en/latest/markdown/podman-logs.1.html).
For self-managed container services, you can get container logs with [`docker compose logs`](https://docs.docker.com/reference/cli/docker/compose/logs/) or [`podman logs`](https://docs.podman.io/en/latest/markdown/podman-logs.1.html).
* **Docling**: See [Stop, start, and inspect native services](#start-native-services).
For the Docling native service, see [Stop, start, and inspect native services](#start-native-services).
</TabItem>
</Tabs>
@@ -47,10 +45,9 @@ To copy the logs, click **Copy to Clipboard**.
<Tabs>
<TabItem value="TUI" label="TUI-managed services" default>
In the TUI's **Status** menu, click **Stop Services** to stop all OpenRAG container-based services.
Then, click **Start Services** to restart the OpenRAG containers.
On the TUI's **Status** page, you can stop, start, and restart OpenRAG's container-based services.
When you click **Start Services**, the following processes are triggered:
When you click **Restart** or **Start Services**, the following processes are triggered:
1. OpenRAG automatically detects your container runtime, and then checks if your machine has compatible GPU support by checking for `CUDA`, `NVIDIA_SMI`, and Docker/Podman runtime support. This check determines which Docker Compose file OpenRAG uses because there are separate Docker Compose files for GPU and CPU deployments.
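The runtime selection can be approximated from a shell; this sketch only tests for `nvidia-smi` on the `PATH`, a simplification of OpenRAG's actual detection logic:

```shell
# Pick a compose profile the way OpenRAG roughly does: GPU if an NVIDIA
# toolchain is visible, CPU otherwise.
if command -v nvidia-smi >/dev/null 2>&1; then
  MODE="gpu"
else
  MODE="cpu"
fi
echo "Selected compose profile: $MODE"
```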
@@ -75,10 +72,13 @@ A _native service_ in OpenRAG is a service that runs locally on your machine, no
<Tabs>
<TabItem value="TUI" label="TUI-managed services" default>
From the TUI's **Status** menu, click **Native Services** to do the following:
On the TUI's **Status** page, you can stop, start, restart, and inspect OpenRAG's native services.
* View the service's status, port, and process ID (PID).
* Stop, start, and restart native services.
The **Native Services** section lists the status, port, and process ID (PID) for each native service.
To manage a native service, click the service's name, and then click **Stop**, **Start**, or **Restart**.
To view the logs for a native service, click the service's name, and then press <kbd>l</kbd>.
</TabItem>
<TabItem value="env" label="Self-managed services">
@@ -135,7 +135,7 @@ To reset your OpenRAG deployment _and_ delete all OpenRAG data, see [Reinstall O
<PartialExportFlows />
2. To destroy and recreate your OpenRAG containers, open the TUI's **Status** menu, and then click **Factory Reset**.
2. To destroy and recreate your OpenRAG containers, click **Status** in the TUI, and then click **Factory Reset**.
3. Repeat the [setup process](/install#setup) to restart the services and launch the OpenRAG app. Your OpenRAG passwords, OAuth credentials (if previously set), and onboarding configuration are restored from the `.env` file.


@@ -54,7 +54,12 @@ The script installs OpenRAG dependencies, including Docker or Podman, and then i
5. Use the default values for all other fields.
6. Click **Save Configuration**, and then click **Start OpenRAG**.
6. Click **Save Configuration**.
Your OpenRAG configuration and passwords are stored in an [OpenRAG `.env` file](/reference/configuration) that is created automatically at `~/.openrag/tui`.
OpenRAG container definitions are stored in the `docker-compose` files in the same directory.
7. Click **Start OpenRAG** to start the OpenRAG services.
This process can take some time while OpenRAG pulls and runs the container images.
If all services start successfully, the TUI prints a confirmation message:
@@ -64,12 +69,7 @@ The script installs OpenRAG dependencies, including Docker or Podman, and then i
Command completed successfully
```
Your OpenRAG configuration and passwords are stored in an [OpenRAG `.env` file](/reference/configuration) file that is created automatically at `~/.openrag/tui`.
Container definitions are stored in the `docker-compose` files in the same directory.
7. Under [**Native Services**](/manage-services), click **Start** to start the Docling service.
8. From the TUI main menu, click **Open App** to launch the OpenRAG application and start the application onboarding process.
8. Click **Close**, and then click **Launch OpenRAG** to access the OpenRAG application and start the application onboarding process.
9. For this quickstart, select the **OpenAI** model provider, enter your OpenAI API key, and then click **Complete**. Use the default settings for all other model options.


@@ -21,22 +21,22 @@ Destroyed containers and deleted data are lost and cannot be recovered after run
<PartialExportFlows />
2. In the TUI's **Status** menu, click **Factory Reset** to [reset your OpenRAG containers](/manage-services#reset-containers).
2. In the TUI, click **Status**, and then click **Factory Reset** to [reset your OpenRAG containers](/manage-services#reset-containers).
<PartialFactorResetWarning />
2. Exit the TUI with <kbd>q</kbd>.
3. Press <kbd>Esc</kbd> to close the **Status** page, and then press <kbd>q</kbd> to exit the TUI.
3. Optional: Delete or edit [OpenRAG's `.env` file](/reference/configuration), which is stored at `~/.openrag/tui`.
4. Optional: Delete or edit [OpenRAG's `.env` file](/reference/configuration), which is stored at `~/.openrag/tui`.
This file contains your OpenRAG configuration, including OpenRAG passwords, API keys, OAuth settings, and other environment variables. If you delete this file, the TUI automatically generates a new one after you repeat the setup and onboarding process. If you preserve this file, the TUI can read values from the existing `.env` file during setup and onboarding.
4. Optional: Remove any files from the `~/.openrag/documents` subdirectory that you don't want to reingest after redeploying the containers.
5. Optional: Remove any files from the `~/.openrag/documents` subdirectory that you don't want to reingest after redeploying the containers.
It is recommended that you preserve OpenRAG's [default documents](https://github.com/langflow-ai/openrag/tree/main/openrag-documents).
5. Restart the TUI with `uv run openrag` or `uvx openrag`.
6. Restart the TUI with `uv run openrag` or `uvx openrag`.
6. Repeat the [setup process](/install#setup) to configure OpenRAG and restart all services.
7. Repeat the [setup process](/install#setup) to configure OpenRAG and restart all services.
Then, launch the OpenRAG app and repeat the [application onboarding process](/install#application-onboarding).
## Reinstall self-managed containers with `docker compose` or `podman compose`


@@ -25,13 +25,22 @@ Keyboard shortcuts for additional menus are printed at the bottom of the TUI scr
## Manage services with the TUI
Use the TUI's **Status** and **Diagnostics** menus to access controls and information for your OpenRAG services.
Use the TUI's **Status** page to access controls and information for your OpenRAG services.
For more information, see [Manage OpenRAG services](/manage-services).
## Toggle GPU/CPU mode
You can toggle between GPU and CPU mode from within the TUI if your system has compatible GPU hardware and drivers installed.
In the TUI, click **Status**, and then click **Switch to GPU Mode** or **Switch to CPU Mode**.
This change requires restarting all OpenRAG services because each mode has its own `docker-compose` file.
## Exit the OpenRAG TUI
To exit the OpenRAG TUI, go to the TUI main menu, and then press <kbd>q</kbd>.
To exit the OpenRAG TUI, press <kbd>q</kbd> on the TUI main page.
Your OpenRAG containers continue to run until they are stopped.
Exiting the TUI doesn't stop your OpenRAG services.
Your OpenRAG services continue to run until they are stopped, either from within the TUI or by another process.
To restart the TUI, see [Access the TUI](#access-the-tui).


@@ -13,9 +13,9 @@ If you want to reset your OpenRAG containers without removing OpenRAG entirely,
## Uninstall TUI-managed deployments
If you used the [automated installer script](/install) or [`uvx`](/install-uvx) to install OpenRAG, clear your `uv` cache (`uv cache clean`) to remove the TUI environment, and then delete the directory containing your OpenRAG configuration files and data (where you would invoke OpenRAG).
If you used the [automated installer script](/install) or [`uvx`](/install-uvx) to install OpenRAG, clear your `uv` cache (`uv cache clean`) to remove the TUI environment, and then delete the `~/.openrag` directory.
If you used [`uv`](/install-uv) to install OpenRAG, run `uv remove openrag` in your Python project.
If you used [`uv`](/install-uv) to install OpenRAG, run `uv remove openrag` in your Python project, and then delete the `~/.openrag` directory.
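Putting the TUI-managed steps together, a cautious uninstall script might look like this; the `DRY_RUN` guard is an addition for safety, since deleting `~/.openrag` removes all OpenRAG configuration and data:

```shell
# Uninstall a TUI-managed OpenRAG deployment. Set DRY_RUN=0 to actually delete.
DRY_RUN="${DRY_RUN:-1}"
uv cache clean || echo "uv not found or cache already clean" >&2
if [ "$DRY_RUN" = "1" ]; then
  echo "Would remove: $HOME/.openrag"
else
  rm -rf "$HOME/.openrag"
fi
```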
## Uninstall self-managed deployments


@@ -14,7 +14,7 @@ Use these steps to upgrade your OpenRAG deployment to the latest version or a sp
If you modified the built-in flows or created custom flows in your OpenRAG Langflow instance, [export your flows](https://docs.langflow.org/concepts-flows-import) before upgrading.
This ensures that you won't lose your flows after upgrading, and you can reference the exported flows if there are any breaking changes in the new version.
## Upgrade TUI-managed installations
## Upgrade TUI-managed deployments
To upgrade OpenRAG, you need to upgrade the OpenRAG Python package, and then upgrade the OpenRAG containers.
@@ -24,12 +24,13 @@ This is a two-part process because upgrading the OpenRAG Python package updates
<PartialExportFlows />
2. To check for updates, open the TUI's **Status** menu, and then click **Upgrade**.
2. To check for updates, click **Status** in the TUI, and then click **Upgrade**.
3. If there is an update, stop all OpenRAG services.
In the **Status** menu, click **Stop Services**.
3. If there is an update available, press <kbd>Esc</kbd> to close the **Status** page, and then click **Stop All Services**.
4. Upgrade the OpenRAG Python package to the latest version from [PyPI](https://pypi.org/project/openrag/).
4. Press <kbd>q</kbd> to exit the TUI.
5. Upgrade the OpenRAG Python package to the latest version from [PyPI](https://pypi.org/project/openrag/).
The commands to upgrade the package depend on how you installed OpenRAG.
<Tabs>
@@ -112,7 +113,7 @@ The commands to upgrade the package depend on how you installed OpenRAG.
</TabItem>
</Tabs>
5. In the OpenRAG TUI, click **Start Services**, and then wait while the upgraded containers start.
6. In the OpenRAG TUI, click **Start Services**, and then wait while the services start.
When you start services after upgrading the Python package, OpenRAG runs `docker compose pull` to get the appropriate container images matching the version specified in your OpenRAG `.env` file. Then, it recreates the containers with the new images using `docker compose up -d --force-recreate`.
@@ -127,11 +128,9 @@ The commands to upgrade the package depend on how you installed OpenRAG.
If you get a `langflow container already exists` error during upgrade, see [Langflow container already exists during upgrade](/support/troubleshoot#langflow-container-already-exists-during-upgrade).
6. Under [**Native Services**](/manage-services), click **Start** to start the Docling service.
7. After the containers start, click **Close**, and then click **Launch OpenRAG**.
7. When the upgrade process is complete, you can close the **Status** window and continue using OpenRAG.
## Upgrade self-managed containers
## Upgrade self-managed deployments
<PartialExportFlows />
@@ -149,6 +148,8 @@ The commands to upgrade the package depend on how you installed OpenRAG.
By default, OpenRAG's `docker-compose` files pull the latest container images.
3. After the containers start, access the OpenRAG application at `http://localhost:3000`.
## See also
* [Manage OpenRAG services](/manage-services)


@@ -72,7 +72,7 @@ For example, on macOS, this is typically a user cache directory, such as `~/.cac
This cache can become stale, producing errors like missing dependencies.
1. [Exit the TUI](/tui).
1. If the TUI is open, press <kbd>q</kbd> to exit the TUI.
2. Clear the `uv` cache:
@@ -92,7 +92,7 @@ This cache can become stale, producing errors like missing dependencies.
uvx openrag
```
4. Click **Open App**, and then retry document ingestion.
4. Click **Launch OpenRAG**, and then retry document ingestion.
If you install OpenRAG with `uv`, dependencies are synced directly from your `pyproject.toml` file.
This should automatically install `easyocr` because `easyocr` is included as a dependency in OpenRAG's `pyproject.toml`.
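If you manage OpenRAG inside a `uv` project, re-syncing the environment is one way to pull in a missing dependency such as `easyocr`; run this sketch from the project directory (the fallback messages cover machines where `uv` is missing or no project is present):

```shell
# Re-sync project dependencies from pyproject.toml, including easyocr.
SYNCED="no"
if command -v uv >/dev/null 2>&1; then
  uv sync && SYNCED="yes" || echo "uv sync failed; run this inside your OpenRAG project" >&2
else
  echo "uv is not installed" >&2
fi
echo "Dependencies synced: $SYNCED"
```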