SSL troubleshooting

This commit is contained in:
April M 2025-11-24 14:18:48 -08:00
parent 363191edba
commit 3b325d1a1c
6 changed files with 155 additions and 96 deletions


@@ -40,7 +40,7 @@ This flow contains eight components connected together to chat with your data:
* The [**Agent** component](https://docs.langflow.org/agents) orchestrates the entire flow by deciding when to search the knowledge base, how to formulate search queries, and how to combine retrieved information with the user's question to generate a comprehensive response.
The **Agent** behaves according to the prompt in the **Agent Instructions** field.
* The [**Chat Input** component](https://docs.langflow.org/components-io) is connected to the Agent component's Input port. This allows the flow to be triggered by an incoming prompt from a user or application.
* The [**OpenSearch** component](https://docs.langflow.org/bundles-elastic#opensearch) is connected to the Agent component's Tools port. The agent might not use this database for every request; the agent only uses this connection if it decides the knowledge can help respond to the prompt.
* The [**Language Model** component](https://docs.langflow.org/components-models) is connected to the Agent component's Language Model port. The agent uses the connected LLM to reason through the request sent through Chat Input.
* The [**Embedding Model** component](https://docs.langflow.org/components-embedding-models) is connected to the OpenSearch component's Embedding port. This component converts text queries into vector representations that are compared with document embeddings stored in OpenSearch for semantic similarity matching. This gives your Agent's queries context.
* The [**Text Input** component](https://docs.langflow.org/components-io) is populated with the global variable `OPENRAG-QUERY-FILTER`.


@@ -27,7 +27,7 @@ To start or stop `docling serve` or any other native services, in the TUI main m
**Embedding model** determines which AI model is used to create vector embeddings. The default is the OpenAI `text-embedding-3-small` model.
**Chunk size** determines how large each text chunk is in number of characters.
Larger chunks yield more context per chunk, but can include irrelevant information. Smaller chunks yield more precise semantic search, but can lack context.
The default value of `1000` characters provides a good starting point that balances these considerations.
**Chunk overlap** controls the number of characters that overlap across chunk boundaries.
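As an illustration of how these two settings interact, the following shell sketch slices a sample string with a chunk size of 10 and an overlap of 2 (toy values, far smaller than the real defaults):

```bash
# Toy chunking sketch: each chunk starts (chunk_size - overlap)
# characters after the previous one, so adjacent chunks share
# `overlap` characters of context.
text="abcdefghijklmnopqrstuvwx"   # 24-character sample document
chunk_size=10
overlap=2
step=$((chunk_size - overlap))    # 8
for ((start = 0; start < ${#text}; start += step)); do
  echo "${text:start:chunk_size}"
done
```

This prints three chunks (`abcdefghij`, `ijklmnopqr`, `qrstuvwx`); note how each chunk repeats the last two characters of the previous one, preserving context across boundaries.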


@@ -83,7 +83,7 @@ The **Add Cloud Knowledge** page opens.
Select the files or folders you want and click **Select**.
You can select multiple files.
3. When your files are selected, click **Ingest Files**.
The ingestion process can take some time depending on the size of your documents.
4. When ingestion is complete, your documents are available in the Knowledge screen.
If ingestion fails, click **Status** to view the logged error.
@@ -151,7 +151,7 @@ The complete list of supported models is available at [`models_service.py` in th
You can use custom embedding models by specifying them in your configuration.
If you use an unknown embedding model, OpenRAG automatically falls back to `1536` dimensions and logs a warning. The system continues to work, but search quality can be affected if the actual model dimensions differ from `1536`.
The default embedding dimension is `1536` and the default model is `text-embedding-3-small`.
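The fallback behavior can be sketched in shell. The dimensions shown for the two OpenAI models are their documented sizes; the lookup table itself is only an illustration, not OpenRAG's actual implementation:

```bash
# Illustrative dimension lookup with a 1536-dimension fallback
# (requires bash 4+ for associative arrays).
declare -A MODEL_DIMS=(
  ["text-embedding-3-small"]=1536
  ["text-embedding-3-large"]=3072
)
model="my-custom-model"            # hypothetical unknown model
dim="${MODEL_DIMS[$model]:-1536}"  # unknown models fall back to 1536
echo "Using $dim dimensions for $model"
```

If the actual model produces vectors of a different size, searches against the `1536`-dimension index degrade, which is why OpenRAG logs a warning in this case.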


@@ -154,6 +154,8 @@ Choose an installation method based on your needs:
Continue with [Set up OpenRAG with the TUI](#setup).
If you encounter errors during installation, see [Troubleshoot OpenRAG](/support/troubleshoot).
## Set up OpenRAG with the TUI {#setup}
The TUI creates a `.env` file in your OpenRAG directory root and starts OpenRAG.
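The result is a plain key-value file. For example, a generated `.env` might contain entries like the following (illustrative values; the exact set of variables depends on your configuration):

```bash
# Hypothetical example .env entries written by the TUI
OPENSEARCH_PASSWORD='MyS3cure!Passw0rd'
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD='An0ther!Secret'
```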


@@ -9,9 +9,9 @@ import TabItem from '@theme/TabItem';
OpenRAG recognizes environment variables from the following sources:
* [Environment variables](#configure-environment-variables) - Values set in the `.env` file.
* [Environment variables](#configure-environment-variables): Values set in the `.env` file.
* [Langflow runtime overrides](#langflow-runtime-overrides): Langflow components can set environment variables at runtime.
* [Default or fallback values](#default-values-and-fallbacks): These values are default or fallback values if OpenRAG doesn't find a value.
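This precedence follows the same pattern as shell default expansion: an explicitly set value wins, and a hard-coded fallback applies otherwise. A sketch using a hypothetical variable name:

```bash
# EXAMPLE_SETTING is a hypothetical name, not a real OpenRAG variable.
# If the .env file (or the shell) sets it, that value is used;
# otherwise the built-in default applies.
resolved="${EXAMPLE_SETTING:-built-in-default}"
echo "EXAMPLE_SETTING resolved to: $resolved"
```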
## Configure environment variables


@@ -1,5 +1,5 @@
---
title: Troubleshoot OpenRAG
slug: /support/troubleshoot
---
@@ -13,7 +13,7 @@ This page provides troubleshooting advice for issues you might encounter when us
Check that `OPENSEARCH_PASSWORD` set in [Environment variables](/reference/configuration) meets requirements.
The password must be at least 8 characters long and contain at least one uppercase letter, one lowercase letter, one digit, and one special character.
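You can check a candidate password against these rules locally before setting the variable. The following function is a sketch of the requirements listed above, not OpenRAG's actual validator:

```bash
# Returns success only if the password is at least 8 characters and
# contains an uppercase letter, a lowercase letter, a digit, and a
# special (non-alphanumeric) character.
check_password() {
  local pw=$1
  [[ ${#pw} -ge 8 ]] &&
    [[ $pw =~ [A-Z] ]] &&
    [[ $pw =~ [a-z] ]] &&
    [[ $pw =~ [0-9] ]] &&
    [[ $pw =~ [^a-zA-Z0-9] ]]
}

check_password 'Sup3r!Secret' && echo "password meets requirements"
```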
## OpenRAG fails to start from the TUI with operation not supported
This error occurs when starting OpenRAG with the TUI in [WSL (Windows Subsystem for Linux)](https://learn.microsoft.com/en-us/windows/wsl/install).
@@ -21,27 +21,36 @@ The error occurs because OpenRAG is running within a WSL environment, so `webbro
To access the OpenRAG application, open a web browser and enter `http://localhost:3000` in the address bar.
## OpenRAG installation fails with unable to get local issuer certificate
If you are installing OpenRAG on macOS, and the installation fails with `unable to get local issuer certificate`, run the following command, and then retry the installation:
```bash
open "/Applications/Python VERSION/Install Certificates.command"
```
Replace `VERSION` with your installed Python version, such as `3.13`.
## Langflow connection issues
Verify the `LANGFLOW_SUPERUSER` credentials set in [Environment variables](/reference/configuration) are correct.
## Container out of memory errors
Increase Docker memory allocation or use [docker-compose-cpu.yml](https://github.com/langflow-ai/openrag/blob/main/docker-compose-cpu.yml) to deploy OpenRAG.
## Memory issue with Podman on macOS
If you're using Podman on macOS, you might need to increase VM memory on your Podman machine.
This example increases the machine size to 8 GB of RAM, which should be sufficient to run OpenRAG.
```bash
podman machine stop
podman machine rm
podman machine init --memory 8192 # 8 GB example
podman machine start
```
## Port conflicts
Ensure that ports 3000, 7860, 8000, 9200, and 5601 are available.
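To see whether any of these ports is already taken, you can probe them from bash (this uses bash's `/dev/tcp` feature, so it won't work in plain `sh`):

```bash
# Probe each port OpenRAG needs; a successful connection means
# something is already listening there.
for port in 3000 7860 8000 9200 5601; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port is already in use"
  else
    echo "port $port appears free"
  fi
done
```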
@@ -52,7 +61,7 @@ If Docling ingestion fails with an OCR-related error and mentions `easyocr` is m
`easyocr` is already included as a dependency in OpenRAG's `pyproject.toml`. Project-managed installations using `uv sync` and `uv run` always sync dependencies directly from your `pyproject.toml`, so they should have `easyocr` installed.
If you're running OpenRAG with `uvx openrag`, `uvx` creates a cached, ephemeral environment that doesn't modify your project. This cache can become stale.
On macOS, this cache directory is typically a user cache directory such as `/Users/USER_NAME/.cache/uv`.
1. To clear the uv cache, run:
@@ -66,87 +75,135 @@ On macOS, this cache directory is typically a user cache directory such as `/Use
If you do not need OCR, you can disable OCR-based processing in your ingestion settings to avoid requiring `easyocr`.
## Upgrade fails due to Langflow container already exists {#langflow-container-already-exists-during-upgrade}
If you encounter a `langflow container already exists` error when upgrading OpenRAG, this typically means you upgraded OpenRAG with `uv`, but you didn't remove or upgrade containers from a previous installation.
To resolve this issue, do the following:
First, try removing only the Langflow container, and then retry the upgrade in the OpenRAG TUI by clicking **Status** and then **Upgrade**.
<Tabs groupId="Container software">
<TabItem value="Podman" label="Podman">
1. Stop the Langflow container:
```bash
podman stop langflow
```
2. Remove the Langflow container:
```bash
podman rm langflow --force
```
</TabItem>
<TabItem value="Docker" label="Docker" default>
1. Stop the Langflow container:
```bash
docker stop langflow
```
2. Remove the Langflow container:
```bash
docker rm langflow --force
```
</TabItem>
</Tabs>
### Reinstall all containers
If reinstalling the Langflow container doesn't resolve the issue, or if you want a completely fresh installation, remove all OpenRAG containers and data, and then retry the upgrade.
:::warning Data loss
The complete reset removes all your data, including OpenSearch data, uploaded documents, and authentication. Your `.env` file is preserved, so your configuration settings remain intact.
:::
To reset your installation, stop your containers, and then completely remove them.
After removing the containers, retry the upgrade in the OpenRAG TUI by clicking **Status** and then **Upgrade**.
<Tabs groupId="Container software">
<TabItem value="Podman" label="Podman">
1. Stop all running containers:
```bash
podman stop --all
```
2. Remove all containers, including stopped containers:
```bash
podman rm --all --force
```
3. Remove all images:
```bash
podman rmi --all --force
```
4. Remove all volumes:
```bash
podman volume prune --force
```
5. Remove all networks except the default network:
```bash
podman network prune --force
```
6. Clean up any leftover data:
```bash
podman system prune --all --force --volumes
```
</TabItem>
<TabItem value="Docker" label="Docker" default>
1. Stop all running containers:
```bash
docker stop $(docker ps -q)
```
2. Remove all containers, including stopped containers:
```bash
docker rm --force $(docker ps -aq)
```
3. Remove all images:
```bash
docker rmi --force $(docker images -q)
```
4. Remove all volumes:
```bash
docker volume prune --force
```
5. Remove all networks except the default network:
```bash
docker network prune --force
```
6. Clean up any leftover data:
```bash
docker system prune --all --force --volumes
```
</TabItem>
</Tabs>