diff --git a/.DS_Store b/.DS_Store
index e98b18b1..ca39b6e3 100644
Binary files a/.DS_Store and b/.DS_Store differ
diff --git a/Dockerfile.langflow b/Dockerfile.langflow
index 2acb4877..86ee0ea5 100644
--- a/Dockerfile.langflow
+++ b/Dockerfile.langflow
@@ -7,7 +7,7 @@ ENV RUSTFLAGS="--cfg reqwest_unstable"
 
 # Accept build arguments for git repository and branch
 ARG GIT_REPO=https://github.com/langflow-ai/langflow.git
-ARG GIT_BRANCH=load_flows_autologin_false
+ARG GIT_BRANCH=test-openai-responses
 
 WORKDIR /app
 
diff --git a/README.md b/README.md
index df1d6451..a0178f28 100644
--- a/README.md
+++ b/README.md
@@ -62,7 +62,7 @@ LANGFLOW_CHAT_FLOW_ID=your_chat_flow_id
 LANGFLOW_INGEST_FLOW_ID=your_ingest_flow_id
 NUDGES_FLOW_ID=your_nudges_flow_id
 ```
-See extended configuration, including ingestion and optional variables: [docs/configure/configuration.md](docs/docs/configure/configuration.md)
+See extended configuration, including ingestion and optional variables: [docs/reference/configuration.md](docs/docs/reference/configuration.md)
 
 ### 3. Start OpenRAG
 ```bash
diff --git a/docker-compose.yml b/docker-compose.yml
index daa921ae..be31fb71 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -40,9 +40,9 @@ services:
 
   openrag-backend:
     image: phact/openrag-backend:${OPENRAG_VERSION:-latest}
-    #build:
-      #context: .
-      #dockerfile: Dockerfile.backend
+    # build:
+    #   context: .
+    #   dockerfile: Dockerfile.backend
     container_name: openrag-backend
     depends_on:
      - langflow
@@ -77,9 +77,10 @@ services:
 
   openrag-frontend:
     image: phact/openrag-frontend:${OPENRAG_VERSION:-latest}
-    #build:
-      #context: .
-      #dockerfile: Dockerfile.frontend
+    # build:
+    #   context: .
+    #   dockerfile: Dockerfile.frontend
+    # #dockerfile: Dockerfile.frontend
     container_name: openrag-frontend
     depends_on:
       - openrag-backend
@@ -92,6 +93,9 @@ services:
     volumes:
       - ./flows:/app/flows:z
     image: phact/openrag-langflow:${LANGFLOW_VERSION:-latest}
+    # build:
+    #   context: .
+    #   dockerfile: Dockerfile.langflow
     container_name: langflow
     ports:
       - "7860:7860"
@@ -99,7 +103,7 @@
       - OPENAI_API_KEY=${OPENAI_API_KEY}
       - LANGFLOW_LOAD_FLOWS_PATH=/app/flows
       - LANGFLOW_SECRET_KEY=${LANGFLOW_SECRET_KEY}
-      - JWT="dummy"
+      - JWT=None
      - OWNER=None
       - OWNER_NAME=None
       - OWNER_EMAIL=None
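The commented-out `build` stanzas above, including the new one added for the `langflow` service, exist so each image can be built from its local Dockerfile instead of pulled from Docker Hub. A minimal sketch of that workflow, assuming the stanzas are uncommented first and that the Dockerfiles sit in the repository root as the `context: .` lines suggest:

```bash
# Build the images locally, then start the stack.
# Service names are taken from the docker-compose.yml hunks above.
docker compose build openrag-backend openrag-frontend langflow
docker compose up -d
```
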
diff --git a/docs/docs/_partial-onboarding.mdx b/docs/docs/_partial-onboarding.mdx
index 5efbf2eb..44222371 100644
--- a/docs/docs/_partial-onboarding.mdx
+++ b/docs/docs/_partial-onboarding.mdx
@@ -5,10 +5,12 @@ import TabItem from '@theme/TabItem';
 
 The first time you start OpenRAG, whether using the TUI or a `.env` file, you must complete application onboarding.
 
-Values input during onboarding can be changed later in the OpenRAG **Settings** page, except for the language model and embedding model _provider_.
-**Your provider can only be selected once, and you must use the same provider for your language model and embedding model.**
-The language model can be changed, but the embeddings model cannot be changed.
-To change your provider selection, you must completely reinstall OpenRAG.
+Most values from onboarding can be changed later in the OpenRAG **Settings** page, but there are important restrictions.
+
+The **language model provider** and **embeddings model provider** can only be selected at onboarding, and you must use the same provider for your language model and embedding model.
+To change your provider selection later, you must completely reinstall OpenRAG.
+
+The **language model** can be changed later in **Settings**, but the **embeddings model** cannot be changed later.
 
@@ -36,14 +38,12 @@ To change your provider selection, you must completely reinstall OpenRAG.
 :::
 
 1. Enter your Ollama server's base URL address. The default Ollama server address is `http://localhost:11434`.
-   Since OpenRAG is running in a container, you may need to change `localhost` to access services outside of the container. For example, change `http://localhost:11434` to `http://host.docker.internal:11434` to connect to Ollama.
-   OpenRAG automatically sends a test connection to your Ollama server to confirm connectivity.
+   OpenRAG automatically transforms `localhost` to access services outside of the container, and sends a test connection to your Ollama server to confirm connectivity.
 2. Select the **Embedding Model** and **Language Model** your Ollama server is running.
-   OpenRAG automatically lists the available models from your Ollama server.
+   OpenRAG retrieves the available models from your Ollama server.
 3. To load 2 sample PDFs, enable **Sample dataset**. This is recommended, but not required.
 4. Click **Complete**.
 5. Continue with the [Quickstart](/quickstart).
-
-</TabItem>
+</TabItem>
\ No newline at end of file
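The reworded onboarding step says OpenRAG now transforms `localhost` itself when it tests connectivity from inside its container. If that check fails, the removed doc line still describes the manual equivalent: target `host.docker.internal` instead of `localhost`. A sketch of verifying this by hand, assuming `curl` is available in the container and using Ollama's `/api/tags` model-listing endpoint:

```bash
# From the host, localhost reaches Ollama directly:
curl -s http://localhost:11434/api/tags

# From inside a container, localhost is the container itself, so the
# host's Ollama server is reached via the Docker host gateway instead:
docker compose exec openrag-backend curl -s http://host.docker.internal:11434/api/tags
```
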
diff --git a/docs/docs/configure/configuration.mdx b/docs/docs/configure/configuration.mdx
deleted file mode 100644
index d8058254..00000000
--- a/docs/docs/configure/configuration.mdx
+++ /dev/null
@@ -1,110 +0,0 @@
----
-title: Configuration
-slug: /configure/configuration
----
-
-import PartialExternalPreview from '@site/docs/_partial-external-preview.mdx';
-
-<PartialExternalPreview />
-
-OpenRAG supports multiple configuration methods with the following priority:
-
-1. **Environment Variables** (highest priority)
-2. **Configuration File** (`config.yaml`)
-3. **Langflow Flow Settings** (runtime override)
-4. **Default Values** (fallback)
-
-## Configuration File
-
-Create a `config.yaml` file in the project root to configure OpenRAG:
-
-```yaml
-# OpenRAG Configuration File
-provider:
-  model_provider: "openai"  # openai, anthropic, azure, etc.
-  api_key: "your-api-key"   # or use OPENAI_API_KEY env var
-
-knowledge:
-  embedding_model: "text-embedding-3-small"
-  chunk_size: 1000
-  chunk_overlap: 200
-  ocr: true
-  picture_descriptions: false
-
-agent:
-  llm_model: "gpt-4o-mini"
-  system_prompt: "You are a helpful AI assistant..."
-```
-
-## Environment Variables
-
-Environment variables will override configuration file settings. You can still use `.env` files:
-
-```bash
-cp .env.example .env
-```
-
-## Required Variables
-
-| Variable | Description |
-| ----------------------------- | ------------------------------------------- |
-| `OPENAI_API_KEY` | Your OpenAI API key |
-| `OPENSEARCH_PASSWORD` | Password for OpenSearch admin user |
-| `LANGFLOW_SUPERUSER` | Langflow admin username |
-| `LANGFLOW_SUPERUSER_PASSWORD` | Langflow admin password |
-| `LANGFLOW_CHAT_FLOW_ID` | ID of your Langflow chat flow |
-| `LANGFLOW_INGEST_FLOW_ID` | ID of your Langflow ingestion flow |
-| `NUDGES_FLOW_ID` | ID of your Langflow nudges/suggestions flow |
-
-## Ingestion Configuration
-
-| Variable | Description |
-| ------------------------------ | ------------------------------------------------------ |
-| `DISABLE_INGEST_WITH_LANGFLOW` | Disable Langflow ingestion pipeline (default: `false`) |
-
-- `false` or unset: Uses Langflow pipeline (upload → ingest → delete)
-- `true`: Uses traditional OpenRAG processor for document ingestion
-
-## Optional Variables
-
-| Variable | Description |
-| ------------------------------------------------------------------------- | ------------------------------------------------------------------ |
-| `LANGFLOW_PUBLIC_URL` | Public URL for Langflow (default: `http://localhost:7860`) |
-| `GOOGLE_OAUTH_CLIENT_ID` / `GOOGLE_OAUTH_CLIENT_SECRET` | Google OAuth authentication |
-| `MICROSOFT_GRAPH_OAUTH_CLIENT_ID` / `MICROSOFT_GRAPH_OAUTH_CLIENT_SECRET` | Microsoft OAuth |
-| `WEBHOOK_BASE_URL` | Base URL for webhook endpoints |
-| `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY` | AWS integrations |
-| `SESSION_SECRET` | Session management (default: auto-generated, change in production) |
-| `LANGFLOW_KEY` | Explicit Langflow API key (auto-generated if not provided) |
-| `LANGFLOW_SECRET_KEY` | Secret key for Langflow internal operations |
-
-## OpenRAG Configuration Variables
-
-These environment variables override settings in `config.yaml`:
-
-### Provider Settings
-
-| Variable | Description | Default |
-| ------------------ | ---------------------------------------- | -------- |
-| `MODEL_PROVIDER` | Model provider (openai, anthropic, etc.) | `openai` |
-| `PROVIDER_API_KEY` | API key for the model provider | |
-| `OPENAI_API_KEY` | OpenAI API key (backward compatibility) | |
-
-### Knowledge Settings
-
-| Variable | Description | Default |
-| ------------------------------ | --------------------------------------- | ------------------------ |
-| `EMBEDDING_MODEL` | Embedding model for vector search | `text-embedding-3-small` |
-| `CHUNK_SIZE` | Text chunk size for document processing | `1000` |
-| `CHUNK_OVERLAP` | Overlap between chunks | `200` |
-| `OCR_ENABLED` | Enable OCR for image processing | `true` |
-| `PICTURE_DESCRIPTIONS_ENABLED` | Enable picture descriptions | `false` |
-
-### Agent Settings
-
-| Variable | Description | Default |
-| --------------- | --------------------------------- | ------------------------ |
-| `LLM_MODEL` | Language model for the chat agent | `gpt-4o-mini` |
-| `SYSTEM_PROMPT` | System prompt for the agent | Default assistant prompt |
-
-See `.env.example` for a complete list with descriptions, and `docker-compose*.yml` for runtime usage.
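The page deleted above documented OpenRAG's configuration precedence: environment variables override `config.yaml`, which in turn overrides flow settings and defaults. A sketch of exercising that override, with variable names taken from the page's Knowledge Settings table and assuming `docker-compose.yml` forwards these variables to the backend container:

```bash
# Append overrides to the .env file that docker compose reads,
# then recreate the backend so the new values take effect.
cat >> .env <<'EOF'
CHUNK_SIZE=500
CHUNK_OVERLAP=100
EOF
docker compose up -d openrag-backend
```
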
diff --git a/docs/docs/core-components/agents.mdx b/docs/docs/core-components/agents.mdx
index 9b4adb4b..3ee4617b 100644
--- a/docs/docs/core-components/agents.mdx
+++ b/docs/docs/core-components/agents.mdx
@@ -52,7 +52,7 @@ This filter is the [Knowledge filter](/knowledge#create-knowledge-filters), and
 
-For an example of changing out the agent's LLM in OpenRAG, see the [Quickstart](/quickstart#change-components).
+For an example of changing out the agent's language model in OpenRAG, see the [Quickstart](/quickstart#change-components).
 
 To restore the flow to its initial state, in OpenRAG, click