
# OpenRAG


OpenRAG is a Retrieval-Augmented Generation (RAG) platform for intelligent document search and AI-powered conversations. Users can upload, process, and query documents through a chat interface backed by large language models and semantic search. The system uses Langflow for document ingestion, retrieval workflows, and intelligent nudges, providing a seamless RAG experience. Built with Starlette, Next.js, OpenSearch, and Langflow.

Quick Start   |   TUI Interface   |   Docker Deployment   |   Development   |   Troubleshooting

## Quick Start

### Prerequisites

- Docker or Podman with Compose installed
- Make (for development commands)

### 1. Environment Setup

```shell
# Clone the repository and set up the environment
git clone https://github.com/langflow-ai/openrag.git
cd openrag
make setup  # Creates .env and installs dependencies
```

### 2. Configure Environment

Edit `.env` with your API keys and credentials:

```shell
# Required
OPENAI_API_KEY=your_openai_api_key
OPENSEARCH_PASSWORD=your_secure_password
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=your_secure_password
LANGFLOW_CHAT_FLOW_ID=your_chat_flow_id
LANGFLOW_INGEST_FLOW_ID=your_ingest_flow_id
NUDGES_FLOW_ID=your_nudges_flow_id
```

For extended configuration, including ingestion and optional variables, see docs/reference/configuration.mdx.
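Before starting the stack, it can help to confirm that every required variable is actually present in `.env`. A minimal sketch of such a pre-flight check (this helper is not part of OpenRAG; it only knows the variable names listed above):

```shell
# check_env: hypothetical helper (not shipped with OpenRAG) that reports
# which of the README's required variables are missing from an env file.
check_env() {
  envfile="$1"
  for var in OPENAI_API_KEY OPENSEARCH_PASSWORD LANGFLOW_SUPERUSER \
             LANGFLOW_SUPERUSER_PASSWORD LANGFLOW_CHAT_FLOW_ID \
             LANGFLOW_INGEST_FLOW_ID NUDGES_FLOW_ID; do
    # A variable counts as set if a "NAME=" line exists in the file.
    grep -q "^${var}=" "$envfile" || echo "missing: $var"
  done
}

# Usage: check_env .env
```

If the function prints nothing, all required variables are present (it does not validate their values).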

### 3. Start OpenRAG

```shell
# Full stack with GPU support
make dev

# Or CPU only
make dev-cpu
```

Once the services are up, ingest and retrieve documents with the OpenRAG Quickstart.

## TUI Interface

OpenRAG includes a powerful Terminal User Interface (TUI) for easy setup, configuration, and monitoring. The TUI provides a user-friendly way to manage your OpenRAG installation without complex command-line operations.

*(Screenshot: the OpenRAG TUI interface)*

### Launch OpenRAG with the TUI

From the repository root, run:

```shell
# Install dependencies first
uv sync

# Launch the TUI
uv run openrag
```

For the full TUI guide, see docs/get-started/tui.mdx.

## Docker Deployment

The repository includes two Docker Compose files. Both deploy the same applications and containers; they differ only in hardware support.

- `docker-compose.yml` deploys OpenRAG with GPU support for accelerated AI processing.
- `docker-compose-cpu.yml` is a CPU-only version of OpenRAG for systems without GPU support. Use this Compose file in environments where GPU drivers aren't available.

1. Clone the OpenRAG repository.

   ```shell
   git clone https://github.com/langflow-ai/openrag.git
   cd openrag
   ```

2. Build and start all services.

   For the GPU-accelerated deployment, run:

   ```shell
   docker compose build
   docker compose up -d
   ```

   For environments without GPU support, run:

   ```shell
   docker compose -f docker-compose-cpu.yml up -d
   ```
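Which Compose file to pass with `-f` depends on whether an NVIDIA GPU toolchain is available. As a minimal sketch (this helper is hypothetical and not part of the repository; note that `nvidia-smi` being present does not by itself guarantee container GPU access, which also requires the NVIDIA container toolkit):

```shell
# compose_file: hypothetical helper that picks the CPU-only Compose file
# when no NVIDIA driver toolchain is detected on the host.
compose_file() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    echo "docker-compose.yml"
  else
    echo "docker-compose-cpu.yml"
  fi
}

# Usage: docker compose -f "$(compose_file)" up -d
```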

For more information, see docs/get-started/docker.mdx.

## Troubleshooting

For common issues and fixes, see docs/support/troubleshoot.mdx.

## Development

If you want to contribute to OpenRAG or set up a development environment, see the development guide:

📚 See CONTRIBUTING.md for detailed development instructions

### Quick Development Commands

```shell
make help                    # See all available commands
make setup                   # Initial development setup
make infra                   # Start infrastructure services
make backend                 # Run backend locally
make frontend                # Run frontend locally
```