OpenRAG

OpenRAG is a comprehensive Retrieval-Augmented Generation (RAG) platform for intelligent document search and AI-powered conversations. Users can upload, process, and query documents through a chat interface backed by large language models and semantic search. The system uses Langflow for document ingestion, retrieval workflows, and intelligent nudges, providing a seamless RAG experience. Built with Starlette and Next.js. Powered by OpenSearch, Langflow, and Docling.

Quickstart   |   TUI Interface   |   Docker Deployment   |   Development   |   Troubleshooting

Quickstart

Use the OpenRAG Terminal User Interface (TUI) to manage your OpenRAG installation without complex command-line operations.

To launch OpenRAG with the TUI, do the following:

  1. Clone the OpenRAG repository.

    git clone https://github.com/langflow-ai/openrag.git
    cd openrag
    
  2. From the repository root, install the dependencies and start the TUI:

    # Install dependencies first
    uv sync
    
    # Launch the TUI
    uv run openrag
    

    The TUI opens and guides you through OpenRAG setup.

For the full TUI installation guide, see TUI.
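
The quickstart assumes the uv package manager is already installed. If it isn't, Astral's standalone installer is one way to get it (a sketch; see the uv documentation for other installation methods):

    # Install uv, then restart your shell so it's on PATH
    curl -LsSf https://astral.sh/uv/install.sh | sh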

Docker installation

If you prefer to run OpenRAG with Docker, the repository includes two Docker Compose files. Both deploy the same applications and containers locally, but they target different hardware environments.

  • docker-compose.yml deploys OpenRAG with GPU acceleration. It requires an NVIDIA GPU with CUDA support and compatible NVIDIA drivers installed on the OpenRAG host machine; a quick way to verify both is sketched after this list.

  • docker-compose-cpu.yml deploys a CPU-only version of OpenRAG. Use this Compose file in environments where GPU drivers aren't available.
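
If you're unsure whether the GPU path will apply, two quick host-side checks are the NVIDIA driver and the Docker GPU runtime. This is only a sketch; the CUDA image tag is an arbitrary example, and the second command assumes the NVIDIA Container Toolkit is installed:

    # Driver check: nvidia-smi ships with the NVIDIA driver
    nvidia-smi

    # Docker passthrough check: the GPU should be visible inside a container
    docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi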

Both Docker deployments depend on docling serve running on port 5001 on the host machine, which enables Mac MLX support for document processing. Installing OpenRAG with the TUI starts docling serve automatically, but for a Docker deployment you must start the docling serve process manually.
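
Once docling serve is up (step 3 below), you can also confirm the port is reachable with a plain curl against the /docs route that docling_ctl.py status reports:

    # Expect an HTTP 200 from the docling serve endpoint on the host
    curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:5001/docs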

To install OpenRAG with Docker:

  1. Clone the OpenRAG repository.

    git clone https://github.com/langflow-ai/openrag.git
    cd openrag
    
  2. Install dependencies.

    uv sync
    
  3. Start docling serve on the host machine.

    uv run python scripts/docling_ctl.py start --port 5001
    
  4. Confirm docling serve is running.

    uv run python scripts/docling_ctl.py status
    

    Successful result:

    Status: running
    Endpoint: http://127.0.0.1:5001
    Docs: http://127.0.0.1:5001/docs
    PID: 27746
    
  5. Build and start all services.

    For the GPU-accelerated deployment, run:

    docker compose build
    docker compose up -d
    

    For environments without GPU support, run:

    docker compose -f docker-compose-cpu.yml up -d
    

    Either Docker Compose file starts the same five containers:

    Container Name           Default Address          Purpose
    OpenRAG Backend          http://localhost:8000    FastAPI server and core functionality.
    OpenRAG Frontend         http://localhost:3000    React web interface for users.
    Langflow                 http://localhost:7860    AI workflow engine and flow management.
    OpenSearch               http://localhost:9200    Vector database for document storage.
    OpenSearch Dashboards    http://localhost:5601    Database administration interface.
  6. Access the OpenRAG application at http://localhost:3000 and continue with the Quickstart.

    To stop docling serve, run the command below. The OpenRAG containers themselves are stopped with standard Docker Compose commands, as sketched after these steps.

    uv run python scripts/docling_ctl.py stop
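
The stop command above only affects docling serve. The containers are managed with standard Docker Compose commands, which aren't OpenRAG-specific (add -f docker-compose-cpu.yml if you started the CPU deployment):

    # List the OpenRAG containers and their status
    docker compose ps

    # Stop and remove the containers
    docker compose down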
    

For more information, see Install with Docker.

Troubleshooting

For common issues and fixes, see Troubleshoot.
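
For Docker deployments, a useful first step before digging into the guide is to check the container logs (standard Docker Compose commands; pass a service name from the Compose file to narrow the output):

    # Tail recent log lines from all OpenRAG containers
    docker compose logs --tail 100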

Development

For developers who want to contribute to OpenRAG or set up a development environment, see CONTRIBUTING.md.