OpenRAG


OpenRAG is a Retrieval-Augmented Generation (RAG) platform for intelligent document search and AI-powered conversations. Users upload, process, and query documents through a chat interface backed by large language models and semantic search. Langflow drives document ingestion, retrieval workflows, and intelligent nudges, providing a seamless RAG experience. OpenRAG is built with Starlette and Next.js, and powered by OpenSearch, Langflow, and Docling.


Quickstart   |   TUI Interface   |   Docker Deployment   |   Development   |   Troubleshooting

Quickstart

Use the OpenRAG Terminal User Interface (TUI) to manage your OpenRAG installation without complex command-line operations.

To launch OpenRAG with the TUI, do the following:

  1. Clone the OpenRAG repository.

    git clone https://github.com/langflow-ai/openrag.git
    cd openrag
    
  2. From the repository root, install dependencies and start the TUI:

    # Install dependencies first
    uv sync
    
    # Launch the TUI
    uv run openrag
    

    The TUI opens and guides you through OpenRAG setup.

For the full TUI installation guide, see TUI.

Docker installation

If you prefer to run OpenRAG with Docker, the repository includes two Docker Compose files. They deploy the same applications and containers locally but target different hardware environments.

  • docker-compose.yml deploys OpenRAG with GPU acceleration. It requires an NVIDIA GPU with CUDA support and compatible NVIDIA drivers installed on the OpenRAG host machine.

  • docker-compose-cpu.yml is a CPU-only deployment for systems without GPU support. Use this file in environments where GPU drivers aren't available.

Both Docker deployments require docling serve to be running on port 5001 on the host machine; this also enables Mac MLX support for document processing. Installing OpenRAG with the TUI starts docling serve automatically, but for a Docker deployment you must start the docling serve process manually.
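Before bringing up the Docker services, you can verify that docling serve is reachable with a short Python check. This is a minimal sketch: it assumes the default port 5001 and probes the /docs endpoint reported by the status command; adjust both if you changed the docling serve configuration.

```python
import urllib.request
import urllib.error


def docling_reachable(base_url: str = "http://127.0.0.1:5001", timeout: float = 2.0) -> bool:
    """Return True if docling serve answers HTTP requests at base_url."""
    try:
        # /docs is the API documentation page exposed by docling serve.
        with urllib.request.urlopen(f"{base_url}/docs", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    if docling_reachable():
        print("docling serve is up")
    else:
        print("docling serve is not reachable; start it before running Docker Compose")
```

If the check fails, rerun the docling_ctl.py start command from the steps below before continuing.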

To install OpenRAG with Docker:

  1. Clone the OpenRAG repository.

    git clone https://github.com/langflow-ai/openrag.git
    cd openrag
    
  2. Install dependencies.

    uv sync
    
  3. Start docling serve on the host machine.

    uv run python scripts/docling_ctl.py start --port 5001
    
  4. Confirm docling serve is running.

    uv run python scripts/docling_ctl.py status
    

    Successful result:

    Status: running
    Endpoint: http://127.0.0.1:5001
    Docs: http://127.0.0.1:5001/docs
    PID: 27746
    
  5. Build and start all services.

    For the GPU-accelerated deployment, run:

    docker compose build
    docker compose up -d
    

    For environments without GPU support, run:

    docker compose -f docker-compose-cpu.yml up -d
    

    The OpenRAG Docker Compose file starts five containers:

    Container               Default Address         Purpose
    OpenRAG Backend         http://localhost:8000   Starlette API server and core functionality.
    OpenRAG Frontend        http://localhost:3000   React web interface for users.
    Langflow                http://localhost:7860   AI workflow engine and flow management.
    OpenSearch              http://localhost:9200   Vector database for document storage.
    OpenSearch Dashboards   http://localhost:5601   Database administration interface.
  6. Access the OpenRAG application at http://localhost:3000 and continue with the Quickstart.

    To stop docling serve, run:

    uv run python scripts/docling_ctl.py stop
    
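Once the containers are up, the endpoints from the container table above can be polled with a small script. This is a sketch under two assumptions: the default ports were not remapped in your Compose file, and each service accepts plain HTTP on localhost (an OpenSearch instance configured for HTTPS with authentication will show as down here even when healthy). Services may also need a minute after startup before they respond.

```python
import urllib.request
import urllib.error

# Default addresses from the container table above.
SERVICES = {
    "OpenRAG Backend": "http://localhost:8000",
    "OpenRAG Frontend": "http://localhost:3000",
    "Langflow": "http://localhost:7860",
    "OpenSearch": "http://localhost:9200",
    "OpenSearch Dashboards": "http://localhost:5601",
}


def check(url: str, timeout: float = 1.0) -> bool:
    """Return True if the endpoint answers any HTTP response, including errors."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # Server is up; it just rejected this particular request.
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    for name, url in SERVICES.items():
        state = "up" if check(url) else "DOWN"
        print(f"{name:<22} {url:<28} {state}")
```

Rerun the script until all five services report up, then open the frontend address in a browser.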

For more information, see Install with Docker.

Troubleshooting

For common issues and fixes, see Troubleshoot.

Development

To contribute to OpenRAG or set up a development environment, see CONTRIBUTING.md.