# OpenRAG

OpenRAG is a comprehensive Retrieval-Augmented Generation platform that enables intelligent document search and AI-powered conversations. Users can upload, process, and query documents through a chat interface backed by large language models and semantic search. The system uses Langflow for document ingestion, retrieval workflows, and intelligent nudges, providing a seamless RAG experience. It's built with Starlette, Next.js, and OpenSearch, with Langflow integration.
## Quickstart

### Prerequisites
- Docker or Podman with Compose installed
- Make (for development commands)
### Install and start OpenRAG
- Set up the development environment.

  ```bash
  # Clone and set up the environment
  git clone https://github.com/langflow-ai/openrag.git
  cd openrag
  make setup  # Creates .env and installs dependencies
  ```
- Configure the `.env` file with your API keys and credentials.

  ```bash
  # Required
  OPENAI_API_KEY=your_openai_api_key
  OPENSEARCH_PASSWORD=your_secure_password
  LANGFLOW_SUPERUSER=admin
  LANGFLOW_SUPERUSER_PASSWORD=your_secure_password
  LANGFLOW_CHAT_FLOW_ID=your_chat_flow_id
  LANGFLOW_INGEST_FLOW_ID=your_ingest_flow_id
  NUDGES_FLOW_ID=your_nudges_flow_id
  ```

  For extended configuration, including ingestion and optional variables, see `docs/reference/configuration.mdx`.
- Start OpenRAG.

  ```bash
  # Full stack with GPU support
  make dev

  # Or CPU only
  make dev-cpu
  ```
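If startup fails with authentication or missing-key errors, a quick way to confirm that `.env` defines every required variable is a one-off shell check. This is only a sketch; the variable names are the ones from the configuration step above:

```bash
# Sketch: report any required variable missing from .env
# (variable names taken from the configuration step above).
for var in OPENAI_API_KEY OPENSEARCH_PASSWORD LANGFLOW_SUPERUSER \
           LANGFLOW_SUPERUSER_PASSWORD LANGFLOW_CHAT_FLOW_ID \
           LANGFLOW_INGEST_FLOW_ID NUDGES_FLOW_ID; do
  grep -qs "^${var}=" .env || echo "Missing: ${var}"
done
```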
Access the services:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- Langflow: http://localhost:7860
- OpenSearch: http://localhost:9200
- OpenSearch Dashboards: http://localhost:5601
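Once the stack is up, a quick sanity check is to poll each service. The loop below is a sketch that assumes the default localhost ports listed above; a service behind authentication still counts as responding as long as it answers HTTP:

```bash
# Sketch: check that each service answers on its default port
# (ports are the defaults listed above; adjust if you changed them).
for svc in "Frontend:3000" "Backend API:8000" "Langflow:7860" \
           "OpenSearch:9200" "OpenSearch Dashboards:5601"; do
  name="${svc%:*}"; port="${svc##*:}"
  if curl -s -m 3 "http://localhost:${port}" >/dev/null 2>&1; then
    echo "${name} (port ${port}): responding"
  else
    echo "${name} (port ${port}): not responding"
  fi
done
```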
With OpenRAG started, ingest and retrieve documents with the OpenRAG Quickstart.
## TUI interface
OpenRAG includes a powerful Terminal User Interface (TUI) for easy setup, configuration, and monitoring. The TUI provides a user-friendly way to manage your OpenRAG installation without complex command-line operations.
### Launch OpenRAG with the TUI
From the repository root, run:
```bash
# Install dependencies first
uv sync

# Launch the TUI
uv run openrag
```
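Both commands require the `uv` CLI. A minimal pre-flight sketch to confirm it is installed:

```bash
# Sketch: confirm the uv CLI is on PATH before launching the TUI.
if command -v uv >/dev/null 2>&1; then
  uv --version
else
  echo "uv is not installed; see https://docs.astral.sh/uv/ for install options"
fi
```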
For the full TUI guide, see TUI.
## Docker Deployment
The repository includes two Docker Compose files. Both deploy the same applications and containers, but target different environments.
- `docker-compose.yml` is an OpenRAG deployment for environments with GPU support. GPU support requires an NVIDIA GPU with CUDA support and compatible NVIDIA drivers installed on the OpenRAG host machine.
- `docker-compose-cpu.yml` is a CPU-only version of OpenRAG for systems without GPU support. Use this Compose file for environments where GPU drivers aren't available.
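If you're unsure which file applies, one simple heuristic (a sketch, not part of the repository's tooling) is to check whether the NVIDIA driver responds:

```bash
# Sketch: pick a Compose file based on whether nvidia-smi works.
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
  COMPOSE_FILE=docker-compose.yml
else
  COMPOSE_FILE=docker-compose-cpu.yml
fi
echo "Using ${COMPOSE_FILE}"
```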
- Clone the OpenRAG repository.

  ```bash
  git clone https://github.com/langflow-ai/openrag.git
  cd openrag
  ```
- Build and start all services.

  For the GPU-accelerated deployment, run:

  ```bash
  docker compose build
  docker compose up -d
  ```

  For environments without GPU support, run:

  ```bash
  docker compose -f docker-compose-cpu.yml up -d
  ```
For more information, see Deploy with Docker.
## Troubleshooting
For common issues and fixes, see Troubleshoot.
## Development
For developers wanting to contribute to OpenRAG or set up a development environment, please see our comprehensive development guide:
📚 See CONTRIBUTING.md for detailed development instructions
### Quick Development Commands
```bash
make help      # See all available commands
make setup     # Initial development setup
make infra     # Start infrastructure services
make backend   # Run backend locally
make frontend  # Run frontend locally
```