# OpenRAG
OpenRAG is a Retrieval-Augmented Generation (RAG) platform for intelligent document search and AI-powered conversations. Users upload, process, and query documents through a chat interface backed by large language models and semantic search. Langflow powers document ingestion, retrieval workflows, and intelligent nudges. Built with Starlette, Next.js, OpenSearch, and Langflow.
## 🚀 Quick Start

### Prerequisites
- Docker or Podman with Compose installed
- Make (for development commands)
### 1. Environment Setup

```bash
# Clone the repository and set up the environment
git clone https://github.com/langflow-ai/openrag.git
cd openrag
make setup  # Creates .env and installs dependencies
```
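Before editing anything, you can sanity-check the result of `make setup` (assuming it creates `.env` in the repository root, as noted above):

```bash
# Confirm the environment file exists and see which keys still need real values
ls -la .env
grep -E 'OPENAI_API_KEY|OPENSEARCH_PASSWORD|LANGFLOW_' .env
```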
### 2. Configure Environment

Edit `.env` with your API keys and credentials:

```bash
# Required
OPENAI_API_KEY=your_openai_api_key
OPENSEARCH_PASSWORD=your_secure_password
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=your_secure_password
LANGFLOW_CHAT_FLOW_ID=your_chat_flow_id
LANGFLOW_INGEST_FLOW_ID=your_ingest_flow_id
NUDGES_FLOW_ID=your_nudges_flow_id
```
See extended configuration, including ingestion and optional variables: docs/reference/configuration.md
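For the password variables, one option is to generate random values rather than inventing them by hand. This uses standard `openssl` tooling and is not an OpenRAG requirement; paste the output into `.env`:

```bash
# Generate random credentials for the password fields in .env
echo "OPENSEARCH_PASSWORD=$(openssl rand -base64 24)"
echo "LANGFLOW_SUPERUSER_PASSWORD=$(openssl rand -base64 24)"
```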
### 3. Start OpenRAG

```bash
# Full stack with GPU support
make dev

# Or CPU only
make dev-cpu
```
Access the services:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- Langflow: http://localhost:7860
- OpenSearch: http://localhost:9200
- OpenSearch Dashboards: http://localhost:5601
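Once the stack is up, you can spot-check the services from a terminal. The exact OpenSearch flags depend on how your compose file configures TLS and authentication, so adjust the scheme, user, and password to match your `.env`:

```bash
# Backend API: print the HTTP status code of the root path
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8000

# Langflow health endpoint (path may vary by version)
curl -s http://localhost:7860/health

# OpenSearch cluster health; -k accepts a self-signed certificate
curl -sk -u admin:"$OPENSEARCH_PASSWORD" https://localhost:9200/_cluster/health
```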
## 🖥️ TUI Interface
OpenRAG includes a Terminal User Interface (TUI) for setup, configuration, and monitoring. It provides a user-friendly way to manage your OpenRAG installation without stringing together individual command-line operations.
### Launching the TUI

```bash
# Install dependencies first
uv sync

# Launch the TUI
uv run openrag
```
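If `uv` is not installed yet, Astral's standalone installer is one way to get it (see the uv documentation for alternatives such as pipx or Homebrew):

```bash
# Install uv, the Python package and project manager used to run the TUI
curl -LsSf https://astral.sh/uv/install.sh | sh
```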
### TUI Features
See the full TUI guide for features, navigation, and benefits: docs/get-started/tui.mdx
## 🐳 Docker Deployment

### Standard Deployment

```bash
# Build and start all services
docker compose build
docker compose up -d
```
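After the containers come up, plain Compose commands are enough to check state and tail logs. The service name is a placeholder; use the names defined in `docker-compose.yml`:

```bash
# List service status, then follow logs for one service
docker compose ps
docker compose logs -f <service-name>
```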
### CPU-Only Deployment

For environments without GPU support:

```bash
docker compose -f docker-compose-cpu.yml up -d
```
More deployment commands and tips: docs/get-started/docker.mdx
## 🔧 Troubleshooting

### Podman on macOS

If using Podman on macOS, you may need to increase VM memory:

```bash
podman machine stop
podman machine rm
podman machine init --memory 8192  # 8 GB example
podman machine start
```
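To confirm the new allocation took effect (the JSON layout differs between Podman versions):

```bash
# The Memory field should report the new value in MiB
podman machine inspect | grep -i memory
```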
### Common Issues
See common issues and fixes: docs/reference/troubleshooting.mdx
## 🛠️ Development
For developers wanting to contribute to OpenRAG or set up a development environment, please see our comprehensive development guide:
📚 See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed development instructions
The contributing guide includes:
- Complete development environment setup
- Local development workflows
- Testing and debugging procedures
- Code style guidelines
- Architecture overview
- Pull request guidelines
### Quick Development Commands

```bash
make help      # See all available commands
make setup     # Initial development setup
make infra     # Start infrastructure services
make backend   # Run backend locally
make frontend  # Run frontend locally
```
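A typical local loop based on these targets is to keep the infrastructure containers running and start the backend and frontend in separate terminals:

```bash
# Terminal 1: infrastructure services (OpenSearch, Langflow, etc.)
make infra

# Terminal 2: backend
make backend

# Terminal 3: frontend
make frontend
```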