OpenRAG


OpenRAG is a Retrieval-Augmented Generation (RAG) platform for intelligent document search and AI-powered conversations. Users can upload, process, and query documents through a chat interface backed by large language models and semantic search. The system uses Langflow for document ingestion, retrieval workflows, and intelligent nudges, providing a seamless RAG experience. Built with Starlette and Next.js. Powered by OpenSearch, Langflow, and Docling.

Ask DeepWiki

Quickstart   |   TUI Interface   |   Docker Deployment   |   Development   |   Troubleshooting

Quickstart

To quickly run OpenRAG without creating or modifying any project files, use uvx:

uvx openrag

This runs OpenRAG without installing it to your project or globally. To run a specific version of OpenRAG, add the version to the command, such as: uvx --from openrag==0.1.25 openrag.
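The two invocations above as a copy-paste block (0.1.25 is only the example version used in this README; substitute any published release):

```shell
# Run the latest published OpenRAG in an ephemeral environment,
# without installing it into your project or globally:
uvx openrag

# Run a specific pinned version instead:
uvx --from openrag==0.1.25 openrag
```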

Install Python package

To set up a project and install the OpenRAG Python package into it, do the following:

  1. Create a new project with a virtual environment using uv init.

    uv init YOUR_PROJECT_NAME
    cd YOUR_PROJECT_NAME
    

    Your shell prompt won't show (venv), but uv commands automatically use the project's virtual environment. For more information on virtual environments, see the uv documentation.

  2. Add OpenRAG to your project.

    uv add openrag
    

    To add a specific version of OpenRAG:

    uv add openrag==0.1.25
    
  3. Start the OpenRAG TUI.

    uv run openrag
    
  4. Continue with the Quickstart.
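The four steps above collapse into one shell session. YOUR_PROJECT_NAME is a placeholder; the version pin is optional:

```shell
# Create a new uv-managed project with its own virtual environment:
uv init YOUR_PROJECT_NAME
cd YOUR_PROJECT_NAME

# Add OpenRAG as a dependency (pin a version if you need one):
uv add openrag            # or: uv add openrag==0.1.25

# Launch the OpenRAG TUI inside the project's environment:
uv run openrag
```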

For the full TUI installation guide, see TUI.

Docker or Podman installation

For more information, see Install OpenRAG containers.
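For a compose-based run, a minimal sketch, assuming you have cloned the repository, which ships docker-compose.yml (and docker-compose-cpu.yml for CPU-only hosts):

```shell
# Start the OpenRAG stack in the background:
docker compose up -d

# Follow startup logs until the services are healthy:
docker compose logs -f

# Stop and remove the containers when done:
docker compose down
```

With Podman, `podman compose` accepts the same subcommands.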

Troubleshooting

For common issues and fixes, see Troubleshoot.

Development

For developers wanting to contribute to OpenRAG or set up a development environment, see CONTRIBUTING.md.