* added telemetry utils
* added telemetry to openrag
* fixed http timeout
* Added OS and GPU logging
* Track task fail and cancel
* Updated messages to be more readable
* Changed backend to mount config as a volume
* update lock
* Changed backend to reapply settings after detecting that flow is reset
* Added periodic backup for flows and improved the reset logic
* Added TUI warning
* Changed settings page to alert the user that they must disable the flow lock
* Changed flows to be locked
* Do periodic backup only if onboarding is done
* Change backup function to only back up flows if flow lock is disabled
* Added session manager to reapply all settings
---------
Co-authored-by: Sebastián Estévez <estevezsebastian@gmail.com>
Introduces a 'fail_safe_mode' option to the Embedding Model and OpenSearch (Multi-Model Multi-Embedding) components, allowing errors to be logged and None returned instead of raising exceptions. Refactors embedding model fetching logic for better error handling and updates component metadata, field order, and dependencies. Also adds 'className' fields and updates frontend node folder IDs for improved UI consistency.
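A minimal sketch of the 'fail_safe_mode' behavior this paragraph describes — log the error and return None instead of raising. The function names and provider set are assumptions for illustration, not the components' real API:

```python
import logging

logger = logging.getLogger(__name__)

def _build_embedding_model(provider: str):
    # Placeholder for the real embedding component construction.
    if provider not in {"openai", "ollama", "watsonx"}:
        raise ValueError(f"Unknown provider: {provider}")
    return {"provider": provider}

def get_embedding_model(provider: str, fail_safe_mode: bool = False):
    """Fetch an embedding model; in fail-safe mode, log and return None
    rather than propagating the exception."""
    try:
        return _build_embedding_model(provider)
    except Exception as exc:
        if fail_safe_mode:
            logger.error("Embedding model fetch failed: %s", exc)
            return None
        raise
```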
Replaces all references to 'OpenSearchHybrid-Ve6bS' with 'OpenSearchVectorStoreComponentMultimodalMultiEmbedding-By9U4' in main.py, processors, and file service. Adds a utility for injecting provider credentials into Langflow request headers and integrates it into chat and file services for improved credential handling.
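The credential-injection utility mentioned above might look like the sketch below: copy the outgoing Langflow headers and add provider credentials. The header names and credential keys here are illustrative assumptions:

```python
def inject_provider_credentials(headers: dict, credentials: dict) -> dict:
    """Return a copy of the Langflow request headers with provider
    credentials injected (header names are illustrative)."""
    out = dict(headers)  # don't mutate the caller's headers
    if credentials.get("api_key"):
        out["X-Provider-Api-Key"] = credentials["api_key"]
    if credentials.get("endpoint"):
        out["X-Provider-Endpoint"] = credentials["endpoint"]
    return out
```

Returning a copy lets the chat and file services share one base header dict safely.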
* Added flows with new components
* commented out model provider assignment
* Added agent component display name
* commented out provider assignment; assign the provider on the generic component and assign custom values
* fixed ollama not showing loading steps, fixed loading steps never being removed
* made embedding and llm model optional on onboarding call
* added isEmbedding handling on useModelSelection
* added isEmbedding on onboarding card, separating embedding from non embedding card
* Added one additional step to configure embeddings
* Added embedding provider config
* Changed settings.py to return early if not embedding
* Added editing fields to onboarding
* updated onboarding and flows_service to change embedding and llm separately
* updated templates that need to be populated with provider values
* updated flows with new components
* Changed config manager to not have default models
* Changed flows_service settings
* Complete steps if not embedding
* Add more onboarding steps
* Removed one step from llm steps
* Added Anthropic as a model for the language model on the frontend
* Added anthropic models
* Added Anthropic support on the backend
* Fixed provider health and validation
* Format settings
* Change anthropic logo
* Changed button to not jump
* Changed flows service to make anthropic work
* Fixed some things
* add embedding specific global variables
* updated flows
* fixed ingestion flow
* Implemented anthropic on settings page
* add embedding provider logo
* updated backend to work with multiple provider config
* update useUpdateSettings with new settings type
* updated provider health banner to check for health with new api
* changed queries and mutations to use new api
* changed embedding model input to work with new api
* Implemented provider based config on the frontend
* update existing design
* fixed settings configured
* fixed provider health query to include health check for both the providers
* Changed model-providers to show correctly the configured providers
* Updated prompt
* updated openrag agent
* Fixed settings to allow editing providers and changing llm and embedding models
* updated settings
* changed Langflow version
* bump openrag version
* added more steps
* update settings to create the global variables
* updated steps
* updated default prompt
---------
Co-authored-by: Sebastián Estévez <estevezsebastian@gmail.com>
* Removed upload start message
* Made onboarding upload refetch nudges and only finish when document is ingested
* Implemented query filters on nudges
* changed GET to POST
* Implemented filtering for documents that are not sample data on nudges
---------
Co-authored-by: Sebastián Estévez <estevezsebastian@gmail.com>
* models query combined
* make endpoint to handle provider health
* provider health banner
* update-pdf-to-include-provider-selection (#344)
* polishing the error fixing experience
* fix agent instructions and up char limit
* fix provider
* disable tracing in langflow
* improve Docling Serve banner; remove false positives
* Changed pyproject.toml docling versions
* Added another uv lock revision
* version bump
* removed unused code and fixed bad merge conflicts
* add isFetching to the hook
* put back settings for models queries to never cache results
* update banner refetching indicator
* validate provider settings when saving
* fix settings page layout issue
* Set retry to false on the get-models query so failed requests don't take a long time
---------
Co-authored-by: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Co-authored-by: Mike Fortman <michael.fortman@datastax.com>
Co-authored-by: phact <estevezsebastian@gmail.com>
Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>
* Initial plan
* Implement dynamic Ollama embedding dimension resolution with probing
Co-authored-by: phact <1313220+phact@users.noreply.github.com>
* Fix Ollama probing
* raise instead of returning 0 dimensions
* Show better error
* Run embedding probe before saving settings so that user can update
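The probing approach above (resolve the Ollama embedding dimension dynamically, raise rather than fall back to 0, run before saving settings) can be sketched as follows. `embed_fn` stands in for the real Ollama embeddings call; the function name is a hypothetical illustration:

```python
def probe_embedding_dims(embed_fn, model: str) -> int:
    """Determine a model's embedding dimension by embedding a short probe
    string and measuring the returned vector's length. Raises instead of
    silently reporting 0 dims, per the commits above."""
    vector = embed_fn(model, "dimension probe")
    dims = len(vector or [])
    if dims == 0:
        raise ValueError(f"could not determine embedding dimensions for {model!r}")
    return dims
```

Running this before persisting settings gives the user an immediate, actionable error while they can still fix the endpoint or model name.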
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: phact <1313220+phact@users.noreply.github.com>
Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>
Co-authored-by: phact <estevezsebastian@gmail.com>
* hard-coded openai models
* ensure the index exists when "disable ingest with Langflow" is active
* update backend to not update embedding model when flag is disabled
* initialize index on startup when feature flag is enabled
* put config.yaml on docker compose
* changed tooltip style
* added start on label wrapper
* changed switch to checkbox on openai onboarding and changed copies
* made border be red when api key is invalid
* Added embedding configuration after onboarding
* changed openrag ingest docling to have same embedding model component as other flows
* changed flows service to get flow by id, not by path
* modify reset_langflow to also set the right embedding model
* added endpoint and project id to provider config
* added replacing the model with the provider model when resetting
* Moved consts to settings.py
* raise when flow_id is not found
Switched OpenRAG backend and frontend in docker-compose.yml to use local Dockerfile builds instead of remote images. Updated environment variables for better clarity and system integration. In flows/openrag_agent.json and langflow_file_service, improved handling of docs_metadata to support Data objects and added logging for metadata ingestion. Added agent_llm edge to agent node in flow definition.
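The improved `docs_metadata` handling described above — accepting both plain dicts and Langflow `Data` objects — could be normalized as in this sketch. The helper name is hypothetical; the `.data` attribute access assumes the Langflow `Data` convention:

```python
def normalize_docs_metadata(docs_metadata):
    """Accept both plain dicts and Langflow Data-style objects (which
    carry their payload in a `.data` dict) and return plain dicts."""
    normalized = []
    for item in docs_metadata or []:
        if hasattr(item, "data"):
            item = item.data  # unwrap a Data-style object
        normalized.append(dict(item))
    return normalized
```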
* implement delete user conversation on agent
* format
* implement delete session endpoint
* implement delete session on persistence services
* added deletion of sessions and fetched sessions with a query instead of a useEffect
* removed unused texts
* implemented dropdown menu on conversations
* fixed logos
* pre-populate ollama endpoint
* show/hide api keys with button
* Added dropdown selector for IBM endpoint
* Added programmatic dot pattern instead of background image
* Updated copies
* wait 500 ms before showing the connecting state
* Changed copy
* removed unused log
* Added padding when password button is present
* made the toggle trigger on mouse up
* show a "loading models" placeholder while models are loading
* removed description from model selector
* implemented reading the key from env
* fixed complete button not updating
Added LangflowMCPService to update MCP servers with the user's JWT header after authentication. AuthService now triggers a background update to MCP servers on successful login, ensuring JWT propagation for downstream services.
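The JWT-propagation flow above can be sketched as below: after a successful login, AuthService schedules an update that writes the user's JWT into each MCP server's request headers. Class structure and config shape are illustrative assumptions, not the real service:

```python
import asyncio

class LangflowMCPService:
    """Sketch: push the authenticated user's JWT into every registered
    MCP server's request headers."""

    def __init__(self):
        self.servers: dict[str, dict] = {}

    def register(self, name: str, config: dict) -> None:
        self.servers[name] = dict(config)

    async def update_jwt(self, jwt: str) -> None:
        # AuthService would schedule this in the background on login so
        # the login response isn't blocked on MCP server updates.
        for config in self.servers.values():
            config.setdefault("headers", {})["Authorization"] = f"Bearer {jwt}"
```

Running the update as a background task keeps login latency unaffected while still guaranteeing downstream MCP calls carry the fresh JWT.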