Introduces opensearch_multimodel.py, a new component supporting multi-model hybrid search and ingestion in OpenSearch with dynamic vector fields, parallel embedding, and advanced filtering. Refactors embedding generation in opensearch.py to use tenacity-based retry logic and IBM/Watsonx rate limiting. Updates related flow JSONs to integrate the new component.
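A minimal sketch of the retry-plus-rate-limit pattern described above, assuming tenacity's exponential backoff and a simple sliding-window limiter. `RateLimiter`, `embed_texts`, and the 8-calls-per-second budget are illustrative assumptions, not the actual opensearch.py code:

```python
# Sketch: tenacity-based retries around rate-limited embedding calls.
# The limiter settings and helper names are assumptions for illustration.
import threading
import time

from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential


class RateLimiter:
    """Sliding-window limiter: at most `max_calls` per `period` seconds."""

    def __init__(self, max_calls: int, period: float):
        self.max_calls = max_calls
        self.period = period
        self._calls: list[float] = []
        self._lock = threading.Lock()

    def acquire(self) -> None:
        with self._lock:
            now = time.monotonic()
            # Drop timestamps that have fallen outside the window.
            self._calls = [t for t in self._calls if now - t < self.period]
            if len(self._calls) >= self.max_calls:
                time.sleep(max(self.period - (now - self._calls[0]), 0))
            self._calls.append(time.monotonic())


# Hypothetical budget for IBM watsonx.ai embedding calls.
watsonx_limiter = RateLimiter(max_calls=8, period=1.0)


@retry(
    stop=stop_after_attempt(5),
    wait=wait_exponential(multiplier=1, min=1, max=30),
    # Real code would narrow this to transient / rate-limit errors.
    retry=retry_if_exception_type(Exception),
    reraise=True,
)
def embed_texts(embedding_model, texts: list[str]) -> list[list[float]]:
    """Embed a batch, retrying transient failures with exponential backoff."""
    watsonx_limiter.acquire()
    return embedding_model.embed_documents(texts)
```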
* Changed backend to mount config as a volume
* Updated lockfile
* Changed backend to reapply settings after detecting that a flow was reset
* Added periodic backup for flows and improved reset handling
* TUI warning
* Changed settings page to alert the user that the flow lock must be disabled
* Changed flows to be locked
* Do periodic backup only if onboarding is done
* Changed backup function to only back up flows if the flow lock is disabled (see the sketch below)
* Added session manager to reapply all settings
---------
Co-authored-by: Sebastián Estévez <estevezsebastian@gmail.com>
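A hedged sketch of how the periodic, guarded flow backup from the list above could fit together. `is_onboarding_done`, `is_flow_lock_enabled`, and `export_flows` are hypothetical helper names, not the real backend API:

```python
# Sketch: periodic flow backup gated on onboarding and the flow lock.
import asyncio
import json
import time
from pathlib import Path

BACKUP_DIR = Path("flow_backups")
BACKUP_INTERVAL_SECONDS = 300


async def periodic_flow_backup(service) -> None:
    """Back up flows on a timer, but only when it is safe to do so."""
    while True:
        await asyncio.sleep(BACKUP_INTERVAL_SECONDS)
        # Skip until onboarding has completed.
        if not service.is_onboarding_done():
            continue
        # Only back up when the flow lock is disabled, so a locked
        # (read-only) state is never overwritten by a stale copy.
        if service.is_flow_lock_enabled():
            continue
        BACKUP_DIR.mkdir(exist_ok=True)
        flows = service.export_flows()  # assumed JSON-serializable
        stamp = time.strftime("%Y%m%d-%H%M%S")
        (BACKUP_DIR / f"flows-{stamp}.json").write_text(json.dumps(flows, indent=2))
```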
Introduces a 'fail_safe_mode' option to the Embedding Model component, allowing errors to be logged and None returned instead of raising exceptions. Refactors embedding initialization logic for OpenAI, Ollama, and IBM watsonx.ai providers to support this mode, and updates UI configuration and metadata accordingly.
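A minimal sketch of the fail_safe_mode contract: log the error and return None instead of raising. The wrapper name `build_embeddings` and logger setup are illustrative; the real component wires this into its OpenAI, Ollama, and watsonx.ai initializers:

```python
# Sketch: fail-safe embedding initialization.
import logging

logger = logging.getLogger(__name__)


def build_embeddings(provider_factory, fail_safe_mode: bool = False):
    """Initialize an embedding model, optionally swallowing errors.

    `provider_factory` stands in for any zero-arg constructor, e.g.
    `lambda: OpenAIEmbeddings(model="text-embedding-3-small")`.
    """
    try:
        return provider_factory()
    except Exception as exc:
        if fail_safe_mode:
            # Log and degrade gracefully instead of failing the flow.
            logger.error("Embedding initialization failed: %s", exc)
            return None
        raise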
Replaces OpenSearchHybrid with OpenSearchVectorStoreComponentMultimodalMultiEmbedding in ingestion_flow.json, updating all relevant edges and embedding connections. Updates docker-compose.yml to use local builds for backend, frontend, and langflow, and improves environment variable handling for API keys. This refactor enables multi-model and multimodal embedding support for document ingestion and search.
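Supporting several embedding models in one index implies one knn_vector field per model, named and dimensioned at runtime. A hedged sketch of such a dynamic mapping; field names, dimensions, and HNSW parameters are illustrative, not the component's actual mapping:

```python
# Sketch: build an OpenSearch mapping with one knn_vector field per model.
def build_multimodel_mapping(models: dict[str, int]) -> dict:
    """`models` maps an embedding-model id to its vector dimension."""
    properties: dict = {"text": {"type": "text"}}
    for model_id, dim in models.items():
        properties[f"vector_{model_id}"] = {
            "type": "knn_vector",
            "dimension": dim,
            "method": {"name": "hnsw", "space_type": "cosinesimil", "engine": "lucene"},
        }
    return {"settings": {"index": {"knn": True}}, "mappings": {"properties": properties}}


# Example: two models with different dimensions share one index.
mapping = build_multimodel_mapping({"openai_small": 1536, "watsonx_slate": 768})
```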
Switches Docker Compose services to local builds for backend, frontend, and langflow. Updates embedding model component to support IBM watsonx.ai features, including input token truncation and original text output, adds new dependencies, and improves configuration options in ingestion and agent flows.
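For the watsonx.ai features above, the SDK exposes both behaviors as embed-text parameters. A hedged sketch using langchain_ibm; the model id, token limit, and credential wiring are illustrative values, not the component's configuration:

```python
# Sketch: watsonx.ai embeddings with input truncation and input-text echo.
from ibm_watsonx_ai.metanames import EmbedTextParamsMetaNames
from langchain_ibm import WatsonxEmbeddings

embed_params = {
    # Clip over-long inputs instead of erroring on the token limit.
    EmbedTextParamsMetaNames.TRUNCATE_INPUT_TOKENS: 512,
    # Return the original input text alongside each vector.
    EmbedTextParamsMetaNames.RETURN_OPTIONS: {"input_text": True},
}

embeddings = WatsonxEmbeddings(
    model_id="ibm/slate-125m-english-rtrvr",
    url="https://us-south.ml.cloud.ibm.com",
    apikey="...",          # normally injected via environment / global variables
    project_id="...",
    params=embed_params,
)
```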
* remove connection dot indicators on settings page, better toast message for provider setup dialogs, fix typo in default agent prompt
* format
* open llm model select when toast button to settings is clicked
* Added flows with new components
* Commented out model provider assignment
* Added agent component display name
* Commented out provider assignment; assign provider on the generic component and assign custom values
* Fixed Ollama not showing loading steps, fixed loading steps never being removed
* Made embedding and LLM model optional on the onboarding call
* added isEmbedding handling on useModelSelection
* Added isEmbedding on onboarding card, separating the embedding card from the non-embedding card
* Added one additional step to configure embeddings
* Added embedding provider config
* Changed settings.py to return early when not embedding
* Added editing fields to onboarding
* updated onboarding and flows_service to change embedding and llm separately
* Updated templates that need provider values
* updated flows with new components
* Changed config manager to not have default models
* Changed flows_service settings
* Complete steps if not embedding
* Add more onboarding steps
* Removed one step from llm steps
* Added Anthropic as a model for the language model on the frontend
* Added anthropic models
* Added anthropic support on Backend
* Fixed provider health and validation
* Format settings
* Change anthropic logo
* Changed button to not jump
* Changed flows service to make anthropic work
* Fixed some things
* add embedding-specific global variables
* updated flows
* fixed ingestion flow
* Implemented anthropic on settings page
* add embedding provider logo
* updated backend to work with multiple provider configs (see the sketch after this list)
* update useUpdateSettings with new settings type
* updated provider health banner to check for health with new api
* changed queries and mutations to use new api
* changed embedding model input to work with new api
* Implemented provider based config on the frontend
* update existing design
* fixed settings configured detection
* fixed provider health query to include health checks for both providers
* Changed model-providers to correctly show the configured providers
* Updated prompt
* updated openrag agent
* Fixed settings to allow editing providers and changing llm and embedding models
* updated settings
* Changed Langflow version
* bump openrag version
* added more steps
* update settings to create the global variables
* updated steps
* updated default prompt
---------
Co-authored-by: Sebastián Estévez <estevezsebastian@gmail.com>
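A hedged sketch of the settings shape this list implies: LLM and embedding providers configured and health-checked independently, with Anthropic available as an LLM provider (Anthropic does not offer embedding models). All field names here are assumptions, not the real settings schema:

```python
# Sketch: provider-based config with separate LLM and embedding providers.
from dataclasses import dataclass


@dataclass
class ProviderConfig:
    provider: str          # "openai" | "anthropic" | "ollama" | "watsonx"
    model: str
    api_key_variable: str  # name of the global variable holding the key


@dataclass
class Settings:
    llm: ProviderConfig
    embedding: ProviderConfig  # configured separately from the LLM

    def health(self) -> dict[str, bool]:
        # Real code would ping each provider; this only checks configuration.
        return {
            "llm": bool(self.llm.model and self.llm.api_key_variable),
            "embedding": bool(self.embedding.model and self.embedding.api_key_variable),
        }


settings = Settings(
    llm=ProviderConfig("anthropic", "claude-3-5-sonnet-latest", "ANTHROPIC_API_KEY"),
    embedding=ProviderConfig("openai", "text-embedding-3-small", "OPENAI_API_KEY"),
)
```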
* Changed prompts to include info about OpenRAG, changed the status of As Dataframe and As Vector Store to false on the OpenSearch component
* added markdown to onboarding step
* added className to markdown renderer
* changed onboarding step to not render span
* Added nudges to onboarding content
* Added onboarding style for nudges
* updated user message and assistant message designs
* updated route.ts to handle streaming messages (see the backend sketch after this list)
* created new useChatStreaming to handle streaming
* changed useChatStreaming to work with the chat page
* changed onboarding content to use default messages instead of onboarding steps, and to use the new hook to send messages
* added span to the markdown renderer on stream
* updated page to use new chat streaming hook
* disable animation on completed steps
* changed markdown renderer margins
* changed CSS so markdown links and text are not always displayed in white
* added isCompleted to assistant and user messages
* removed space between elements on onboarding step to ensure smoother animation
* removed opacity 50 on onboarding messages
* changed default api to be langflow on chat streaming
* added fade in and color transition
* added color transition
* Rendered onboarding with use-stick-to-bottom
* Added use-stick-to-bottom on page
* fixed nudges design
* changed chat input design
* fixed nudges design
* Made overflow hidden on main
* Added overflow-y: auto on other pages
* Put animate on messages
* Add source to types
* Adds animate and delay props to messages
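The streaming work above lives in the frontend (route.ts and useChatStreaming). Purely for orientation, a hedged Python sketch of the kind of backend endpoint such a route could proxy, assuming FastAPI and newline-delimited JSON chunks; none of this is the actual openrag backend:

```python
# Sketch: a streaming chat endpoint that a Next.js route could proxy.
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()


async def generate_tokens(prompt: str):
    # Stand-in for a real LLM stream: yield newline-delimited JSON chunks.
    for token in ["Hello", " from", " a", " streamed", " reply"]:
        yield f'{{"delta": "{token}"}}\n'
        await asyncio.sleep(0.05)


@app.post("/chat/stream")
async def chat_stream(payload: dict):
    return StreamingResponse(
        generate_tokens(payload.get("message", "")),
        media_type="application/x-ndjson",
    )
```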
Replaces the File component with a new OpenSearch hybrid search component in the ingestion flow, adds support for document metadata, and updates flow edges for DataFrame operations. Updates OpenSearch component implementation with advanced authentication, metadata handling, and vector store features. Docker Compose files and related service references are also updated to support the new OpenSearch integration.
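A hedged sketch of the kind of hybrid (lexical plus k-NN) query such a component might issue via opensearch-py; the index name, field names, credentials, and the normalization search pipeline are illustrative assumptions:

```python
# Sketch: OpenSearch hybrid query combining a match clause and a kNN clause.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=["https://localhost:9200"],
    http_auth=("admin", "admin"),  # the component supports richer auth schemes
    verify_certs=False,
)

query_vector = [0.1] * 1536  # would come from the embedding model

body = {
    "query": {
        "hybrid": {
            "queries": [
                {"match": {"text": "how do I reset a flow?"}},  # lexical leg
                {"knn": {"vector_openai_small": {"vector": query_vector, "k": 10}}},  # vector leg
            ]
        }
    },
    "size": 10,
}

# Hybrid scoring requires a search pipeline with a normalization processor.
results = client.search(index="documents", body=body,
                        params={"search_pipeline": "norm-pipeline"})
```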
Changed Dockerfile.langflow to use the 'test-openai-responses' branch. Improved file handling in ChatInput, updated ChatOutput to fix source property unpacking, and made AgentComponent use LCToolsAgentComponent._base_inputs. Also updated code hashes, dependency versions, and metadata timestamps in openrag_agent.json.
Switched OpenRAG backend and frontend in docker-compose.yml to use local Dockerfile builds instead of remote images. Updated environment variables for better clarity and system integration. In flows/openrag_agent.json and langflow_file_service, improved handling of docs_metadata to support Data objects and added logging for metadata ingestion. Added agent_llm edge to agent node in flow definition.
Updated the OpenSearchVectorStoreComponent to improve document metadata ingestion, including support for Data objects in docs_metadata. Added new edges and nodes to ingestion_flow.json for dynamic metadata input. Changed Dockerfile.langflow to use the fix-file-component branch.
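A hedged sketch of the docs_metadata handling described above: accept plain dicts or Langflow Data objects, merge them, and log what is ingested. The helper name is illustrative; `Data` with its `.data` payload follows langflow.schema:

```python
# Sketch: normalize docs_metadata that may mix dicts and Data objects.
import logging

from langflow.schema import Data

logger = logging.getLogger(__name__)


def normalize_docs_metadata(docs_metadata) -> dict:
    """Merge a list of dicts and/or Data objects into one metadata dict."""
    merged: dict = {}
    for item in docs_metadata or []:
        if isinstance(item, Data):
            merged.update(item.data)  # unwrap the Data payload
        elif isinstance(item, dict):
            merged.update(item)
    logger.info("Ingesting document metadata: %s", merged)
    return merged
```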