* remove connection dot indicators on settings page, better toast message for provider setup dialogs, fix typo in default agent prompt
* format
* open llm model select when toast button to settings is clicked
* Fixed models service to try api key with first available model
* fixed ibm onboarding to not disable query when no data is available
* make ibm query disabled when not configured
* enable ollama query only when configured or endpoint present
* enable get openai models query when already configured
* only enable the get-from-env query when not configured
* Simplify ollama models validation
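The conditional query enabling described in the commits above can be sketched as a pure predicate. `isQueryEnabled` and `ProviderConfig` are illustrative names, not the actual hook code:

```typescript
// Sketch: decide whether a provider's model-list query should run,
// mirroring the rules in the commits above (assumed field names).
type ProviderConfig = {
  configured: boolean;  // provider already set up in settings
  endpoint?: string;    // e.g. an Ollama base URL typed by the user
};

function isQueryEnabled(
  provider: "ollama" | "openai" | "ibm",
  cfg: ProviderConfig,
): boolean {
  switch (provider) {
    case "ollama":
      // run only when configured or an endpoint is present
      return cfg.configured || Boolean(cfg.endpoint);
    case "openai":
      // run only when already configured
      return cfg.configured;
    case "ibm":
      // disabled until configured
      return cfg.configured;
  }
}
```

The predicate would typically feed the `enabled` option of a TanStack Query `useQuery` call so the request never fires for an unconfigured provider.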
* fix max_tokens error on GPT-4o
* Fixed refetch interval to be 3 seconds when Docling is unhealthy, fixed query to refetch on window focus
* Changed time to refetch provider health
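The polling policy above can be sketched as a tiny helper: poll fast (3 seconds, per the commit) while Docling is unhealthy, and fall back to a slower interval when healthy. Only the 3-second value comes from the change description; the healthy interval is left as a parameter because no value is stated:

```typescript
// Sketch: refetch interval for the provider/Docling health query.
// 3s while unhealthy (from the commit); healthy interval is caller-chosen.
const UNHEALTHY_REFETCH_MS = 3_000;

function refetchInterval(isHealthy: boolean, healthyIntervalMs: number): number {
  return isHealthy ? healthyIntervalMs : UNHEALTHY_REFETCH_MS;
}
```

In a TanStack Query setup this would be passed as the `refetchInterval` option, alongside `refetchOnWindowFocus: true` for the window-focus behavior mentioned above.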
* Added starting state to Docling on the TUI
* Fixed welcome screen using Docker instead of Podman to check for services
* fixed password generator to always generate with symbols
* Fixed config to auto generate password and to not let the user input invalid passwords
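The "always generate with symbols" invariant can be enforced by overwriting one random position with a symbol after sampling from the full pool. Character sets, length, and the function name are illustrative, not the project's actual generator:

```typescript
// Sketch: password generator that guarantees at least one symbol.
const LETTERS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
const DIGITS = "0123456789";
const SYMBOLS = "!@#$%^&*_-";

function generatePassword(length = 16): string {
  const pool = LETTERS + DIGITS + SYMBOLS;
  const chars = Array.from(
    { length },
    () => pool[Math.floor(Math.random() * pool.length)],
  );
  // Force a symbol at a random position so the invariant always holds.
  const i = Math.floor(Math.random() * length);
  chars[i] = SYMBOLS[Math.floor(Math.random() * SYMBOLS.length)];
  return chars.join("");
}
```

(For real credentials, `crypto.getRandomValues` would be preferable to `Math.random`; the sketch only shows the symbol invariant.)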
* Added flows with new components
* commented model provider assignment
* Added agent component display name
* commented provider assignment, assign provider on the generic component, assign custom values
* fixed ollama not showing loading steps, fixed loading steps never being removed
* made embedding and llm model optional on onboarding call
* added isEmbedding handling on useModelSelection
* added isEmbedding on onboarding card, separating embedding from non embedding card
* Added one additional step to configure embeddings
* Added embedding provider config
* Changed settings.py to return if not embedding
* Added editing fields to onboarding
* updated onboarding and flows_service to change embedding and llm separately
* updated templates that need to be updated with provider values
* updated flows with new components
* Changed config manager to not have default models
* Changed flows_service settings
* Complete steps if not embedding
* Add more onboarding steps
* Removed one step from llm steps
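The embedding-aware step list described across the commits above can be sketched as a builder that appends the extra embedding step only when needed (step names here are illustrative):

```typescript
// Sketch: build the onboarding step list, adding the embedding-config
// step only for embedding providers (assumed step names).
function buildOnboardingSteps(isEmbedding: boolean): string[] {
  const steps = ["choose-provider", "enter-credentials", "select-llm-model"];
  if (isEmbedding) {
    steps.push("configure-embeddings");
  }
  return steps;
}
```

This matches the described behavior of completing the flow early ("Complete steps if not embedding") while embedding providers get one additional configuration step.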
* Added Anthropic as a language model option on the frontend
* Added anthropic models
* Added anthropic support on Backend
* Fixed provider health and validation
* Format settings
* Change anthropic logo
* Changed button to not jump
* Changed flows service to make anthropic work
* Fixed some things
* add embedding specific global variables
* updated flows
* fixed ingestion flow
* Implemented anthropic on settings page
* add embedding provider logo
* updated backend to work with multiple provider config
* update useUpdateSettings with new settings type
* updated provider health banner to check for health with new api
* changed queries and mutations to use new api
* changed embedding model input to work with new api
* Implemented provider based config on the frontend
* update existing design
* fixed settings configured
* fixed provider health query to include health check for both the providers
* Changed model-providers to show correctly the configured providers
* Updated prompt
* updated openrag agent
* Fixed settings to allow editing providers and changing llm and embedding models
* updated settings
* changed Langflow version
* bump openrag version
* added more steps
* update settings to create the global variables
* updated steps
* updated default prompt
---------
Co-authored-by: Sebastián Estévez <estevezsebastian@gmail.com>
* Removed upload start message
* Made onboarding upload refetch nudges and only finish when document is ingested
* Implemented query filters on nudges
* changed get to post
* Implemented filtering for documents that are not sample data on nudges
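The nudge filtering above can be sketched as a pure function over the ingested-document list; `isSample` and `ingested` are assumed field names, not the actual API shape:

```typescript
// Sketch: nudges should only consider user documents that finished
// ingesting, excluding the bundled sample data (assumed field names).
type IngestedDoc = { filename: string; isSample: boolean; ingested: boolean };

function nudgeCandidates(docs: IngestedDoc[]): IngestedDoc[] {
  return docs.filter((d) => d.ingested && !d.isSample);
}
```

Moving the endpoint from GET to POST (as in the commit above) is a common choice once a structured filter object has to travel in the request body.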
---------
Co-authored-by: Sebastián Estévez <estevezsebastian@gmail.com>
* models query combined
* make endpoint to handle provider health
* provider health banner
* update-pdf-to-include-provider-selection (#344)
* polishing the error fixing experience
* fix agent instructions and up char limit
* fix provider
* disable tracing in langflow
* improve docling serve banner; remove false positives
* Changed pyproject.toml docling versions
* Added another uv lock revision
* version bump
* removed unused things and fixed bad merge conflicts
* add isFetching to the hook
* put back settings for models queries to never cache results
* update banner refetching indicator
* validate provider settings when saving
* fix settings page layout issue
* Set retry to false on the get-models queries so failed requests don't take a long time
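The never-cache, never-retry behavior described above can be sketched as a shared options object. Field names follow TanStack Query v5 conventions; this is an assumed shape, not the project's actual query config:

```typescript
// Sketch: options for the get-models queries implied by the commits above:
// no retries (fail fast on a bad key/endpoint) and no caching.
const modelsQueryOptions = {
  retry: false as const, // do not retry a failing provider request
  staleTime: 0,          // results are always considered stale
  gcTime: 0,             // drop cached results immediately
};
```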
---------
Co-authored-by: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Co-authored-by: Mike Fortman <michael.fortman@datastax.com>
Co-authored-by: phact <estevezsebastian@gmail.com>
Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>
* update settings update api to allow changing model provider config
* use react hook form
* make settings page small width
* re-use the onboarding forms instead of rolling a custom one
* issue
* remove test
* make custom forms with react-hook-form
* replace the updateFlow mutation with updateSettings
* show all the model providers
* revert changes to onboarding forms
* disabled state styles for providers
* break model selectors into their own file
* use existing selector component, use settings endpoint instead of onboarding, clean up form styles
* revert changes to openai onboarding
* small form changes
* Updated ollama components
* Changed ollama display name to be correct
* Changed prompt of provider validation
* removed event dispatched from file upload
* Changed onboarding to upload to the entire knowledge base
* Changed default models for ollama
* Changed prompts to include info about OpenRAG; changed status of As Dataframe and As Vector Store to false on the OpenSearch component
* added markdown to onboarding step
* added className to markdown renderer
* changed onboarding step to not render span
* Added nudges to onboarding content
* Added onboarding style for nudges
* updated user message and assistant message designs
* updated route.ts to handle streaming messages
* created new useChatStreaming to handle streaming
* changed useChatStreaming to work with the chat page
* changed onboarding content to use default messages instead of onboarding steps, and to use the new hook to send messages
* added span to the markdown renderer on stream
* updated page to use new chat streaming hook
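The streaming handling above can be sketched as a chunk reducer. The wire format assumed here (one JSON object per line with a `delta` field) is purely illustrative, not the actual OpenRAG/Langflow protocol:

```typescript
// Sketch: fold one newline-delimited streaming chunk into the message
// text accumulated so far (assumed line format: {"delta": "..."}).
function appendChunk(current: string, rawLine: string): string {
  const line = rawLine.trim();
  if (line === "") return current; // keep-alive / blank lines are ignored
  const payload = JSON.parse(line) as { delta?: string };
  return current + (payload.delta ?? "");
}
```

A hook like the `useChatStreaming` mentioned above would typically read the response body as a stream, split it on newlines, and run each line through a reducer of this shape while updating React state.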
* disable animation on completed steps
* changed markdown renderer margins
* changed CSS so markdown links and text are not always displayed in white
* added isCompleted to assistant and user messages
* removed space between elements on onboarding step to ensure smoother animation
* removed opacity 50 on onboarding messages
* changed default api to be langflow on chat streaming
* added fade in and color transition
* added color transition
* Rendered onboarding with use-stick-to-bottom
* Added use stick to bottom on page
* fixed nudges design
* changed chat input design
* fixed nudges design
* made overflow be hidden on main
* Added overflow y auto on other pages
* Put animate on messages
* Add source to types
* Adds animate and delay props to messages