* Fixed models service to try api key with first available model
* fixed ibm onboarding to not disable query when no data is available
* make ibm query disabled when not configured
* enable ollama query only when configured or endpoint present
* enable get openai models query when already configured
* just enable get from env when not configured
* Simplify ollama models validation
* fix max_tokens error on gpt-4o
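The gpt-4o fix above likely concerns the Chat Completions token-limit parameter: some newer OpenAI models reject `max_tokens` and require `max_completion_tokens` instead. A minimal sketch of choosing the right parameter when building the request payload (the helper name and model-family check are assumptions, not the project's actual code):

```python
def build_completion_kwargs(model: str, prompt: str, limit: int) -> dict:
    """Build Chat Completions kwargs, picking the token-limit parameter.

    Hypothetical helper: newer OpenAI model families may reject
    `max_tokens` in favor of `max_completion_tokens`.
    """
    kwargs = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    # Assumption for illustration: route newer families to the new parameter.
    if model.startswith(("o1", "o3", "gpt-4o")):
        kwargs["max_completion_tokens"] = limit
    else:
        kwargs["max_tokens"] = limit
    return kwargs
```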
* Added flows with new components
* commented out model provider assignment
* Added agent component display name
* commented out provider assignment; assign provider on the generic component, assign custom values
* fixed ollama not showing loading steps, fixed loading steps never being removed
* made embedding and llm model optional on onboarding call
* added isEmbedding handling on useModelSelection
* added isEmbedding on onboarding card, separating embedding from non-embedding cards
* Added one additional step to configure embeddings
* Added embedding provider config
* Changed settings.py to return if not embedding
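The settings.py change reads like an early-return guard: skip embedding-specific work when the settings being applied are not for an embedding model. A hypothetical sketch of that shape (function and field names are assumptions):

```python
def apply_model_settings(settings: dict, is_embedding: bool) -> dict:
    """Hypothetical settings helper: return early when not embedding,
    so only LLM fields are touched."""
    updated = dict(settings)
    updated["llm_model"] = settings.get("llm_model", "default-llm")
    if not is_embedding:
        # Early return: leave embedding fields untouched.
        return updated
    updated["embedding_model"] = settings.get("embedding_model", "default-embedding")
    return updated
```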
* Added editing fields to onboarding
* updated onboarding and flows_service to change embedding and llm separately
* updated templates that need provider values
* updated flows with new components
* Changed config manager to not have default models
* Changed flows_service settings
* Complete steps if not embedding
* Add more onboarding steps
* Removed one step from llm steps
* Added Anthropic as a language model provider on the frontend
* Added anthropic models
* Added anthropic support on Backend
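Adding a provider on the backend typically means registering it where provider names are dispatched. A sketch of one possible registry shape — the table, env-var names, and default models here are illustrative assumptions, not the project's actual values:

```python
# Hypothetical provider registry: Anthropic added alongside existing
# providers so backend code can dispatch by provider name.
PROVIDER_DEFAULTS = {
    "openai": {"env_key": "OPENAI_API_KEY", "default_model": "gpt-4o"},
    "ollama": {"env_key": None, "default_model": "llama3"},
    "anthropic": {
        "env_key": "ANTHROPIC_API_KEY",
        "default_model": "claude-3-5-sonnet-latest",
    },
}

def resolve_provider(name: str) -> dict:
    """Look up a provider's defaults, raising on unknown names."""
    try:
        return PROVIDER_DEFAULTS[name]
    except KeyError:
        raise ValueError(f"unsupported provider: {name}") from None
```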
* Fixed provider health and validation
* Format settings
* Change anthropic logo
* Changed button to not jump
* Changed flows service to make anthropic work
* Fixed some things
* add embedding specific global variables
* updated flows
* fixed ingestion flow
* Implemented anthropic on settings page
* add embedding provider logo
* updated backend to work with multiple provider config
* update useUpdateSettings with new settings type
* updated provider health banner to check for health with new api
* changed queries and mutations to use new api
* changed embedding model input to work with new api
* Implemented provider based config on the frontend
* update existing design
* fixed settings configured
* fixed provider health query to include health check for both the providers
* Changed model-providers to show correctly the configured providers
* Updated prompt
* updated openrag agent
* Fixed settings to allow editing providers and changing llm and embedding models
* updated settings
* changed langflow version
* bump openrag version
* added more steps
* update settings to create the global variables
* updated steps
* updated default prompt
---------
Co-authored-by: Sebastián Estévez <estevezsebastian@gmail.com>
* models query combined
* added endpoint to handle provider health
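A provider-health endpoint that feeds a banner (and, per a later bullet, checks both providers) plausibly probes the LLM and embedding providers separately and reports each. A hedged sketch with the actual probe injected so the logic stays testable; all names are assumptions:

```python
def provider_health(settings: dict, probe) -> dict:
    """Hypothetical health aggregator: check the LLM and the embedding
    provider independently and report each one.

    `probe` is an injected callable (e.g. an HTTP ping per provider).
    """
    result = {}
    for role in ("llm", "embedding"):
        provider = settings.get(f"{role}_provider")
        if provider is None:
            result[role] = {"provider": None, "healthy": False,
                            "reason": "not configured"}
            continue
        result[role] = {"provider": provider, "healthy": bool(probe(provider))}
    return result
```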
* provider health banner
* update-pdf-to-include-provider-selection (#344)
* polishing the error fixing experience
* fix agent instructions and up char limit
* fix provider
* disable tracing in langflow
* improve docling serve banner; remove false positives
* Changed pyproject.toml docling versions
* Added another uv lock revision
* version bump
* removed unused things and fixed bad merge conflicts
* add isFetching to the hook
* put back settings for models queries to never cache results
* update banner refetching indicator
* validate provider settings when saving
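Save-time validation of provider settings commonly means each provider declares its required fields and incomplete configs are rejected before persisting. A sketch under that assumption — the field table is illustrative, not the project's actual schema:

```python
# Hypothetical required-field table per provider.
REQUIRED_FIELDS = {
    "openai": ("api_key",),
    "anthropic": ("api_key",),
    "ollama": ("endpoint",),
    "ibm": ("api_key", "endpoint"),
}

def validate_provider_settings(provider: str, config: dict) -> list[str]:
    """Return the missing field names; an empty list means valid."""
    required = REQUIRED_FIELDS.get(provider, ())
    return [f for f in required if not config.get(f)]
```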
* fix settings page layout issue
* Set retry to false on the get models query so failures don't take a long time
---------
Co-authored-by: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Co-authored-by: Mike Fortman <michael.fortman@datastax.com>
Co-authored-by: phact <estevezsebastian@gmail.com>
Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>
* add container utils
* added localhost url to settings
* added localhost_url as a constant
* added localhost_url to get settings query
* make ollama onboarding have localhost url by default
* change the endpoint in the models service and onboarding backend instead of the onboarding screen
* fixed embedding dimensions to get stripped model
* make the config read as localhost but set the global variable to the transformed endpoint
* remove setting ollama url since it comes from the global variable
* use localhost again on ollama
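The localhost bullets above describe one pattern: show `http://localhost:11434` to the user, but when the backend runs inside a container, rewrite the host so the global variable points at something reachable. A hedged sketch — the rewrite target is an assumption (`host.docker.internal` works on Docker Desktop; other runtimes differ):

```python
from urllib.parse import urlparse, urlunparse

LOCALHOST_URL = "http://localhost:11434"  # constant shown in the UI

def to_container_endpoint(url: str, in_container: bool) -> str:
    """Rewrite localhost to a host alias reachable from inside a container."""
    if not in_container:
        return url
    parts = urlparse(url)
    if parts.hostname in ("localhost", "127.0.0.1"):
        port = f":{parts.port}" if parts.port else ""
        parts = parts._replace(netloc=f"host.docker.internal{port}")
    return urlunparse(parts)
```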
---------
Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>
* fixed logos
* pre-populate ollama endpoint
* show/hide api keys with button
* Added dropdown selector for ibm endpoint
* Added programmatic dot pattern instead of background image
* Updated copies
* wait 500ms before showing the connecting state
* Changed copy
* removed unused log
* Added padding when password button is present
* made toggle fire on mouse up
* show "loading models" placeholder while models are loading
* removed description from model selector
* implemented getting key from env
* fixed complete button not updating