Commit graph

504 commits

phact
f242a700a8 crank langflow client config to 20 minutes 2025-11-23 19:33:28 -05:00
phact
2322c0e14f longer notifications 2025-11-21 17:25:40 -05:00
phact
406b783a1a conditional buttons and copy 2025-11-21 17:14:05 -05:00
phact
9e4502d07b pick up env vars for config (not just existing .env files) 2025-11-21 16:59:26 -05:00
phact
84c0c8b4ed copy 2025-11-21 14:39:36 -05:00
Edwin Jose
933e600e9d Add support for Anthropic, Ollama, and Watsonx config
Introduces fields and validation for Anthropic API key, Ollama endpoint, and IBM watsonx.ai API key, endpoint, and project ID in environment management and configuration screens. Updates validation utilities and config UI to support these providers, allowing users to set and validate credentials and endpoints for additional AI services.
2025-11-21 14:00:55 -05:00
phact
ab3c57705a opensearch volume 2025-11-20 13:44:18 -05:00
pushkala-datastax
c7a8e89132 Merge pull request #421 from langflow-ai/fix-gdrive-connector
fix: Improve the Google Drive / Sharepoint / OneDrive connector's validation and sync
2025-11-19 14:22:32 -08:00
Cole Goldsmith
f6e6aa43a2 Feat/provider improvements (#422)
* remove connection dot indicators on settings page, better toast message for provider setup dialogs, fix typo in default agent prompt

* format

* open llm model select when toast button to settings is clicked
2025-11-19 15:20:27 -06:00
Eric Hare
0394df2052 Fix sharepoint and onedrive connectors 2025-11-19 12:39:39 -08:00
Eric Hare
b8a0f41d61 Merge branch 'main' into fix-gdrive-connector 2025-11-19 11:59:50 -08:00
Lucas Oliveira
141a7da339 update template when provider updates 2025-11-19 16:18:26 -03:00
Eric Hare
856a1d141b fix: Improve the Google Drive connector 2025-11-19 11:04:09 -08:00
Eric Hare
cfe7f6b581 fix: Make sure we exclude the warmup file ingestion 2025-11-18 12:07:38 -08:00
Lucas Oliveira
c295431484 fix: refactor models validation to fix bugs related to ollama, watsonx and openai (#406)
* Fixed models service to try api key with first available model

* fixed ibm onboarding to not disable query when no data is available

* make ibm query disabled when not configured

* enable ollama query only when configured or endpoint present

* enable get openai models query when already configured

* just enable get from env when not configured

* Simplify ollama models validation

* fix max_tokens error on gpt 4o
2025-11-14 18:09:47 -03:00
Lucas Oliveira
3a6a05d043 Fix: reduce docling and provider banner refresh interval, implemented Starting state on docling TUI (#404)
* Fixed refetch interval to be 3 seconds when Docling is unhealthy, fixed query to refetch on window focus

* Changed time to refetch provider health

* Added starting state to Docling on the TUI
2025-11-14 17:25:22 -03:00
Lucas Oliveira
e93febf391 fix: make tui status check with podman, change opensearch password validation (#394)
* Fixed welcome screen using Docker instead of Podman to check for services

* fixed password generator to always generate with symbols

* Fixed config to auto generate password and to not let the user input invalid passwords
2025-11-14 16:43:55 -03:00
Cole Goldsmith
1385fd5d5c better settings form validation, grouped model selection (#383)
* better form validation, grouped model selection

* bump version

* fix fe build issue

* fix test

* change linting error

* Fixed integration tests

* fixed tests

* sample commit

---------

Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>
2025-11-11 22:39:59 -03:00
Lucas Oliveira
37faf94979 feat: adds anthropic provider, splits onboarding editing into two, support provider changing with generic llm and embedding components (#373)
* Added flows with new components

* commented model provider assignment

* Added agent component display name

* commented provider assignment, assign provider on the generic component, assign custom values

* fixed ollama not showing loading steps, fixed loading steps never being removed

* made embedding and llm model optional on onboarding call

* added isEmbedding handling on useModelSelection

* added isEmbedding on onboarding card, separating embedding from non embedding card

* Added one additional step to configure embeddings

* Added embedding provider config

* Changed settings.py to return if not embedding

* Added editing fields to onboarding

* updated onboarding and flows_service to change embedding and llm separately

* updated templates that need to be changed with provider values

* updated flows with new components

* Changed config manager to not have default models

* Changed flows_service settings

* Complete steps if not embedding

* Add more onboarding steps

* Removed one step from llm steps

* Added Anthropic as a model for the language model on the frontend

* Added anthropic models

* Added anthropic support on Backend

* Fixed provider health and validation

* Format settings

* Change anthropic logo

* Changed button to not jump

* Changed flows service to make anthropic work

* Fixed some things

* add embedding specific global variables

* updated flows

* fixed ingestion flow

* Implemented anthropic on settings page

* add embedding provider logo

* updated backend to work with multiple provider config

* update useUpdateSettings with new settings type

* updated provider health banner to check for health with new api

* changed queries and mutations to use new api

* changed embedding model input to work with new api

* Implemented provider based config on the frontend

* update existing design

* fixed settings configured

* fixed provider health query to include health check for both the providers

* Changed model-providers to show correctly the configured providers

* Updated prompt

* updated openrag agent

* Fixed settings to allow editing providers and changing llm and embedding models

* updated settings

* changed lf ver

* bump openrag version

* added more steps

* update settings to create the global variables

* updated steps

* updated default prompt

---------

Co-authored-by: Sebastián Estévez <estevezsebastian@gmail.com>
2025-11-11 19:22:16 -03:00
Lucas Oliveira
a5d25e0c0b fix: disable upload message when ingesting on onboarding, wait for file to be ingested, added knowledge filters on nudges (#345)
* Removed upload start message

* Made onboarding upload refetch nudges and only finish when document is ingested

* Implemented query filters on nudges

* changed get to post

* Implemented filtering for documents that are not sample data on nudges

---------

Co-authored-by: Sebastián Estévez <estevezsebastian@gmail.com>
2025-11-11 18:20:39 -03:00
phact
75c1ea1cfe system prompt to avoid hallucinations 2025-11-10 15:49:06 -05:00
phact
4f2fd0b2d4 tui service status parse fix 2025-11-10 12:37:41 -05:00
Cole Goldsmith
b88c8b20df Feat/provider validation banner (#353)
* models query combined

* make endpoint to handle provider health

* provider health banner

* update-pdf-to-include-provider-selection (#344)

* polishing the error fixing experience

* fix agent instructions and up char limit

* fix provider

* disable tracing in langflow

* improve docling serve banner remove false positives

* Changed pyproject.toml docling versions

* Added another uv lock revision

* version bump

* unused things and fix bad conflicts

* add isFetching to the hook

* put back settings for models queries to never cache results

* update banner refetching indicator

* validate provider settings when saving

* fix settings page layout issue

* Added retry as false on the get models, to not take a long time

---------

Co-authored-by: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Co-authored-by: Mike Fortman <michael.fortman@datastax.com>
Co-authored-by: phact <estevezsebastian@gmail.com>
Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>
2025-11-06 13:03:50 -06:00
Sebastián Estévez
380e7f1fad Merge pull request #362 from langflow-ai/improve-gpu-detection
improve gpu detection
2025-11-05 12:12:43 -08:00
Sebastián Estévez
96971a0572 Merge pull request #359 from langflow-ai/bug/358-replicas-zero
bug: Adjust replicas to zero as we are in single server mode. Closes …
2025-11-05 11:58:18 -08:00
phact
8ac2575015 improve gpu detection 2025-11-05 14:33:15 -05:00
phact
992a08fda6 better compose detection 2025-11-05 14:27:56 -05:00
zznate
088ddfa6c5 bug: Adjust replicas to zero as we are in single server mode. Closes #358. 2025-11-05 15:10:32 +13:00
Mike Fortman
69d2132a33 fix provider 2025-11-04 12:59:23 -06:00
Mike Fortman
a02e500183 fix agent instructions and up char limit 2025-11-04 12:00:19 -06:00
Sebastián Estévez
28f417ab5c Merge branch 'main' into tui-optional-openai-key 2025-10-31 15:54:18 -04:00
phact
563efd957f lazy client initialization + client cleanup + http2 probe and fallback 2025-10-31 15:52:10 -04:00
Cole Goldsmith
2d31c4b9b0 Feat/278 Edit current model provider settings (#307)
* update settings update api to allow changing model provider config

* use react hook form

* make settings page small width

* re-use the onboarding forms instead of rolling a custom one

* issue

* remove test

* make custom forms with react-hook-form

* replace the updateFlow mutation with updateSettings

* show all the model providers

* revert changes to onboarding forms

* disabled state styles for providers

* break model selectors into their own file

* use existing selector component, use settings endpoint instead of onboarding, clean up form styles

* revert changes to openai onboarding

* small form changes
2025-10-31 13:22:51 -05:00
Lucas Oliveira
e02ea85431 Changed default llm model to be gpt 4o (#334) 2025-10-31 12:17:47 -03:00
Lucas Oliveira
16dbc31cc6 Delete unused models (#333) 2025-10-30 15:06:44 -03:00
Lucas Oliveira
cece8a91d5 check if model is embedding by testing it (#332) 2025-10-30 15:03:23 -03:00
Lucas Oliveira
b9ea9c99f1 fix: fixed bugs on ollama integration, added ingestion on onboarding (#330)
* Updated ollama components

* Changed ollama display name to be correct

* Changed prompt of provider validation

* removed event dispatched from file upload

* Changed onboarding to upload the entire knowledge

* Changed default models for ollama
2025-10-30 09:02:06 -03:00
phact
80fdd9680d make openai optional in tui and lazy client creation in backend 2025-10-29 22:38:31 -04:00
Lucas Oliveira
7b635df9d0 fix: added better onboarding error handling, added probing api keys and models (#326)
* Added error showing to onboarding card

* Added error state on animated provider steps

* removed toast on error

* Fixed animation on onboarding card

* fixed animation time

* Implemented provider validation

* Added provider validation before ingestion

* Changed error border

* remove log

---------

Co-authored-by: Mike Fortman <michael.fortman@datastax.com>
2025-10-29 15:59:10 -03:00
phact
6b71fe4f69 copy 2025-10-28 14:04:09 -04:00
phact
a9ac9d0894 message 2025-10-28 14:02:13 -04:00
phact
ceb426e1c0 exit 2025-10-28 13:59:19 -04:00
phact
dc55671191 windows check 2025-10-28 13:26:40 -04:00
phact
efa4b91736 update symlinks 2025-10-27 16:59:00 -04:00
phact
e3353bb0f8 loosen reconfigure check 2025-10-24 04:11:58 -04:00
Lucas Oliveira
cfd28ede6e Fix backend file context upload 2025-10-23 18:30:35 -03:00
Lucas Oliveira
fcf7a302d0 feat: adds "what is OpenRAG" prompt, refactors chat design, adds scroll to bottom on chat, adds streaming support (#283)
* Changed prompts to include info about OpenRAG, change status of As Dataframe and As Vector Store to false on OpenSearch component

* added markdown to onboarding step

* added className to markdown renderer

* changed onboarding step to not render span

* Added nudges to onboarding content

* Added onboarding style for nudges

* updated user message and assistant message designs

* updated route.ts to handle streaming messages

* created new useChatStreaming to handle streaming

* changed useChatStreaming to work with the chat page

* changed onboarding content to use default messages instead of onboarding steps, and to use the new hook to send messages

* added span to the markdown renderer on stream

* updated page to use new chat streaming hook

* disable animation on completed steps

* changed markdown renderer margins

* changed css to not display markdown links and texts on white always

* added isCompleted to assistant and user messages

* removed space between elements on onboarding step to ensure smoother animation

* removed opacity 50 on onboarding messages

* changed default api to be langflow on chat streaming

* added fade in and color transition

* added color transition

* Rendered onboarding with use-stick-to-bottom

* Added use stick to bottom on page

* fixed nudges design

* changed chat input design

* fixed nudges design

* made overflow be hidden on main

* Added overflow y auto on other pages

* Put animate on messages

* Add source to types

* Adds animate and delay props to messages
2025-10-22 14:03:23 -03:00
phact
163d313849 ingest should use task tracker 2025-10-16 20:52:44 -04:00
phact
77edef26f7 fix conftest and more optionals 2025-10-14 12:17:07 -04:00
phact
9674021fae v0.1.24 2025-10-14 12:15:45 -04:00