Commit graph

492 commits

Author SHA1 Message Date
Lucas Oliveira
141a7da339 update template when provider updates 2025-11-19 16:18:26 -03:00
Eric Hare
cfe7f6b581
fix: Make sure we exclude the warmup file ingestion 2025-11-18 12:07:38 -08:00
Lucas Oliveira
c295431484
fix: refactor models validation to fix bugs related to ollama, watsonx and openai (#406)
* Fixed models service to try api key with first available model

* fixed ibm onboarding to not disable query when no data is available

* make ibm query disabled when not configured

* enable ollama query only when configured or endpoint present

* enable get openai models query when already configured

* just enable get from env when not configured

* Simplify ollama models validation

* fix max_tokens error on gpt 4o
2025-11-14 18:09:47 -03:00
Lucas Oliveira
3a6a05d043
Fix: reduce Docling and provider banner refresh interval, implement Starting state for Docling in the TUI (#404)
* Fixed refetch interval to be 3 seconds when Docling is unhealthy, fixed query to refetch on window focus

* Changed time to refetch provider health

* Added starting state to Docling on the TUI
2025-11-14 17:25:22 -03:00
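A minimal sketch of the conditional polling described in the commit above, assuming the frontend uses TanStack Query v5 (later commits reference queries and mutations); the hook name, endpoint, and response shape are illustrative assumptions, not the project's actual code.

    import { useQuery } from "@tanstack/react-query";

    // Hypothetical health response shape; the real API may differ.
    type DoclingHealth = { healthy: boolean };

    export function useDoclingHealth() {
      return useQuery<DoclingHealth>({
        queryKey: ["docling-health"],
        queryFn: async () => {
          const res = await fetch("/api/docling/health"); // assumed endpoint
          if (!res.ok) throw new Error(`health check failed: ${res.status}`);
          return res.json();
        },
        // Poll every 3 seconds only while Docling is unhealthy, as the commit describes.
        refetchInterval: (query) =>
          query.state.data?.healthy ? false : 3_000,
        // Also refetch when the window regains focus.
        refetchOnWindowFocus: true,
      });
    }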
Lucas Oliveira
e93febf391
fix: make tui status check with podman, change opensearch password validation (#394)
* Fixed welcome screen that was using Docker instead of Podman to check for services

* fixed password generator to always generate with symbols

* Fixed config to auto generate password and to not let the user input invalid passwords
2025-11-14 16:43:55 -03:00
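For the password change above, a minimal sketch of a generator that always includes at least one symbol and one digit, so generated credentials pass strict validation such as OpenSearch's admin-password rules; the character pools, length, and function name are illustrative, and the project's actual generator (likely in the TUI) may differ.

    import { randomInt } from "node:crypto";

    const LETTERS = "ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnpqrstuvwxyz";
    const DIGITS = "23456789";
    const SYMBOLS = "!@#$%^&*_-+=";

    function pick(pool: string): string {
      return pool[randomInt(pool.length)];
    }

    // Always seed one symbol and one digit, fill the rest from the full pool,
    // then shuffle so the guaranteed characters are not always at the front.
    export function generatePassword(length = 16): string {
      const all = LETTERS + DIGITS + SYMBOLS;
      const chars = [pick(SYMBOLS), pick(DIGITS)];
      while (chars.length < length) chars.push(pick(all));
      for (let i = chars.length - 1; i > 0; i--) {
        const j = randomInt(i + 1);
        [chars[i], chars[j]] = [chars[j], chars[i]];
      }
      return chars.join("");
    }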
Cole Goldsmith
1385fd5d5c
better settings form validation, grouped model selection (#383)
* better form validation, grouped model selection

* bump version

* fix fe build issue

* fix test

* change linting error

* Fixed integration tests

* fixed tests

* sample commit

---------

Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>
2025-11-11 22:39:59 -03:00
Lucas Oliveira
37faf94979
feat: adds anthropic provider, splits onboarding editing into two, supports provider changing with generic llm and embedding components (#373)
* Added flows with new components

* commented model provider assignment

* Added agent component display name

* commented provider assignment, assign provider on the generic component, assign custom values

* fixed ollama not showing loading steps, fixed loading steps never being removed

* made embedding and llm model optional on onboarding call

* added isEmbedding handling on useModelSelection

* added isEmbedding on onboarding card, separating embedding from non embedding card

* Added one additional step to configure embeddings

* Added embedding provider config

* Changed settings.py to return if not embedding

* Added editing fields to onboarding

* updated onboarding and flows_service to change embedding and llm separately

* updated templates that need to be changed with provider values

* updated flows with new components

* Changed config manager to not have default models

* Changed flows_service settings

* Complete steps if not embedding

* Add more onboarding steps

* Removed one step from llm steps

* Added Anthropic as a model for the language model on the frontend

* Added anthropic models

* Added anthropic support on Backend

* Fixed provider health and validation

* Format settings

* Change anthropic logo

* Changed button to not jump

* Changed flows service to make anthropic work

* Fixed some things

* add embedding specific global variables

* updated flows

* fixed ingestion flow

* Implemented anthropic on settings page

* add embedding provider logo

* updated backend to work with multiple provider config

* update useUpdateSettings with new settings type

* updated provider health banner to check for health with new api

* changed queries and mutations to use new api

* changed embedding model input to work with new api

* Implemented provider based config on the frontend

* update existing design

* fixed settings configured

* fixed provider health query to include health checks for both providers

* Changed model-providers to correctly show the configured providers

* Updated prompt

* updated openrag agent

* Fixed settings to allow editing providers and changing llm and embedding models

* updated settings

* changed lf ver

* bump openrag version

* added more steps

* update settings to create the global variables

* updated steps

* updated default prompt

---------

Co-authored-by: Sebastián Estévez <estevezsebastian@gmail.com>
2025-11-11 19:22:16 -03:00
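A hedged sketch of what splitting LLM and embedding provider configuration can look like, in the spirit of the commit above; the type names, fields, and example values below are illustrative assumptions, not the project's actual settings schema. Anthropic does not offer embedding models, which is one reason the embedding provider has to be configurable separately.

    // Hypothetical settings shape: separate configs for the LLM and embeddings.
    type Provider = "openai" | "anthropic" | "ollama" | "watsonx";

    interface ModelConfig {
      provider: Provider;
      model: string;
      apiKey?: string;   // not needed for local providers such as Ollama
      endpoint?: string; // e.g. an Ollama or watsonx base URL
    }

    interface Settings {
      llm: ModelConfig;       // chat/completions provider
      embedding: ModelConfig; // may differ from the LLM provider
    }

    // Example: Anthropic for the LLM, a different provider for embeddings.
    const example: Settings = {
      llm: { provider: "anthropic", model: "claude-3-5-sonnet-latest" },
      embedding: { provider: "openai", model: "text-embedding-3-small" },
    };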
Lucas Oliveira
a5d25e0c0b
fix: disable upload message when ingesting on onboarding, wait for file to be ingested, added knowledge filters on nudges (#345)
* Removed upload start message

* Made onboarding upload refetch nudges and only finish when document is ingested

* Implemented query filters on nudges

* changed get to post

* Implemented filtering for documents that are not sample data on nudges

---------

Co-authored-by: Sebastián Estévez <estevezsebastian@gmail.com>
2025-11-11 18:20:39 -03:00
phact
75c1ea1cfe system prompt to avoid hallucinations 2025-11-10 15:49:06 -05:00
phact
4f2fd0b2d4 tui service status parse fix 2025-11-10 12:37:41 -05:00
Cole Goldsmith
b88c8b20df
Feat/provider validation banner (#353)
* models query combined

* make endpoint to handle provider health

* provider health banner

* update-pdf-to-include-provider-selection (#344)

* polishing the error fixing experience

* fix agent instructions and up char limit

* fix provider

* disable tracing in langflow

* improve docling-serve banner, remove false positives

* Changed pyproject.toml docling versions

* Added another uv lock revision

* version bump

* unused things and fix bad conflicts

* add isFetching to the hook

* put back settings for models queries to never cache results

* update banner refetching indicator

* validate provider settings when saving

* fix settings page layout issue

* Added retry: false on the get-models query so it does not take a long time

---------

Co-authored-by: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Co-authored-by: Mike Fortman <michael.fortman@datastax.com>
Co-authored-by: phact <estevezsebastian@gmail.com>
Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>
2025-11-06 13:03:50 -06:00
Sebastián Estévez
380e7f1fad
Merge pull request #362 from langflow-ai/improve-gpu-detection
improve gpu detection
2025-11-05 12:12:43 -08:00
Sebastián Estévez
96971a0572
Merge pull request #359 from langflow-ai/bug/358-replicas-zero
bug: Adjust replicas to zero as we are in single server mode. Closes …
2025-11-05 11:58:18 -08:00
phact
8ac2575015 improve gpu detection 2025-11-05 14:33:15 -05:00
phact
992a08fda6 better compose detection 2025-11-05 14:27:56 -05:00
zznate
088ddfa6c5 bug: Adjust replicas to zero as we are in single server mode. Closes #358. 2025-11-05 15:10:32 +13:00
Mike Fortman
69d2132a33 fix provider 2025-11-04 12:59:23 -06:00
Mike Fortman
a02e500183 fix agent instructions and up char limit 2025-11-04 12:00:19 -06:00
Sebastián Estévez
28f417ab5c
Merge branch 'main' into tui-optional-openai-key 2025-10-31 15:54:18 -04:00
phact
563efd957f lazy client initialization + client cleanup + http2 probe and fallback 2025-10-31 15:52:10 -04:00
Cole Goldsmith
2d31c4b9b0
Feat/278 Edit current model provider settings (#307)
* update settings update api to allow changing model provider config

* use react hook form

* make settings page small width

* re-use the onboarding forms instead of rolling a custom one

* issue

* remove test

* make custom forms with react-hook-form

* replace the updateFlow mutation with updateSettings

* show all the model providers

* revert changes to onboarding forms

* disabled state styles for providers

* break model selectors into their own file

* use existing selector component, use settings endpoint instead of onboarding, clean up form styles

* revert changes to openai onboarding

* small form changes
2025-10-31 13:22:51 -05:00
Lucas Oliveira
e02ea85431
Changed default llm model to be gpt 4o (#334) 2025-10-31 12:17:47 -03:00
Lucas Oliveira
16dbc31cc6
Delete unused models (#333) 2025-10-30 15:06:44 -03:00
Lucas Oliveira
cece8a91d5
check if model is embedding by testing it (#332) 2025-10-30 15:03:23 -03:00
Lucas Oliveira
b9ea9c99f1
fix: fixed bugs on ollama integration, added ingestion on onboarding (#330)
* Updated ollama components

* Changed ollama display name to be correct

* Changed prompt of provider validation

* removed event dispatched from file upload

* Changed onboarding to upload the entire knowledge

* Changed default models for ollama
2025-10-30 09:02:06 -03:00
phact
80fdd9680d make openai optional in tui and lazy client creation in backend 2025-10-29 22:38:31 -04:00
Lucas Oliveira
7b635df9d0
fix: added better onboarding error handling, added probing of API keys and models (#326)
* Added error showing to onboarding card

* Added error state on animated provider steps

* removed toast on error

* Fixed animation on onboarding card

* fixed animation time

* Implemented provider validation

* Added provider validation before ingestion

* Changed error border

* remove log

---------

Co-authored-by: Mike Fortman <michael.fortman@datastax.com>
2025-10-29 15:59:10 -03:00
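A minimal sketch of probing a provider's API key by listing its models before kicking off ingestion, as the commit above describes; the endpoints, payloads, and function names are assumptions, not the project's actual API.

    // Probe the provider; throw before ingestion if the key or endpoint is bad.
    async function validateProvider(provider: string, apiKey: string): Promise<string[]> {
      const res = await fetch(`/api/providers/${provider}/models`, { // assumed endpoint
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ apiKey }),
      });
      if (!res.ok) {
        // Surface the failure to the onboarding card instead of a toast,
        // as the commit bullets above describe.
        throw new Error(`Provider validation failed (${res.status})`);
      }
      return res.json();
    }

    // Usage: only start ingestion once the key/model probe succeeds.
    async function startOnboardingIngestion(provider: string, apiKey: string) {
      await validateProvider(provider, apiKey);
      await fetch("/api/ingest", { method: "POST" }); // assumed ingestion trigger
    }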
phact
6b71fe4f69 copy 2025-10-28 14:04:09 -04:00
phact
a9ac9d0894 message 2025-10-28 14:02:13 -04:00
phact
ceb426e1c0 exit 2025-10-28 13:59:19 -04:00
phact
dc55671191 windows check 2025-10-28 13:26:40 -04:00
phact
efa4b91736 update symlinks 2025-10-27 16:59:00 -04:00
phact
e3353bb0f8 loosen reconfigure check 2025-10-24 04:11:58 -04:00
Lucas Oliveira
cfd28ede6e Fix backend file context upload 2025-10-23 18:30:35 -03:00
Lucas Oliveira
fcf7a302d0
feat: adds what is openrag prompt, refactors chat design, adds scroll to bottom on chat, adds streaming support (#283)
* Changed prompts to include info about OpenRAG, change status of As Dataframe and As Vector Store to false on OpenSearch component

* added markdown to onboarding step

* added className to markdown renderer

* changed onboarding step to not render span

* Added nudges to onboarding content

* Added onboarding style for nudges

* updated user message and assistant message designs

* updated route.ts to handle streaming messages

* created new useChatStreaming to handle streaming

* changed useChatStreaming to work with the chat page

* changed onboarding content to use default messages instead of onboarding steps, and to use the new hook to send messages

* added span to the markdown renderer on stream

* updated page to use new chat streaming hook

* disable animation on completed steps

* changed markdown renderer margins

* changed css to not display markdown links and texts on white always

* added isCompleted to assistant and user messages

* removed space between elements on onboarding step to ensure smoother animation

* removed opacity 50 on onboarding messages

* changed default api to be langflow on chat streaming

* added fade in and color transition

* added color transition

* Rendered onboarding with use-stick-to-bottom

* Added use stick to bottom on page

* fixed nudges design

* changed chat input design

* fixed nudges design

* made overflow be hidden on main

* Added overflow y auto on other pages

* Put animate on messages

* Add source to types

* Adds animate and delay props to messages
2025-10-22 14:03:23 -03:00
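A minimal sketch of consuming a streamed chat response chunk by chunk, in the spirit of the useChatStreaming hook described above; the /api/chat route and plain-text streaming format are assumptions, not the project's actual contract.

    // Read a streamed response body and hand each decoded chunk to the caller,
    // which can append it to the assistant message as it arrives.
    export async function streamChat(
      message: string,
      onChunk: (text: string) => void,
    ): Promise<void> {
      const res = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ message }),
      });
      if (!res.body) throw new Error("response has no body to stream");

      const reader = res.body.getReader();
      const decoder = new TextDecoder();
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        onChunk(decoder.decode(value, { stream: true }));
      }
    }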
phact
163d313849 ingest should use task tracker 2025-10-16 20:52:44 -04:00
phact
77edef26f7 fix conftest and more optionals 2025-10-14 12:17:07 -04:00
phact
9674021fae v0.1.24 2025-10-14 12:15:45 -04:00
phact
612a98f083 remove /upload endpoint, switch tests to router 2025-10-13 23:21:19 -04:00
phact
3998014561 fix document processing embedding model bug 2025-10-13 11:31:56 -04:00
phact
bba02910e6 await 2025-10-11 02:53:16 -04:00
Sebastián Estévez
b9f109ea7d
Merge branch 'main' into multi-embedding-support 2025-10-11 02:44:16 -04:00
phact
a7c5a9f8f3 fix: keyword type field name for search 2025-10-11 02:10:01 -04:00
phact
a424bb422a improve embedding generation timeout handling w/ retry and error handling 2025-10-11 01:06:14 -04:00
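A generic retry-with-backoff sketch illustrating the kind of timeout handling the commit above describes for embedding generation; the attempt count and delays are illustrative, and the project's backend is not necessarily structured this way.

    // Retry a flaky async call with exponential backoff between attempts.
    async function withRetry<T>(
      fn: () => Promise<T>,
      attempts = 3,
      baseDelayMs = 1000,
    ): Promise<T> {
      let lastError: unknown;
      for (let attempt = 0; attempt < attempts; attempt++) {
        try {
          return await fn();
        } catch (err) {
          lastError = err;
          // Backoff: 1s, 2s, 4s, ... before the next attempt.
          await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
        }
      }
      throw lastError;
    }

    // Usage: wrap the embedding request so transient timeouts are retried.
    // const vectors = await withRetry(() => embed(texts));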
phact
0c696afef8 make overquery optional 2025-10-11 00:59:45 -04:00
phact
5a4d5158bc crank langflow timeout 2025-10-11 00:52:56 -04:00
phact
aff70096ce .keyword fix 2025-10-11 00:48:09 -04:00
phact
81901c666b fix: ensure /settings changes embedding models across all flows 2025-10-11 00:05:07 -04:00
phact
12ae6d3fb1 ingest flow works multi-embedding 2025-10-10 22:14:51 -04:00
phact
59f45a2db7 fix: TUI should not pull containers on start, fixed image detection logic bug 2025-10-10 21:56:36 -04:00