### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/9177

The root cause appears to be that Gemini internally uses a different parameter name, `max_output_tokens`. From the Gemini API docstring:

> ``max_output_tokens`` (int): Optional. The maximum number of tokens to include in a response candidate. Note: The default value varies by model, see the ``Model.output_token_limit`` attribute of the ``Model`` returned from the ``getModel`` function. This field is a member of `oneof`_ ``_max_output_tokens``.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
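A minimal sketch of the kind of renaming the fix requires: translating the OpenAI-style `max_tokens` key into Gemini's `max_output_tokens` before the generation config is passed on. The helper name and config shape here are hypothetical, for illustration only; the actual change lives in the chat model code.

```python
def normalize_gemini_gen_conf(gen_conf: dict) -> dict:
    """Return a copy of ``gen_conf`` with the OpenAI-style ``max_tokens``
    key renamed to Gemini's ``max_output_tokens``.

    Hypothetical helper sketching the fix; the real patch adjusts the
    parameter name where the Gemini client is invoked.
    """
    conf = dict(gen_conf)  # avoid mutating the caller's dict
    if "max_tokens" in conf:
        conf["max_output_tokens"] = conf.pop("max_tokens")
    return conf


# Example: a config written for OpenAI-compatible models
conf = normalize_gemini_gen_conf({"temperature": 0.7, "max_tokens": 512})
print(conf)  # {'temperature': 0.7, 'max_output_tokens': 512}
```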