fix: Resolve issues with GPT5 models (#1483)

<!-- .github/pull_request_template.md -->

## Description
Resolves issues with GPT-5 models for structured outputs by forcing JSON mode (`instructor.Mode.JSON_SCHEMA`) in the instructor client and removing the hardcoded `reasoning_effort="minimal"` parameter.
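
For context, a minimal sketch of the change in isolation: patching litellm through instructor with JSON schema mode forced, as the adapter now does. The Pydantic schema, prompt, and model id below are illustrative placeholders, not code from this PR.

```python
import instructor
import litellm
from pydantic import BaseModel


class ExampleSchema(BaseModel):
    # Illustrative response model; the adapter passes `response_model` through from callers.
    title: str
    summary: str


# Force JSON schema mode instead of instructor's default tool-calling mode,
# mirroring the OpenAIAdapter change in this PR.
client = instructor.from_litellm(litellm.completion, mode=instructor.Mode.JSON_SCHEMA)

result = client.chat.completions.create(
    model="gpt-5",  # placeholder model id
    response_model=ExampleSchema,
    messages=[{"role": "user", "content": "Summarize this PR in one sentence."}],
)
print(result.model_dump())
```

Instructor's default mode relies on tool calling; forcing `Mode.JSON_SCHEMA` requests plain JSON output instead, which is what addresses the structured-output failures seen with GPT-5 models here.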

## Type of Change
<!-- Please check the relevant option -->
- [x] Bug fix (non-breaking change that fixes an issue)
- [ ] New feature (non-breaking change that adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to change)
- [ ] Documentation update
- [ ] Code refactoring
- [ ] Performance improvement
- [ ] Other (please specify):

## Screenshots/Videos (if applicable)
<!-- Add screenshots or videos to help explain your changes -->

## Pre-submission Checklist
<!-- Please check all boxes that apply before submitting your PR -->
- [x] **I have tested my changes thoroughly before submitting this PR**
- [x] **This PR contains minimal changes necessary to address the
issue/feature**
- [x] My code follows the project's coding standards and style
guidelines
- [x] I have added tests that prove my fix is effective or that my
feature works
- [x] I have added necessary documentation (if applicable)
- [x] All new and existing tests pass
- [x] I have searched existing PRs to ensure this change hasn't been
submitted already
- [x] I have linked any relevant issues in the description
- [x] My commits have clear and descriptive messages

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to
the terms of the Topoteretes Developer Certificate of Origin.

```diff
@@ -29,9 +29,6 @@ observe = get_observe()
 logger = get_logger()
-# litellm to drop unsupported params, e.g., reasoning_effort when not supported by the model.
-litellm.drop_params = True
 class OpenAIAdapter(LLMInterface):
     """
@@ -76,8 +73,10 @@ class OpenAIAdapter(LLMInterface):
         fallback_api_key: str = None,
         fallback_endpoint: str = None,
     ):
-        self.aclient = instructor.from_litellm(litellm.acompletion)
-        self.client = instructor.from_litellm(litellm.completion)
+        self.aclient = instructor.from_litellm(
+            litellm.acompletion, mode=instructor.Mode.JSON_SCHEMA
+        )
+        self.client = instructor.from_litellm(litellm.completion, mode=instructor.Mode.JSON_SCHEMA)
         self.transcription_model = transcription_model
         self.model = model
         self.api_key = api_key
@@ -135,7 +134,6 @@ class OpenAIAdapter(LLMInterface):
                 api_version=self.api_version,
                 response_model=response_model,
                 max_retries=self.MAX_RETRIES,
-                reasoning_effort="minimal",
             )
         except (
             ContentFilterFinishReasonError,
@@ -223,7 +221,6 @@ class OpenAIAdapter(LLMInterface):
                 api_base=self.endpoint,
                 api_version=self.api_version,
                 response_model=response_model,
-                reasoning_effort="minimal",
                 max_retries=self.MAX_RETRIES,
             )
```