Commit graph (1 commit)

Commit: 93ab7375cd
Author: claude[bot]
Date:   2025-07-09 14:59:17 +00:00

feat: eliminate ghost variables with configurable provider defaults system

Replace hardcoded DEFAULT_MODEL and DEFAULT_SMALL_MODEL constants across all LLM clients with a centralized, configurable provider defaults system.

Key changes:
- Created provider_defaults.py with centralized configuration for all providers (sketched below)
- Added environment variable support for easy customization (e.g., GEMINI_DEFAULT_MODEL)
- Updated all LLM clients to use configurable defaults instead of hardcoded constants
- Made max_tokens for edge-extraction operations configurable via EXTRACT_EDGES_MAX_TOKENS
- Updated cross-encoder reranker clients to use provider defaults
- Maintained full backward compatibility with existing configurations
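
A minimal sketch of what the provider_defaults.py lookup described above could look like. Everything here is an assumption for illustration: the class name, function name, built-in model strings, and default values are not taken from the actual module.

```python
# Illustrative sketch only -- names, model strings, and built-in values are
# assumptions, not the actual contents of provider_defaults.py.
import os
from dataclasses import dataclass


@dataclass
class ProviderDefaults:
    """Default LLM settings for a single provider."""
    model: str
    small_model: str
    max_tokens: int = 8192
    temperature: float = 0.0


# Built-in fallbacks per provider; the real module would cover every provider.
_BUILTIN: dict[str, ProviderDefaults] = {
    'openai': ProviderDefaults(model='gpt-4.1', small_model='gpt-4.1-mini'),
    'gemini': ProviderDefaults(model='gemini-2.5-flash', small_model='gemini-2.5-flash-lite'),
}


def get_provider_defaults(provider: str) -> ProviderDefaults:
    """Resolve defaults for a provider, letting {PROVIDER}_DEFAULT_* environment
    variables (e.g. GEMINI_DEFAULT_MODEL) override each built-in field."""
    base = _BUILTIN[provider]
    prefix = provider.upper()
    return ProviderDefaults(
        model=os.getenv(f'{prefix}_DEFAULT_MODEL', base.model),
        small_model=os.getenv(f'{prefix}_DEFAULT_SMALL_MODEL', base.small_model),
        max_tokens=int(os.getenv(f'{prefix}_DEFAULT_MAX_TOKENS', str(base.max_tokens))),
        temperature=float(os.getenv(f'{prefix}_DEFAULT_TEMPERATURE', str(base.temperature))),
    )
```

Under this sketch, an LLM client would call get_provider_defaults('gemini') instead of reading a hardcoded DEFAULT_MODEL constant.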

This resolves the issue where Gemini's flash-lite model has location constraints in Vertex AI that differ from the regular flash model, and users could not override the hardcoded defaults without editing source code.

Environment variables now supported:
- {PROVIDER}_DEFAULT_MODEL
- {PROVIDER}_DEFAULT_SMALL_MODEL  
- {PROVIDER}_DEFAULT_MAX_TOKENS
- {PROVIDER}_DEFAULT_TEMPERATURE
- {PROVIDER}_EXTRACT_EDGES_MAX_TOKENS
- EXTRACT_EDGES_MAX_TOKENS (global fallback; resolution sketched below)
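
A minimal sketch of how the edge-extraction limit might be resolved under the precedence implied above: the provider-specific variable first, then the global fallback, then a built-in default. The helper name and default value are illustrative, not the actual implementation.

```python
import os


def resolve_extract_edges_max_tokens(provider: str, builtin_default: int = 16384) -> int:
    """Illustrative lookup: {PROVIDER}_EXTRACT_EDGES_MAX_TOKENS wins over the
    global EXTRACT_EDGES_MAX_TOKENS, which wins over the built-in default."""
    for key in (f'{provider.upper()}_EXTRACT_EDGES_MAX_TOKENS', 'EXTRACT_EDGES_MAX_TOKENS'):
        value = os.getenv(key)
        if value is not None:
            return int(value)
    return builtin_default
```

For example, setting EXTRACT_EDGES_MAX_TOKENS=8192 would apply to every provider unless a provider-specific variable such as GEMINI_EXTRACT_EDGES_MAX_TOKENS is also set.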

Fixes #681

Co-authored-by: Daniel Chalef <danielchalef@users.noreply.github.com>