cognee/cognee/infrastructure/databases/vector/embeddings/config.py
Igor Ilic 38cdacbcb6
fix: Resolve issue with Gemini adapter (#1494)

## Description
Resolve Gemini adapter issues:
1. Resolve the embedding batch issue.
2. Resolve slowness caused by the Gemini tokenizer sending words one at a
time to Google's API to count tokens (OpenAI's local tokenizer is now used
to count tokens for Gemini; see the sketch after this list).
3. Update the deprecated library and move to instructor.
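
A minimal sketch of the local token-counting approach from item 2, assuming OpenAI's tiktoken library with the `cl100k_base` encoding as the local proxy; the function name and encoding choice here are illustrative, not the adapter's actual code:

```python
import tiktoken

# Load the encoding once; counting is then a purely local operation,
# with no per-word network round-trips to Google's API.
encoding = tiktoken.get_encoding("cl100k_base")


def count_tokens(text: str) -> int:
    # Encode the whole text in one pass and count the resulting tokens.
    return len(encoding.encode(text))
```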

## Type of Change
<!-- Please check the relevant option -->
- [x] Bug fix (non-breaking change that fixes an issue)
- [ ] New feature (non-breaking change that adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to change)
- [ ] Documentation update
- [ ] Code refactoring
- [ ] Performance improvement
- [ ] Other (please specify):

## Pre-submission Checklist
<!-- Please check all boxes that apply before submitting your PR -->
- [ ] **I have tested my changes thoroughly before submitting this PR**
- [ ] **This PR contains minimal changes necessary to address the
issue/feature**
- [ ] My code follows the project's coding standards and style
guidelines
- [ ] I have added tests that prove my fix is effective or that my
feature works
- [ ] I have added necessary documentation (if applicable)
- [ ] All new and existing tests pass
- [ ] I have searched existing PRs to ensure this change hasn't been
submitted already
- [ ] I have linked any relevant issues in the description
- [ ] My commits have clear and descriptive messages

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to
the terms of the Topoteretes Developer Certificate of Origin.
2025-10-07 18:04:18 +02:00

69 lines
2.6 KiB
Python

from typing import Optional
from functools import lru_cache

from pydantic_settings import BaseSettings, SettingsConfigDict


class EmbeddingConfig(BaseSettings):
    """
    Manage configuration settings for embedding operations, including provider, model
    details, API configuration, and tokenizer settings.

    Public methods:
    - to_dict: Serialize the configuration settings to a dictionary.
    """

    embedding_provider: Optional[str] = "openai"
    embedding_model: Optional[str] = "openai/text-embedding-3-large"
    embedding_dimensions: Optional[int] = 3072
    embedding_endpoint: Optional[str] = None
    embedding_api_key: Optional[str] = None
    embedding_api_version: Optional[str] = None
    embedding_max_completion_tokens: Optional[int] = 8191
    embedding_batch_size: Optional[int] = None
    huggingface_tokenizer: Optional[str] = None

    model_config = SettingsConfigDict(env_file=".env", extra="allow")

    def model_post_init(self, __context) -> None:
        # If the embedding batch size is not defined, default to 2048 for OpenAI
        # and to 100 for all other embedding providers.
        if not self.embedding_batch_size and self.embedding_provider.lower() == "openai":
            self.embedding_batch_size = 2048
        elif not self.embedding_batch_size:
            self.embedding_batch_size = 100

    def to_dict(self) -> dict:
        """
        Serialize the embedding configuration settings to a dictionary.

        Returns:
        --------

            - dict: A dictionary containing the embedding configuration settings.
        """
        return {
            "embedding_provider": self.embedding_provider,
            "embedding_model": self.embedding_model,
            "embedding_dimensions": self.embedding_dimensions,
            "embedding_endpoint": self.embedding_endpoint,
            "embedding_api_key": self.embedding_api_key,
            "embedding_api_version": self.embedding_api_version,
            "embedding_max_completion_tokens": self.embedding_max_completion_tokens,
            "huggingface_tokenizer": self.huggingface_tokenizer,
        }


@lru_cache
def get_embedding_config():
    """
    Retrieve a cached instance of the EmbeddingConfig class.

    This function returns an instance of EmbeddingConfig with default settings. It uses
    memoization to cache the result, ensuring that subsequent calls return the same instance
    without re-initialization, improving performance and resource utilization.

    Returns:
    --------

        - EmbeddingConfig: An instance of EmbeddingConfig containing the embedding
          configuration settings.
    """
    return EmbeddingConfig()
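
A minimal usage sketch, assuming the import path implied by the file location and illustrative `.env` values (pydantic-settings maps each field to the matching upper-case environment variable):

# .env (illustrative values, not defaults)
# EMBEDDING_PROVIDER=gemini
# EMBEDDING_MODEL=gemini/text-embedding-004
# EMBEDDING_DIMENSIONS=768

from cognee.infrastructure.databases.vector.embeddings.config import get_embedding_config

config = get_embedding_config()

# The provider is not "openai", so model_post_init falls back to a batch size of 100.
print(config.embedding_batch_size)  # 100
print(config.to_dict())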