Commit graph

117 commits

Author SHA1 Message Date
yangdx
3d9de5ed03 feat: improve Gemini client error handling and retry logic
• Add google-api-core dependency
• Add specific exception handling
• Create InvalidResponseError class
• Update retry decorators
• Fix empty response handling
2025-11-08 22:10:09 +08:00
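A hedged sketch of the retry pattern this commit describes, assuming tenacity and google-api-core; the exception set, attempt counts, and client call are illustrative:

```python
from google.api_core import exceptions as google_exceptions
from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)


class InvalidResponseError(Exception):
    """Raised when Gemini returns an empty or unusable response."""


@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=4, max=10),
    retry=retry_if_exception_type(
        (
            google_exceptions.ServiceUnavailable,  # 503: transient outage
            google_exceptions.ResourceExhausted,   # 429: rate limited
            InvalidResponseError,                  # empty response is worth retrying
        )
    ),
)
async def gemini_complete(client, model: str, prompt: str) -> str:
    # client is a google-genai Client; generate_content raises the
    # google-api-core exceptions above on HTTP-level failures.
    response = await client.aio.models.generate_content(model=model, contents=prompt)
    if not response.text:
        raise InvalidResponseError("Gemini returned an empty response")
    return response.text
```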
yangdx
de4ed73652 Add Gemini embedding support
- Implement gemini_embed function
- Add gemini to embedding binding choices
- Add L2 normalization for dims < 3072
2025-11-08 03:34:30 +08:00
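A minimal sketch of the normalization step mentioned above: Gemini embeddings come pre-normalized only at the full 3072 dimensions, so truncated outputs are re-normalized client-side (array shapes here are illustrative):

```python
import numpy as np


def l2_normalize(vectors: np.ndarray) -> np.ndarray:
    # Scale each row to unit length; the clip guards against zero vectors.
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return vectors / np.clip(norms, a_min=1e-12, a_max=None)


embeddings = np.random.rand(4, 1536).astype(np.float32)  # stand-in for API output
if embeddings.shape[1] < 3072:
    embeddings = l2_normalize(embeddings)
```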
yangdx
f83ea3394e Add section header comment for Gemini binding options 2025-11-08 02:07:31 +08:00
yangdx
ffeeae4208 refactor: simplify jina embedding dimension handling 2025-11-07 22:09:57 +08:00
yangdx
01b07b2be5 Refactor Jina embedding dimension: make the param optional with a default 2025-11-07 22:04:34 +08:00
yangdx
d8a6355e41 Merge branch 'main' into apply-dim-to-embedding-call 2025-11-07 20:48:22 +08:00
yangdx
33a1482f7f Add optional embedding dimension parameter control via env var
* Add EMBEDDING_SEND_DIM environment variable
* Update Jina/OpenAI embed functions
* Add send_dimensions to EmbeddingFunc
* Auto-inject embedding_dim when enabled
* Add parameter validation warnings
2025-11-07 20:46:40 +08:00
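A sketch of the env-var gate described above, using the flag name from the commit; the call site and dimension value are illustrative. The point is that `dimensions` is only sent when explicitly enabled, since some OpenAI-compatible servers reject the parameter:

```python
import os


def _env_flag(name: str, default: str = "false") -> bool:
    return os.getenv(name, default).strip().lower() in ("1", "true", "yes", "on")


kwargs = {"model": "text-embedding-3-large", "input": ["some text"]}
if _env_flag("EMBEDDING_SEND_DIM"):
    kwargs["dimensions"] = 1024  # auto-injected embedding_dim (illustrative)
# response = await client.embeddings.create(**kwargs)
```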
yangdx
fc40a36968 Add timeout support to Gemini LLM and improve parameter handling
• Add timeout parameter to Gemini client
• Convert timeout seconds to milliseconds
• Update function signatures consistently
• Add Gemini thinking config example
• Clean up parameter documentation
2025-11-07 15:50:14 +08:00
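The seconds-to-milliseconds conversion likely looks something like this sketch, since google-genai's `HttpOptions.timeout` is expressed in milliseconds while LightRAG configures timeouts in seconds:

```python
from google import genai
from google.genai import types


def make_gemini_client(api_key: str, timeout_seconds: float | None = None) -> genai.Client:
    # google-genai expects the HTTP timeout in milliseconds.
    http_options = None
    if timeout_seconds is not None:
        http_options = types.HttpOptions(timeout=int(timeout_seconds * 1000))
    return genai.Client(api_key=api_key, http_options=http_options)
```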
yangdx
3cb4eae492 Add Chain of Thought support to Gemini LLM integration
- Extract thoughts from response parts
- Add COT enable/disable parameter
2025-11-07 15:22:14 +08:00
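A sketch of the thought extraction this commit adds, using google-genai's part-level `thought` flag; the helper name is hypothetical:

```python
from google.genai import types


def split_thoughts(response) -> tuple[str, str]:
    """Separate thought parts from answer parts in a Gemini response."""
    thoughts, answer = [], []
    for part in response.candidates[0].content.parts:
        if not part.text:
            continue  # skip non-text parts
        if getattr(part, "thought", False):
            thoughts.append(part.text)
        else:
            answer.append(part.text)
    return "".join(thoughts), "".join(answer)


# Thoughts only appear when explicitly requested:
config = types.GenerateContentConfig(
    thinking_config=types.ThinkingConfig(include_thoughts=True)
)
```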
yangdx
6686edfd35 Update Gemini LLM options: add seed and thinking config, remove MIME type 2025-11-07 14:32:42 +08:00
Yasiru Rangana
d94aae9c5e Add dimensions parameter support to openai_embed() 2025-11-07 09:55:06 +11:00
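The `dimensions` parameter maps straight onto the OpenAI embeddings API; a sketch (model name illustrative, and the parameter is only honored by models that support truncation, e.g. the text-embedding-3 family):

```python
from openai import AsyncOpenAI


async def embed(texts: list[str], dimensions: int | None = None) -> list[list[float]]:
    client = AsyncOpenAI()
    kwargs = {"model": "text-embedding-3-large", "input": texts}
    if dimensions is not None:
        kwargs["dimensions"] = dimensions
    response = await client.embeddings.create(**kwargs)
    return [item.embedding for item in response.data]
```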
yangdx
8c27555358 Fix Gemini response parsing to avoid warnings from non-text parts 2025-11-07 04:00:37 +08:00
yangdx
6e36ff41e1 Fix linting 2025-11-06 16:01:24 +08:00
yangdx
5f49cee20f Merge branch 'main' into VOXWAVE-FOUNDRY/main 2025-11-06 15:37:35 +08:00
yangdx
10f6e6955f Improve Langfuse integration and stream response cleanup handling
• Check env vars before enabling Langfuse
• Move imports after env check logic
• Handle wrapper client aclose() issues
• Add debug logs for cleanup failures
2025-11-03 13:09:45 +08:00
anouarbm
9495778c2d refactor: reorder Langfuse import logic for improved clarity
Moved logger import before Langfuse block to fix NameError.
2025-11-03 05:27:41 +01:00
anouarbm
626b42bc40 feat: add optional Langfuse observability integration
This contribution adds optional Langfuse support for LLM observability and tracing.
Langfuse provides a drop-in replacement for the OpenAI client that automatically
tracks all LLM interactions without requiring code changes.

Features:
- Optional Langfuse integration with graceful fallback
- Automatic LLM request/response tracing
- Token usage tracking
- Latency metrics
- Error tracking
- Zero code changes required for existing functionality

Implementation:
- Modified lightrag/llm/openai.py to conditionally use Langfuse's AsyncOpenAI
- Falls back to standard OpenAI client if Langfuse is not installed
- Logs observability status on import

Configuration:
To enable Langfuse tracing, install the observability extras and set environment variables:

```bash
pip install lightrag-hku[observability]

export LANGFUSE_PUBLIC_KEY="your_public_key"
export LANGFUSE_SECRET_KEY="your_secret_key"
export LANGFUSE_HOST="https://cloud.langfuse.com"  # or your self-hosted instance
```

If Langfuse is not installed or environment variables are not set, LightRAG
will use the standard OpenAI client without any functionality changes.

Changes:
- Modified lightrag/llm/openai.py (added optional Langfuse import)
- Updated pyproject.toml with optional 'observability' dependencies

Dependencies (optional):
- langfuse>=3.8.1
2025-11-01 21:40:22 +01:00
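The conditional-import pattern the commit describes might look like this sketch; the env-var gate reflects the follow-up commit above, and logging details are illustrative:

```python
import logging
import os

logger = logging.getLogger(__name__)

_langfuse_configured = bool(
    os.getenv("LANGFUSE_PUBLIC_KEY") and os.getenv("LANGFUSE_SECRET_KEY")
)

if _langfuse_configured:
    try:
        # Langfuse ships a traced drop-in replacement for the OpenAI client.
        from langfuse.openai import AsyncOpenAI
        logger.info("Langfuse observability enabled")
    except ImportError:
        from openai import AsyncOpenAI
        logger.info("Langfuse not installed; using standard OpenAI client")
else:
    from openai import AsyncOpenAI
```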
Humphry
0b3d31507e Extended to use Gemini; switched to gemini-flash-latest 2025-10-20 13:17:16 +03:00
yangdx
a5c05f1b92 Add offline deployment support with cache management and layered deps
• Add tiktoken cache downloader CLI
• Add layered offline dependencies
• Add offline requirements files
• Add offline deployment guide
2025-10-11 10:28:14 +08:00
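tiktoken honors the `TIKTOKEN_CACHE_DIR` environment variable, so the cache downloader presumably amounts to fetching the needed encodings while online and shipping the directory to the air-gapped host; a sketch (cache path and encoding names illustrative):

```python
import os

import tiktoken

os.environ["TIKTOKEN_CACHE_DIR"] = "/opt/tiktoken_cache"

for name in ("cl100k_base", "o200k_base"):
    tiktoken.get_encoding(name)  # downloads the BPE files into the cache dir
```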
Yasiru Rangana
ae9f4ae73f fix: Remove trailing whitespace for pre-commit linting 2025-10-09 15:01:53 +11:00
Yasiru Rangana
ec40b17eea feat: Add token tracking support to openai_embed function
- Add optional token_tracker parameter to openai_embed()
- Track prompt_tokens and total_tokens for embedding API calls
- Enables monitoring of embedding token usage alongside LLM calls
- Maintains backward compatibility with existing code
2025-10-08 14:36:08 +11:00
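A sketch of the tracking hook, assuming a tracker object with an `add_usage()` dict interface like the one the chat path uses; embedding responses expose the same `usage` fields as completions:

```python
async def openai_embed(texts, client, model="text-embedding-3-small", token_tracker=None):
    response = await client.embeddings.create(model=model, input=texts)
    if token_tracker is not None and response.usage is not None:
        token_tracker.add_usage(
            {
                "prompt_tokens": response.usage.prompt_tokens,
                "total_tokens": response.usage.total_tokens,  # embeddings have no completion tokens
            }
        )
    return [item.embedding for item in response.data]
```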
yangdx
b9c37bd937 Fix linting 2025-10-03 02:10:02 +08:00
yangdx
112349ed5b Modernize type hints and remove Python 3.8 compatibility code
• Use collections.abc.AsyncIterator only
• Remove sys.version_info checks
• Use union syntax for None types
• Simplify string emptiness checks
2025-10-02 23:15:42 +08:00
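The before/after of that modernization, in miniature:

```python
# Before (Python 3.8-compatible):
#   from typing import AsyncIterator, Optional, Union
#   def complete(prompt: Optional[str]) -> Union[str, AsyncIterator[str]]: ...

# After (3.10+ union syntax, collections.abc):
from collections.abc import AsyncIterator


def complete(prompt: str | None) -> str | AsyncIterator[str]:
    ...
```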
yangdx
42d1d04147 Fix boolean parser problem for LLM environment variables
• Add custom boolean parser for argparse in BindingOptions
2025-09-28 19:23:57 +08:00
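The underlying gotcha: `argparse` with `type=bool` converts any non-empty string, including `"false"`, to `True`. A sketch of the custom parser (accepted spellings illustrative):

```python
import argparse


def parse_bool(value: str) -> bool:
    v = value.strip().lower()
    if v in ("true", "1", "yes", "on"):
        return True
    if v in ("false", "0", "no", "off"):
        return False
    raise argparse.ArgumentTypeError(f"invalid boolean value: {value!r}")


parser = argparse.ArgumentParser()
parser.add_argument("--enable-cot", type=parse_bool, default=False)
```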
yangdx
cff6029508 Ensure COT tags are properly closed in all stream termination scenarios
- Add COT closure after stream completion
- Handle COT in exception scenarios
- Add final safety check in finally block
- Prevent unclosed thinking tags
- Log COT closure failures
2025-09-22 00:09:27 +08:00
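A sketch of the closure guarantees, over a hypothetical stream of `(text, is_reasoning)` tuples; the real code also adds a last-resort check in a `finally` block:

```python
async def stream_with_cot(chunks):
    """Re-emit chunks, guaranteeing any opened <think> tag is closed."""
    cot_open = False
    try:
        async for text, is_reasoning in chunks:
            if is_reasoning and not cot_open:
                cot_open = True
                text = "<think>" + text
            elif not is_reasoning and cot_open:
                cot_open = False
                text = "</think>" + text
            yield text
        if cot_open:  # stream ended while still "thinking"
            cot_open = False
            yield "</think>"
    except Exception:
        if cot_open:  # close the tag before surfacing the error
            yield "</think>"
        raise
```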
yangdx
077d9be5d7 Add DeepSeek-style Chain of Thought (CoT) support for OpenAI-compatible LLM providers
- Add enable_cot parameter to all LLM APIs
- Implement CoT for OpenAI with <think> tags
- Log warnings for unsupported providers
- Enable CoT in query operations
- Handle streaming and non-streaming CoT
2025-09-09 22:34:36 +08:00
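For the non-streaming path, DeepSeek-style APIs return the chain of thought in a separate `reasoning_content` field; a sketch of re-emitting it inside `<think>` tags (message shape assumed from the OpenAI-compatible response):

```python
def wrap_cot(message) -> str:
    # message is choices[0].message from an OpenAI-compatible response.
    reasoning = getattr(message, "reasoning_content", None)
    if reasoning:
        return f"<think>{reasoning}</think>{message.content}"
    return message.content
```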
yangdx
451f488f72 Add debug logging for client configs in OpenAI LLM function 2025-09-07 02:29:37 +08:00
yangdx
4b2ef71c25 feat: Add extra_body parameter support for OpenRouter/vLLM compatibility
- Enhanced add_args function to handle dict types with JSON parsing
- Added reasoning and extra_body parameters for OpenRouter/vLLM compatibility
- Updated env.example with OpenRouter/vLLM parameter examples
2025-08-21 13:06:28 +08:00
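Dict-typed options can be parsed from JSON strings, which lets an OpenRouter/vLLM pass-through payload live in a single env var or CLI flag; a sketch (flag name hypothetical):

```python
import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument("--openai-llm-extra-body", type=json.loads, default=None)

args = parser.parse_args(
    ["--openai-llm-extra-body", '{"reasoning": {"effort": "high"}}']
)
# args.openai_llm_extra_body == {"reasoning": {"effort": "high"}}
# and is later forwarded via client.chat.completions.create(..., extra_body=...)
```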
yangdx
aa22772721 Refactor LLM temperature handling to be provider-specific
• Remove global temperature parameter
• Add provider-specific temp configs
• Update env example with new settings
• Fix Bedrock temperature handling
• Clean up splash screen display
2025-08-20 23:52:33 +08:00
yangdx
df7bcb1e3d Add LLM_TIMEOUT configuration for all LLM providers
- Add LLM_TIMEOUT env variable
- Apply timeout to all LLM bindings
2025-08-20 23:50:57 +08:00
SJ
f7ca9ae16a Ruff formatted 2025-08-15 22:21:34 +00:00
SJ
99643f01de Enhancement: support AWS Bedrock as an LLM binding #1733 2025-08-13 02:08:13 -05:00
yangdx
ffb642a5ce Fix linting 2025-08-09 08:41:41 +08:00
yangdx
ecd7777e61 Update OpenAI embedding handling for both list and base64 embeddings
- Fix OpenAI embedding array parsing
- Improve embedding data type safety
2025-08-09 08:40:33 +08:00
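A sketch of handling both shapes: with `encoding_format="base64"` the API returns packed little-endian float32 bytes, otherwise a plain float list:

```python
import base64

import numpy as np


def decode_embedding(item) -> np.ndarray:
    # item is one element of response.data from the embeddings API.
    if isinstance(item.embedding, str):
        raw = base64.b64decode(item.embedding)
        return np.frombuffer(raw, dtype="<f4")  # little-endian float32
    return np.asarray(item.embedding, dtype=np.float32)
```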
yangdx
6ff25210ea feat: improve Jina API error handling to show clean messages instead of HTML 2025-08-05 11:46:02 +08:00
yangdx
c5babf61d7 Feat: Change embedding formats from float to base64 for efficiency
- Add base64 support for Jina embeddings
- Add base64 support for OpenAI embeddings
- Update env.example with new embedding options
2025-08-05 11:38:40 +08:00
yangdx
adf7ec8e35 feat: Add OpenAI LLM Options support with BindingOptions framework
- Add OpenAILLMOptions dataclass with full OpenAI API parameter support
- Integrate OpenAI options in config.py for automatic binding detection
- Update server functions to inject OpenAI options for openai/azure_openai bindings
2025-08-05 03:47:26 +08:00
yangdx
3099748668 Add temperature fallback for Ollama LLM binding
- Implement OLLAMA_LLM_TEMPERATURE env var
- Fallback to global TEMPERATURE if unset
- Remove redundant OllamaLLMOptions logic
- Update env.example with new setting
2025-08-05 01:50:09 +08:00
yangdx
e5e3f0f878 Fix(Ollama option): change stop option from string to list and add fallback global temperature setting 2025-08-04 19:43:14 +08:00
yangdx
f8a880ac66 Improved binding options testing and documentation 2025-08-04 18:21:55 +08:00
yangdx
32af45ff46 refactor: improve JSON parsing reliability with json-repair library
Replace regex-based JSON extraction with json-repair for better handling of malformed LLM responses. Remove deprecated JSON parsing utilities and clean up keyword_extraction parameter across LLM providers.

- Remove locate_json_string_body_from_string() and convert_response_to_json()
- Use json-repair.loads() in extract_keywords_only() for robust parsing
- Clean up LLM interfaces and remove unused parameters
- Add json-repair dependency
2025-08-01 19:36:20 +08:00
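What the switch buys, in miniature: `json_repair.loads()` tolerates the truncation, trailing commas, and unquoted keys LLMs commonly emit, where `json.loads()` would raise (sample string illustrative):

```python
import json_repair

raw = '{"high_level_keywords": ["graph", "rag",], "low_level_keywords": ["entity"'
data = json_repair.loads(raw)
# -> {"high_level_keywords": ["graph", "rag"], "low_level_keywords": ["entity"]}
```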
yangdx
9d5603d35e Set the default LLM temperature to 1.0 and centralize constant management 2025-07-31 17:15:10 +08:00
administrator
9c3e1505b5 fix timeout issue 2025-07-29 13:38:46 +07:00
yangdx
9923821d75 refactor: Remove deprecated max_token_size from embedding configuration
This parameter is no longer used. Its removal simplifies the API and clarifies that token length management is handled by upstream text chunking logic rather than the embedding wrapper.
2025-07-29 10:49:35 +08:00
yangdx
75d1b1e9f8 Update Ollama context length configuration
- Rename OLLAMA_NUM_CTX to OLLAMA_LLM_NUM_CTX
- Increase default context window size
- Add requirement for minimum context size
- Update documentation examples
2025-07-29 09:53:37 +08:00
Michele Comitini
bd94714b15 Pass options to the Ollama client embed() method
Squashed work introducing the binding options framework:

- Fix line length
- Create binding_options.py
- Remove test property
- Add dynamic binding options to CLI and environment config: automatically generate command-line arguments and environment variable support for all LLM provider bindings using BindingOptions; add sample .env generation and an extensible framework for new providers
- Add example option definitions and fix test arg check in OllamaOptions
- Add options_dict method to BindingOptions for argument parsing
- Add comprehensive Ollama binding configuration options
- Apply ruff formatting to binding_options.py
- Add separate Ollama options for embedding and LLM
- Refactor Ollama binding options and fix class-var handling: improve how class variables are handled and organize the Ollama-specific options into LLM and embedding subclasses
- Fix typo in arg test
- Rename cls parameter to klass to avoid keyword shadowing
- Fix Ollama embedding binding name typo
- Fix Ollama embedder context param name
- Split Ollama options into LLM and embedding configs with mixin base
- Add Ollama option configuration to LLM and embeddings in lightrag_server
- Update sample .env generation and environment handling: conditionally add env vars and command-line options only when Ollama bindings are used; add an example env file for Ollama binding options
2025-07-28 12:05:40 +02:00
yangdx
2767212ba0 Fix linting 2025-07-24 12:25:50 +08:00
yangdx
d979e9078f feat: Integrate Jina embeddings API support
- Implemented Jina embedding function
- Add new EMBEDDING_BINDING type of jina for LightRAG Server
- Add env var sample
2025-07-24 12:15:00 +08:00
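A sketch of the REST call behind the Jina integration (the endpoint is Jina's public one; model name and payload fields are illustrative):

```python
import aiohttp


async def jina_embed(texts: list[str], api_key: str) -> list[list[float]]:
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "https://api.jina.ai/v1/embeddings",
            headers={"Authorization": f"Bearer {api_key}"},
            json={"model": "jina-embeddings-v3", "input": texts},
        ) as resp:
            resp.raise_for_status()
            payload = await resp.json()
    return [item["embedding"] for item in payload["data"]]
```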
Dario Chini
5b28233903 fix Azure deployment 2025-07-17 23:11:07 +02:00
zrguo
e254c3dd81 Update openai.py 2025-07-15 17:30:30 +08:00