Commit graph

66 commits

Author SHA1 Message Date
yangdx
02fdceb959 Update OpenAI client to use stable API and bump minimum version to 2.0.0
- Remove beta prefix from completions.parse
- Update OpenAI dependency to >=2.0.0
- Fix whitespace formatting
- Update all requirement files
- Clean up pyproject.toml dependencies
2025-11-21 12:55:44 +08:00
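A minimal sketch of the stable parse call this commit moves to, assuming the openai>=2.0 Python SDK, where structured-output parsing lives at `client.chat.completions.parse` with no `.beta` prefix; the `KeywordSchema` model and model name are illustrative:

```python
from openai import AsyncOpenAI
from pydantic import BaseModel


class KeywordSchema(BaseModel):  # hypothetical illustration schema
    high_level_keywords: list[str]
    low_level_keywords: list[str]


async def parse_keywords(prompt: str) -> KeywordSchema:
    async with AsyncOpenAI() as client:
        # stable endpoint in openai>=2.0 (formerly client.beta.chat.completions.parse)
        completion = await client.chat.completions.parse(
            model="gpt-4o-mini",  # illustrative model
            messages=[{"role": "user", "content": prompt}],
            response_format=KeywordSchema,
        )
    return completion.choices[0].message.parsed
```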
yangdx
9f69c5bf85 feat: Support structured output parsed from OpenAI
Added support for structured output (JSON mode) from the OpenAI API in `openai.py` and `azure_openai.py`.

When `response_format` is used to request structured data, the new logic checks for the `message.parsed` attribute. If it exists, it's serialized into a JSON string as the final content. If not, the code falls back to the existing `message.content` handling, ensuring backward compatibility.
2025-11-21 12:46:31 +08:00
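A sketch of the fallback described above, not LightRAG's exact code: prefer the SDK's parsed structured output when present, serializing it to a JSON string, and otherwise keep the plain-text content path.

```python
import json
from typing import Any


def extract_content(message: Any) -> str:
    """Prefer structured output when the SDK parsed it; fall back to text."""
    parsed = getattr(message, "parsed", None)
    if parsed is not None:
        # Pydantic models expose model_dump_json(); plain objects need json.dumps.
        if hasattr(parsed, "model_dump_json"):
            return parsed.model_dump_json()
        return json.dumps(parsed)
    # backward-compatible path: plain message content
    return message.content or ""
```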
yangdx
c9e1c86e81 Refactor keyword extraction handling to centralize response format logic
• Move response format to core function
• Remove duplicate format assignments
• Standardize keyword extraction flow
• Clean up redundant parameter handling
• Improve Azure OpenAI compatibility
2025-11-21 12:10:04 +08:00
yangdx
05852e1ab2 Add max_token_size parameter to embedding function decorators
- Add max_token_size=8192 to all embed funcs
- Move siliconcloud to deprecated folder
- Import wrap_embedding_func_with_attrs
- Update EmbeddingFunc docstring
- Fix langfuse import type annotation
2025-11-14 18:41:43 +08:00
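A sketch of the decorator usage this commit standardizes, assuming `wrap_embedding_func_with_attrs` from `lightrag.utils` attaches metadata such as the dimension and token budget to an embedding function; the concrete values and dummy backend are illustrative:

```python
import numpy as np
from lightrag.utils import wrap_embedding_func_with_attrs


@wrap_embedding_func_with_attrs(embedding_dim=1536, max_token_size=8192)
async def my_embed(texts: list[str]) -> np.ndarray:
    # placeholder backend call; returns a dummy matrix of the right shape
    return np.zeros((len(texts), 1536))
```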
yangdx
2f16065256 Refactor keyword_extraction from kwargs to explicit parameter
• Add keyword_extraction param to functions
• Remove kwargs.pop() calls
• Update function signatures
• Improve parameter documentation
• Make parameter handling consistent
2025-11-09 12:02:17 +08:00
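A before/after sketch of the refactor above; the signatures are illustrative, not LightRAG's exact ones:

```python
# Before: implicit, pulled out of kwargs at each call site
async def complete_before(prompt, **kwargs):
    keyword_extraction = kwargs.pop("keyword_extraction", False)
    ...


# After: explicit, self-documenting parameter with a default
async def complete_after(prompt: str, keyword_extraction: bool = False, **kwargs):
    ...
```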
yangdx
88ab73f6ae HotFix: Restore streaming response in OpenAI LLM
The stream and timeout parameters were moved from **kwargs to explicit
parameters in a previous commit, but were not being passed to the OpenAI
API, causing streaming responses to fail and fall back to non-streaming
mode. This fixes the issue where stream=True was silently ignored,
resulting in unexpected non-streaming behavior.
2025-11-09 11:52:26 +08:00
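A sketch of the regression and its fix: once `stream` and `timeout` became explicit parameters, they must be forwarded explicitly, or the API defaults to non-streaming. Names are illustrative.

```python
async def openai_complete(client, model, messages, stream=False, timeout=None, **kwargs):
    return await client.chat.completions.create(
        model=model,
        messages=messages,
        stream=stream,    # the forgotten pass-through restored by this commit
        timeout=timeout,  # likewise forwarded as a per-request option
        **kwargs,
    )
```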
yangdx
d8a6355e41 Merge branch 'main' into apply-dim-to-embedding-call 2025-11-07 20:48:22 +08:00
yangdx
33a1482f7f Add optional embedding dimension parameter control via env var
* Add EMBEDDING_SEND_DIM environment variable
* Update Jina/OpenAI embed functions
* Add send_dimensions to EmbeddingFunc
* Auto-inject embedding_dim when enabled
* Add parameter validation warnings
2025-11-07 20:46:40 +08:00
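A sketch of the opt-in behavior described above: only send `dimensions` to the embeddings API when EMBEDDING_SEND_DIM is enabled, since not every OpenAI-compatible server accepts the parameter. The env var name mirrors the commit message; the helper itself is illustrative.

```python
import os


def embedding_kwargs(embedding_dim: int) -> dict:
    kwargs = {}
    if os.getenv("EMBEDDING_SEND_DIM", "false").lower() in ("1", "true", "yes"):
        kwargs["dimensions"] = embedding_dim  # auto-injected only when enabled
    return kwargs
```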
yangdx
fc40a36968 Add timeout support to Gemini LLM and improve parameter handling
• Add timeout parameter to Gemini client
• Convert timeout seconds to milliseconds
• Update function signatures consistently
• Add Gemini thinking config example
• Clean up parameter documentation
2025-11-07 15:50:14 +08:00
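A sketch of the unit conversion noted above, assuming the google-genai SDK, whose `HttpOptions` timeout is expressed in milliseconds while LightRAG configures timeouts in seconds:

```python
from google import genai
from google.genai import types


def make_client(api_key: str, timeout_s: float | None) -> genai.Client:
    http_options = None
    if timeout_s is not None:
        http_options = types.HttpOptions(timeout=int(timeout_s * 1000))  # s -> ms
    return genai.Client(api_key=api_key, http_options=http_options)
```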
yangdx
3cb4eae492 Add Chain of Thought support to Gemini LLM integration
- Extract thoughts from response parts
- Add COT enable/disable parameter
2025-11-07 15:22:14 +08:00
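A sketch of separating "thought" parts from answer parts in a google-genai response, as this commit does; it assumes parts carry a boolean `thought` flag on thinking-capable models:

```python
def split_thoughts(response) -> tuple[str, str]:
    thoughts, answer = [], []
    for part in response.candidates[0].content.parts:
        if not part.text:
            continue
        # parts flagged as thoughts go to the CoT channel, the rest to the answer
        (thoughts if getattr(part, "thought", False) else answer).append(part.text)
    return "".join(thoughts), "".join(answer)
```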
Yasiru Rangana
d94aae9c5e Add dimensions parameter support to openai_embed() 2025-11-07 09:55:06 +11:00
yangdx
10f6e6955f Improve Langfuse integration and stream response cleanup handling
• Check env vars before enabling Langfuse
• Move imports after env check logic
• Handle wrapper client aclose() issues
• Add debug logs for cleanup failures
2025-11-03 13:09:45 +08:00
anouarbm
9495778c2d refactor: reorder Langfuse import logic for improved clarity
Moved logger import before Langfuse block to fix NameError.
2025-11-03 05:27:41 +01:00
anouarbm
626b42bc40 feat: add optional Langfuse observability integration
This contribution adds optional Langfuse support for LLM observability and tracing.
Langfuse provides a drop-in replacement for the OpenAI client that automatically
tracks all LLM interactions without requiring code changes.

Features:
- Optional Langfuse integration with graceful fallback
- Automatic LLM request/response tracing
- Token usage tracking
- Latency metrics
- Error tracking
- Zero code changes required for existing functionality

Implementation:
- Modified lightrag/llm/openai.py to conditionally use Langfuse's AsyncOpenAI
- Falls back to standard OpenAI client if Langfuse is not installed
- Logs observability status on import

Configuration:
To enable Langfuse tracing, install the observability extras and set environment variables:

```bash
pip install lightrag-hku[observability]

export LANGFUSE_PUBLIC_KEY="your_public_key"
export LANGFUSE_SECRET_KEY="your_secret_key"
export LANGFUSE_HOST="https://cloud.langfuse.com"  # or your self-hosted instance
```

If Langfuse is not installed or environment variables are not set, LightRAG
will use the standard OpenAI client without any functionality changes.

Changes:
- Modified lightrag/llm/openai.py (added optional Langfuse import)
- Updated pyproject.toml with optional 'observability' dependencies

Dependencies (optional):
- langfuse>=3.8.1
2025-11-01 21:40:22 +01:00
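A sketch of the graceful-fallback import described above: use Langfuse's drop-in `AsyncOpenAI` wrapper only when the package is installed and the required environment variables are present, otherwise fall back to the standard client.

```python
import logging
import os

logger = logging.getLogger(__name__)

if os.getenv("LANGFUSE_PUBLIC_KEY") and os.getenv("LANGFUSE_SECRET_KEY"):
    try:
        from langfuse.openai import AsyncOpenAI  # drop-in traced client
        logger.info("Langfuse observability enabled")
    except ImportError:
        from openai import AsyncOpenAI
        logger.info("Langfuse not installed; using standard OpenAI client")
else:
    from openai import AsyncOpenAI
```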
Yasiru Rangana
ae9f4ae73f fix: Remove trailing whitespace for pre-commit linting 2025-10-09 15:01:53 +11:00
Yasiru Rangana
ec40b17eea feat: Add token tracking support to openai_embed function
- Add optional token_tracker parameter to openai_embed()
- Track prompt_tokens and total_tokens for embedding API calls
- Enables monitoring of embedding token usage alongside LLM calls
- Maintains backward compatibility with existing code
2025-10-08 14:36:08 +11:00
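A sketch of the optional tracking hook described above: the embeddings response's `usage` field is forwarded to a caller-supplied tracker. The `token_tracker` interface shown is illustrative.

```python
async def openai_embed_sketch(client, texts, model, token_tracker=None):
    response = await client.embeddings.create(model=model, input=texts)
    if token_tracker is not None and getattr(response, "usage", None):
        # embeddings report prompt_tokens and total_tokens (no completion tokens)
        token_tracker.add_usage(
            {
                "prompt_tokens": response.usage.prompt_tokens,
                "total_tokens": response.usage.total_tokens,
            }
        )
    return [item.embedding for item in response.data]
```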
yangdx
b9c37bd937 Fix linting 2025-10-03 02:10:02 +08:00
yangdx
112349ed5b Modernize type hints and remove Python 3.8 compatibility code
• Use collections.abc.AsyncIterator only
• Remove sys.version_info checks
• Use union syntax for None types
• Simplify string emptiness checks
2025-10-02 23:15:42 +08:00
yangdx
cff6029508 Ensure COT tags are properly closed in all stream termination scenarios
- Add COT closure after stream completion
- Handle COT in exception scenarios
- Add final safety check in finally block
- Prevent unclosed thinking tags
- Log COT closure failures
2025-09-22 00:09:27 +08:00
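A sketch of the closure guarantee described above: whether the stream ends normally, raises, or is cancelled, an opened `<think>` block is closed exactly once. The chunk shape and `emit` callback are illustrative.

```python
async def relay_with_cot(chunks, emit):
    """Forward stream chunks, wrapping reasoning in <think> tags."""
    cot_open = False
    try:
        async for text, is_thought in chunks:
            if is_thought and not cot_open:
                cot_open = True
                await emit("<think>")
            elif not is_thought and cot_open:
                cot_open = False
                await emit("</think>")
            await emit(text)
    finally:
        if cot_open:  # final safety check: never leave a thinking tag unclosed
            await emit("</think>")
```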
yangdx
077d9be5d7 Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers
- Add enable_cot parameter to all LLM APIs
- Implement CoT for OpenAI with <think> tags
- Log warnings for unsupported providers
- Enable CoT in query operations
- Handle streaming and non-streaming CoT
2025-09-09 22:34:36 +08:00
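A sketch of the non-streaming CoT handling described above, assuming a DeepSeek-style provider that returns `reasoning_content` alongside `content` on the message object:

```python
def with_think_tags(message) -> str:
    reasoning = getattr(message, "reasoning_content", None)
    if reasoning:
        # surface the model's reasoning in Deepseek-style <think> tags
        return f"<think>{reasoning}</think>{message.content or ''}"
    return message.content or ""
```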
yangdx
451f488f72 Add debug logging for client configs in OpenAI LLM function 2025-09-07 02:29:37 +08:00
yangdx
aa22772721 Refactor LLM temperature handling to be provider-specific
• Remove global temperature parameter
• Add provider-specific temp configs
• Update env example with new settings
• Fix Bedrock temperature handling
• Clean up splash screen display
2025-08-20 23:52:33 +08:00
yangdx
df7bcb1e3d Add LLM_TIMEOUT configuration for all LLM providers
- Add LLM_TIMEOUT env variable
- Apply timeout to all LLM bindings
2025-08-20 23:50:57 +08:00
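A sketch in the spirit of the two configuration commits above. LLM_TIMEOUT appears in the commit message; the OPENAI_LLM_TEMPERATURE name and the 180-second default are assumptions, not necessarily LightRAG's.

```python
import os


def provider_llm_kwargs() -> dict:
    # assumed default of 180s; the real default lives in LightRAG's config
    kwargs = {"timeout": float(os.getenv("LLM_TIMEOUT", "180"))}
    temp = os.getenv("OPENAI_LLM_TEMPERATURE")  # provider-specific override
    if temp is not None:
        kwargs["temperature"] = float(temp)
    return kwargs
```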
yangdx
ffb642a5ce Fix linting 2025-08-09 08:41:41 +08:00
yangdx
ecd7777e61 Update OpenAI embedding handling for both list and base64 embeddings
- Fix OpenAI embedding array parsing
- Improve embedding data type safety
2025-08-09 08:40:33 +08:00
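A sketch of handling both encodings mentioned above: OpenAI can return each embedding as a list of floats or, when `encoding_format="base64"` is requested, as a base64 string packing little-endian float32 values.

```python
import base64

import numpy as np


def decode_embedding(data) -> np.ndarray:
    if isinstance(data, str):  # base64-encoded float32 buffer
        return np.frombuffer(base64.b64decode(data), dtype=np.float32)
    return np.asarray(data, dtype=np.float32)  # plain list of floats
```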
yangdx
c5babf61d7 Feat: Change embedding formats from float to base64 for efficiency
- Add base64 support for Jina embeddings
- Add base64 support for OpenAI embeddings
- Update env.example with new embedding options
2025-08-05 11:38:40 +08:00
yangdx
32af45ff46 refactor: improve JSON parsing reliability with json-repair library
Replace regex-based JSON extraction with json-repair for better handling of malformed LLM responses. Remove deprecated JSON parsing utilities and clean up keyword_extraction parameter across LLM providers.

- Remove locate_json_string_body_from_string() and convert_response_to_json()
- Use json-repair.loads() in extract_keywords_only() for robust parsing
- Clean up LLM interfaces and remove unused parameters
- Add json-repair dependency
2025-08-01 19:36:20 +08:00
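A small example of the parsing approach adopted here: json-repair tolerates the malformed JSON LLMs often emit (trailing commas, unquoted keys, chatter around the object) instead of extracting it by regex. The sample payload is illustrative.

```python
import json_repair

raw = '{"high_level_keywords": ["graph RAG",], "low_level_keywords": ["entity",]}'
keywords = json_repair.loads(raw)  # repairs the trailing commas and parses
print(keywords["high_level_keywords"])
```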
yangdx
9923821d75 refactor: Remove deprecated max_token_size from embedding configuration
This parameter is no longer used. Its removal simplifies the API and clarifies that token length management is handled by upstream text chunking logic rather than the embedding wrapper.
2025-07-29 10:49:35 +08:00
zrguo
e254c3dd81 Update openai.py 2025-07-15 17:30:30 +08:00
yangdx
2a0cff3ed6 Fix linting 2025-07-08 18:17:21 +08:00
Molion Surya
8cbba6e9db Fix #1746: [openai.py logic for streaming complete] 2025-07-08 13:25:52 +08:00
yangdx
56f82bdcd5 Ensure OpenAI connection is closed after streaming response finished 2025-05-12 17:37:28 +08:00
yangdx
c2938a71a4 Fix streaming problem for OpenAI 2025-05-09 15:54:54 +08:00
Arjun Rao
b7eae4d7c0 Use the context manager for the openai client
This avoids resource-cleanup issues (too many open files) when making massively parallel calls to the OpenAI API, since RAII-style cleanup in Python is unreliable in such contexts.
2025-05-08 11:42:53 +10:00
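A sketch of the pattern this commit adopts: the async context manager guarantees the underlying connections are released when the block exits, rather than relying on garbage collection. The model name is illustrative.

```python
from openai import AsyncOpenAI


async def complete(prompt: str) -> str:
    async with AsyncOpenAI() as client:  # closes the client even on errors
        response = await client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
    return response.choices[0].message.content
```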
yangdx
34cc8b6a51 Fix linting 2025-04-29 17:52:07 +08:00
yangdx
f58c8276bc fix: correct retry_if_exception_type usage and improve async iterator resource management
- Corrects the syntax of retry_if_exception_type decorators to ensure proper exception handling and retry behavior
- Implements proper resource cleanup for async iterators to prevent memory leaks and potential SIGSEGV errors
2025-04-29 17:43:27 +08:00
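A sketch of both fixes, assuming tenacity and the openai SDK's exception types; the retry policy values are illustrative. `retry_if_exception_type` takes the exception classes themselves (here as a tuple), not instances or calls, and the async iterator is explicitly closed so early consumers do not leak connections.

```python
from openai import APIConnectionError, APITimeoutError, RateLimitError
from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)


@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=4, max=10),
    retry=retry_if_exception_type((RateLimitError, APIConnectionError, APITimeoutError)),
)
async def call_llm(client, **kwargs):
    return await client.chat.completions.create(**kwargs)


async def drain(stream):
    # proper async-iterator cleanup: close the iterator even if the consumer
    # stops early, preventing leaked resources
    try:
        async for chunk in stream:
            yield chunk
    finally:
        if hasattr(stream, "aclose"):
            await stream.aclose()
```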
yangdx
39540f3f8b Fix linting 2025-04-20 14:33:33 +08:00
yangdx
5f2cd871a8 Update sample code and README 2025-04-20 14:33:16 +08:00
yangdx
a418b18ed1 Fix linting 2025-04-20 11:17:51 +08:00
Enoughappens
704ef16ce3 fix streaming "list index out of range" 2025-04-19 12:57:08 +08:00
yangdx
14b4bc96ce Fix OPENAI_API_BASE not working in .env 2025-04-17 05:20:22 +08:00
Qodi
8f3068f1c0 Update openai.py
Add an environment variable to support OPENAI_API_BASE, enabling the use of API relay/proxy endpoints.
2025-04-10 12:10:35 +08:00
zrguo
e17e61f58e fix lint 2025-04-03 14:44:56 +08:00
zrguo
9648300b18 Merge pull request #1208 from shane-lil/openai-client-config
feat(openai): add client configuration support to OpenAI integration
2025-04-03 17:43:57 +11:00
yangdx
80335d57a5 Fix linting 2025-03-28 21:43:47 +08:00
yangdx
491c78dac1 Improve OpenAI LLM logging with more detailed debug information 2025-03-28 21:33:59 +08:00
Shane Walker
d45dc14069 feat(openai): add client configuration support to OpenAI integration
Add support for custom client configurations in the OpenAI integration,
allowing for more flexible configuration of the AsyncOpenAI client.
This includes:

- Create a reusable helper function `create_openai_async_client`
- Add proper documentation for client configuration options
- Ensure consistent parameter precedence across the codebase
- Update the embedding function to support client configurations
- Add example script demonstrating custom client configuration usage

The changes maintain backward compatibility while providing a cleaner
and more maintainable approach to configuring OpenAI clients.
2025-03-27 15:39:39 -07:00
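A sketch of the reusable helper introduced here; the name `create_openai_async_client` comes from the commit, while the exact precedence rules (explicit arguments over `client_configs` over environment) are an assumption about intent rather than the code in lightrag/llm/openai.py.

```python
import os

from openai import AsyncOpenAI


def create_openai_async_client(
    api_key: str | None = None,
    base_url: str | None = None,
    client_configs: dict | None = None,
) -> AsyncOpenAI:
    merged = dict(client_configs or {})
    # explicit arguments take precedence over client_configs and env vars
    merged["api_key"] = api_key or merged.get("api_key") or os.getenv("OPENAI_API_KEY")
    url = base_url or merged.get("base_url") or os.getenv("OPENAI_API_BASE")
    if url:
        merged["base_url"] = url
    return AsyncOpenAI(**merged)
```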
choizhang
8488229a29 feat: Add TokenTracker to track token usage for LLM calls 2025-03-28 01:25:15 +08:00
zrguo
adba09f6c2 fix stream 2025-03-17 11:41:55 +08:00
zrguo
f5ab76dc4c fix linting 2025-03-14 14:10:59 +08:00