Commit graph

96 commits

Author SHA1 Message Date
yangdx
b0bdbb5839 Add offline deployment support with cache management and layered deps
• Add tiktoken cache downloader CLI
• Add layered offline dependencies
• Add offline requirements files
• Add offline deployment guide

(cherry picked from commit a5c05f1b92)
2025-12-04 19:07:09 +08:00
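A minimal sketch of the cache pre-download idea behind the commit above, assuming the standard `TIKTOKEN_CACHE_DIR` environment variable that tiktoken honors; the encoding names and function name are illustrative, not the actual CLI.

```python
# Sketch: pre-fetch tiktoken BPE files into a local cache for offline machines.
# TIKTOKEN_CACHE_DIR is honored by tiktoken itself; encoding names are examples.
import os
import tiktoken

def prefetch_tiktoken_cache(cache_dir: str, encodings=("cl100k_base", "o200k_base")) -> None:
    os.environ["TIKTOKEN_CACHE_DIR"] = cache_dir  # must be set before encodings are loaded
    os.makedirs(cache_dir, exist_ok=True)
    for name in encodings:
        tiktoken.get_encoding(name).encode("warmup")  # downloads and caches the BPE file

if __name__ == "__main__":
    prefetch_tiktoken_cache("./tiktoken_cache")
```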
Raphael MANSUY
fe9b8ec02a
tests: stabilize integration tests + skip external services; fix multi-tenant API behavior and idempotency (#4)
* feat: Implement multi-tenant architecture with tenant and knowledge base models

- Added data models for tenants, knowledge bases, and related configurations.
- Introduced role and permission management for users in the multi-tenant system.
- Created a service layer for managing tenants and knowledge bases, including CRUD operations.
- Developed a tenant-aware instance manager for LightRAG with caching and isolation features.
- Added a migration script to transition existing workspace-based deployments to the new multi-tenant architecture.

* chore: ignore lightrag/api/webui/assets/ directory

* chore: stop tracking lightrag/api/webui/assets (ignore in .gitignore)

* feat: Initialize LightRAG Multi-Tenant Stack with PostgreSQL

- Added README.md for project overview, setup instructions, and architecture details.
- Created docker-compose.yml to define services: PostgreSQL, Redis, LightRAG API, and Web UI.
- Introduced env.example for environment variable configuration.
- Implemented init-postgres.sql for PostgreSQL schema initialization with multi-tenant support.
- Added reproduce_issue.py for testing default tenant access via API.

* feat: Enhance TenantSelector and update related components for improved multi-tenant support

* feat: Enhance testing capabilities and update documentation

- Updated Makefile to include new test commands for various modes (compatibility, isolation, multi-tenant, security, coverage, and dry-run).
- Modified API health check endpoint in Makefile to reflect new port configuration.
- Updated QUICK_START.md and README.md to reflect changes in service URLs and ports.
- Added environment variables for testing modes in env.example.
- Introduced run_all_tests.sh script to automate testing across different modes.
- Created conftest.py for pytest configuration, including database fixtures and mock services.
- Implemented database helper functions for streamlined database operations in tests.
- Added test collection hooks to skip tests based on the current MULTITENANT_MODE.

* feat: Implement multi-tenant support with demo mode enabled by default

- Added multi-tenant configuration to the environment and Docker setup.
- Created pre-configured demo tenants (acme-corp and techstart) for testing.
- Updated API endpoints to support tenant-specific data access.
- Enhanced Makefile commands for better service management and database operations.
- Introduced user-tenant membership system with role-based access control.
- Added comprehensive documentation for multi-tenant setup and usage.
- Fixed issues with document visibility in multi-tenant environments.
- Implemented necessary database migrations for user memberships and legacy support.

* feat(audit): Add final audit report for multi-tenant implementation

- Documented overall assessment, architecture overview, test results, security findings, and recommendations.
- Included detailed findings on critical security issues and architectural concerns.

fix(security): Implement security fixes based on audit findings

- Removed global RAG fallback and enforced strict tenant context.
- Configured super-admin access and required user authentication for tenant access.
- Cleared localStorage on logout and improved error handling in WebUI.

chore(logs): Create task logs for audit and security fixes implementation

- Documented actions, decisions, and next steps for both audit and security fixes.
- Summarized test results and remaining recommendations.

chore(scripts): Enhance development stack management scripts

- Added scripts for cleaning, starting, and stopping the development stack.
- Improved output messages and ensured graceful shutdown of services.

feat(starter): Initialize PostgreSQL with AGE extension support

- Created initialization scripts for PostgreSQL extensions including uuid-ossp, vector, and AGE.
- Ensured successful installation and verification of extensions.

* feat: Implement auto-select for first tenant and KB on initial load in WebUI

- Removed WEBUI_INITIAL_STATE_FIX.md as the issue is resolved.
- Added useTenantInitialization hook to automatically select the first available tenant and KB on app load.
- Integrated the new hook into the Root component of the WebUI.
- Updated RetrievalTesting component to ensure a KB is selected before allowing user interaction.
- Created end-to-end tests for multi-tenant isolation and real service interactions.
- Added scripts for starting, stopping, and cleaning the development stack.
- Enhanced API and tenant routes to support tenant-specific pipeline status initialization.
- Updated constants for backend URL to reflect the correct port.
- Improved error handling and logging in various components.

* feat: Add multi-tenant support with enhanced E2E testing scripts and client functionality

* update client

* Add integration and unit tests for multi-tenant API, models, security, and storage

- Implement integration tests for tenant and knowledge base management endpoints in `test_tenant_api_routes.py`.
- Create unit tests for tenant isolation, model validation, and role permissions in `test_tenant_models.py`.
- Add security tests to enforce role-based permissions and context validation in `test_tenant_security.py`.
- Develop tests for tenant-aware storage operations and context isolation in `test_tenant_storage_phase3.py`.

* feat(e2e): Implement OpenAI model support and database reset functionality

* Add comprehensive test suite for gpt-5-nano compatibility

- Introduced tests for parameter normalization, embeddings, and entity extraction.
- Implemented direct API testing for gpt-5-nano.
- Validated .env configuration loading and OpenAI API connectivity.
- Analyzed reasoning token overhead with various token limits.
- Documented test procedures and expected outcomes in README files.
- Ensured all tests pass for production readiness.

* kg(postgres_impl): ensure AGE extension is loaded in session and configure graph initialization

* dev: add hybrid dev helper scripts, Makefile, docker-compose.dev-db and local development docs

* feat(dev): add dev helper scripts and local development documentation for hybrid setup

* feat(multi-tenant): add detailed specifications and logs for multi-tenant improvements, including UX, backend handling, and ingestion pipeline

* feat(migration): add generated tenant/kb columns, indexes, triggers; drop unused tables; update schema and docs

* test(backward-compat): adapt tests to new StorageNameSpace/TenantService APIs (use concrete dummy storages)

* chore: multi-tenant and UX updates — docs, webui, storage, tenant service adjustments

* tests: stabilize integration tests + skip external services; fix multi-tenant API behavior and idempotency

- gpt5_nano_compatibility: add pytest-asyncio markers, skip when OPENAI key missing, prevent module-level asyncio.run collection, add conftest
- Ollama tests: add server availability check and skip markers; avoid pytest collection warnings by renaming helper classes
- Graph storage tests: rename interactive test functions to avoid pytest collection
- Document & Tenant routes: support external_ids for idempotency; ensure HTTPExceptions are re-raised
- LightRAG core: support external_ids in apipeline_enqueue_documents and idempotent logic
- Tests updated to match API changes (tenant routes & document routes)
- Add logs and scripts for inspection and audit
2025-12-04 16:04:21 +08:00
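The idempotency item in the last bullet group above can be pictured as keying enqueued documents on a caller-supplied external ID, so re-submitting the same document is a no-op. This is a hypothetical sketch of that logic, not the actual `apipeline_enqueue_documents` code:

```python
# Hypothetical sketch of idempotent enqueueing keyed on external_ids; the real
# LightRAG pipeline logic may differ in storage and status handling.
from hashlib import md5

queued: dict[str, dict] = {}  # doc_id -> record; stands in for real doc-status storage

def enqueue_documents(texts: list[str], external_ids: list[str] | None = None) -> list[str]:
    ids = []
    for i, text in enumerate(texts):
        # Prefer the caller-supplied external ID; fall back to a content hash.
        doc_id = external_ids[i] if external_ids else "doc-" + md5(text.encode()).hexdigest()
        if doc_id not in queued:  # re-submitting the same ID is a no-op
            queued[doc_id] = {"content": text, "status": "pending"}
        ids.append(doc_id)
    return ids
```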
yangdx
42d1d04147 Fix boolean parser problem for LLM environment variables
• Add custom boolean parser for argparse in BindingOptions
2025-09-28 19:23:57 +08:00
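The underlying pitfall is that argparse's `type=bool` treats every non-empty string, including `"false"`, as `True`. A custom parser along these lines is the usual fix; the exact implementation in BindingOptions may differ:

```python
# Sketch of a string-to-bool parser for argparse; bool("false") is True,
# so env-var-backed flags need explicit string matching like this.
import argparse

def parse_bool(value: str) -> bool:
    if value.strip().lower() in ("true", "1", "yes", "on"):
        return True
    if value.strip().lower() in ("false", "0", "no", "off"):
        return False
    raise argparse.ArgumentTypeError(f"expected a boolean, got {value!r}")

parser = argparse.ArgumentParser()
parser.add_argument("--enable-cot", type=parse_bool, default=False)
print(parser.parse_args(["--enable-cot", "false"]).enable_cot)  # False, not True
```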
yangdx
cff6029508 Ensure CoT tags are properly closed in all stream termination scenarios
- Add CoT closure after stream completion
- Handle CoT in exception scenarios
- Add final safety check in finally block
- Prevent unclosed thinking tags
- Log CoT closure failures
2025-09-22 00:09:27 +08:00
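A condensed sketch of the guarantee described above, assuming a hypothetical chunk shape with separate reasoning and text fields; the closing tag is emitted on normal completion and on error paths:

```python
# Sketch: make sure an opened <think> block is closed on every exit path.
async def stream_with_cot(chunks):
    think_open = False
    try:
        async for chunk in chunks:
            if chunk.get("reasoning"):
                if not think_open:
                    think_open = True
                    yield "<think>"
                yield chunk["reasoning"]
            else:
                if think_open:
                    think_open = False
                    yield "</think>"
                yield chunk.get("text", "")
        if think_open:  # stream ended while still inside the reasoning block
            think_open = False
            yield "</think>"
    except Exception:
        if think_open:
            yield "</think>"  # close the tag even when the stream errors out
        raise
```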
yangdx
077d9be5d7 Add DeepSeek-style Chain of Thought (CoT) support for OpenAI-compatible LLM providers
- Add enable_cot parameter to all LLM APIs
- Implement CoT for OpenAI with <think> tags
- Log warnings for unsupported providers
- Enable CoT in query operations
- Handle streaming and non-streaming CoT
2025-09-09 22:34:36 +08:00
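For the non-streaming path, the DeepSeek convention is a separate `reasoning_content` field alongside the answer; a sketch of what wrapping it in `<think>` tags might look like, including the warning for providers that return no reasoning:

```python
# Sketch: DeepSeek-style CoT formatting for OpenAI-compatible responses.
import logging

def format_completion(message: dict, enable_cot: bool = False) -> str:
    content = message.get("content", "")
    reasoning = message.get("reasoning_content")  # DeepSeek-style field; provider-specific
    if not enable_cot:
        return content
    if reasoning is None:
        logging.warning("Provider returned no reasoning content; CoT unavailable")
        return content
    return f"<think>{reasoning}</think>{content}"
```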
yangdx
451f488f72 Add debug logging for client configs in OpenAI LLM function 2025-09-07 02:29:37 +08:00
yangdx
4b2ef71c25 feat: Add extra_body parameter support for OpenRouter/vLLM compatibility
- Enhanced add_args function to handle dict types with JSON parsing
- Added reasoning and extra_body parameters for OpenRouter/vLLM compatibility
- Updated env.example with OpenRouter/vLLM parameter examples
2025-08-21 13:06:28 +08:00
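Dict-typed options boil down to running the raw string through `json.loads` and handing the result to the client's `extra_body` parameter, which the openai SDK merges into the request payload. Flag and env var names here are illustrative:

```python
# Sketch: accept a dict-valued option as JSON and forward it as extra_body,
# which carries OpenRouter/vLLM-specific fields the SDK doesn't model.
import argparse
import json
import os

parser = argparse.ArgumentParser()
parser.add_argument(
    "--extra-body",
    type=json.loads,
    default=json.loads(os.getenv("OPENAI_LLM_EXTRA_BODY", "{}")),  # env name illustrative
)
args = parser.parse_args(["--extra-body", '{"reasoning": {"effort": "high"}}'])

# client.chat.completions.create(..., extra_body=args.extra_body)
print(args.extra_body["reasoning"]["effort"])  # "high"
```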
yangdx
aa22772721 Refactor LLM temperature handling to be provider-specific
• Remove global temperature parameter
• Add provider-specific temp configs
• Update env example with new settings
• Fix Bedrock temperature handling
• Clean up splash screen display
2025-08-20 23:52:33 +08:00
yangdx
df7bcb1e3d Add LLM_TIMEOUT configuration for all LLM providers
- Add LLM_TIMEOUT env variable
- Apply timeout to all LLM bindings
2025-08-20 23:50:57 +08:00
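The pattern is a single env var read once and passed to each binding's client constructor; `AsyncOpenAI` accepts a timeout in seconds, and the default shown is an assumption:

```python
# Sketch: one LLM_TIMEOUT env var applied to an LLM client.
import os
from openai import AsyncOpenAI

LLM_TIMEOUT = int(os.getenv("LLM_TIMEOUT", "180"))  # default value is an assumption

client = AsyncOpenAI(timeout=LLM_TIMEOUT)
```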
SJ
f7ca9ae16a Ruff formatted 2025-08-15 22:21:34 +00:00
SJ
99643f01de
Enhancement: support AWS Bedrock as an LLM binding #1733 2025-08-13 02:08:13 -05:00
yangdx
ffb642a5ce Fix linting 2025-08-09 08:41:41 +08:00
yangdx
ecd7777e61 Update OpenAI embedding handling for both list and base64 embeddings
- Fix OpenAI embedding array parsing
- Improve embedding data type safety
2025-08-09 08:40:33 +08:00
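Base64-encoded embeddings are packed little-endian float32 values, so supporting both response formats reduces to a branch like this sketch (not the exact LightRAG code):

```python
# Sketch: decode OpenAI embeddings whether the API returns float lists or
# base64-packed float32 buffers (encoding_format="base64").
import base64
import numpy as np

def to_vector(embedding) -> np.ndarray:
    if isinstance(embedding, str):  # base64-packed float32 payload
        return np.frombuffer(base64.b64decode(embedding), dtype=np.float32)
    return np.asarray(embedding, dtype=np.float32)  # plain list of floats
```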
yangdx
6ff25210ea feat: improve Jina API error handling to show clean messages instead of HTML 2025-08-05 11:46:02 +08:00
yangdx
c5babf61d7 Feat: Change embedding formats from float to base64 for efficiency
- Add base64 support for Jina embeddings
- Add base64 support for OpenAI embeddings
- Update env.example with new embedding options
2025-08-05 11:38:40 +08:00
yangdx
adf7ec8e35 feat: Add OpenAI LLM Options support with BindingOptions framework
- Add OpenAILLMOptions dataclass with full OpenAI API parameter support
- Integrate OpenAI options in config.py for automatic binding detection
- Update server functions to inject OpenAI options for openai/azure_openai bindings
2025-08-05 03:47:26 +08:00
yangdx
3099748668 Add temperature fallback for Ollama LLM binding
- Implement OLLAMA_LLM_TEMPERATURE env var
- Fallback to global TEMPERATURE if unset
- Remove redundant OllamaLLMOptions logic
- Update env.example with new setting
2025-08-05 01:50:09 +08:00
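The fallback chain itself is one line: the binding-specific variable first, then the global `TEMPERATURE`, then a hard default (1.0, matching the default set a few commits below):

```python
# Sketch: OLLAMA_LLM_TEMPERATURE falls back to TEMPERATURE, then to 1.0.
import os

temperature = float(os.getenv("OLLAMA_LLM_TEMPERATURE", os.getenv("TEMPERATURE", "1.0")))
```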
yangdx
e5e3f0f878 Fix(Ollama option): change stop option from string to list and add fallback global temperature setting 2025-08-04 19:43:14 +08:00
yangdx
f8a880ac66 Improved binding options testing and documentation 2025-08-04 18:21:55 +08:00
yangdx
32af45ff46 refactor: improve JSON parsing reliability with json-repair library
Replace regex-based JSON extraction with json-repair for better handling of malformed LLM responses. Remove deprecated JSON parsing utilities and clean up keyword_extraction parameter across LLM providers.

- Remove locate_json_string_body_from_string() and convert_response_to_json()
- Use json-repair.loads() in extract_keywords_only() for robust parsing
- Clean up LLM interfaces and remove unused parameters
- Add json-repair dependency
2025-08-01 19:36:20 +08:00
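`json_repair.loads` is a drop-in for `json.loads` that tolerates the malformed output LLMs often emit. The keyword field names in this sketch follow LightRAG's keyword-extraction prompt but are quoted from memory:

```python
# Sketch: json_repair.loads parses malformed LLM JSON where json.loads raises.
import json_repair

llm_output = '{"high_level_keywords": ["graph RAG", "retrieval"], "low_level_keywords": ["entity",]}'
keywords = json_repair.loads(llm_output)  # trailing comma repaired silently
print(keywords["high_level_keywords"])
```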
yangdx
9d5603d35e Set the default LLM temperature to 1.0 and centralize constant management 2025-07-31 17:15:10 +08:00
administrator
9c3e1505b5 fix timeout issue 2025-07-29 13:38:46 +07:00
yangdx
9923821d75 refactor: Remove deprecated max_token_size from embedding configuration
This parameter is no longer used. Its removal simplifies the API and clarifies that token length management is handled by upstream text chunking logic rather than the embedding wrapper.
2025-07-29 10:49:35 +08:00
yangdx
75d1b1e9f8 Update Ollama context length configuration
- Rename OLLAMA_NUM_CTX to OLLAMA_LLM_NUM_CTX
- Increase default context window size
- Add requirement for minimum context size
- Update documentation examples
2025-07-29 09:53:37 +08:00
Michele Comitini
bd94714b15 options need to be passed to ollama client embed() method
Fix line length

Create binding_options.py

Remove test property

Add dynamic binding options to CLI and environment config

Automatically generate command-line arguments and environment variable
support for all LLM provider bindings using BindingOptions. Add sample
.env generation and extensible framework for new providers.

Add example option definitions and fix test arg check in OllamaOptions

Add options_dict method to BindingOptions for argument parsing

Add comprehensive Ollama binding configuration options

Apply ruff formatting to binding_options.py

Add Ollama separate options for embedding and LLM

Refactor Ollama binding options and fix class var handling

The changes improve how class variables are handled in binding options
and better organize the Ollama-specific options into LLM and embedding
subclasses.

Fix typo in arg test.

Rename cls parameter to klass to avoid keyword shadowing

Fix Ollama embedding binding name typo

Fix ollama embedder context param name

Split Ollama options into LLM and embedding configs with mixin base

Add Ollama option configuration to LLM and embeddings in lightrag_server

Update sample .env generation and environment handling

Conditionally add env vars and cmdline options only when ollama bindings
are used. Add example env file for Ollama binding options.
2025-07-28 12:05:40 +02:00
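The framework described above can be pictured as a base class that derives both a CLI flag and an environment variable from each option's binding prefix. This is a deliberate simplification of the idea, not the BindingOptions code itself:

```python
# Simplified sketch of the BindingOptions idea: each option is exposed both as
# a --<binding>-<name> CLI flag and an <BINDING>_<NAME> environment variable.
import argparse
import os

class OllamaLLMOptions:
    _binding = "ollama-llm"
    num_ctx: int = 32768
    temperature: float = 1.0

    @classmethod
    def add_args(cls, parser: argparse.ArgumentParser) -> None:
        for name, default in (("num_ctx", cls.num_ctx), ("temperature", cls.temperature)):
            env_name = f"{cls._binding}_{name}".upper().replace("-", "_")
            parser.add_argument(
                f"--{cls._binding}-{name.replace('_', '-')}",
                type=type(default),
                default=type(default)(os.getenv(env_name, default)),
            )

parser = argparse.ArgumentParser()
OllamaLLMOptions.add_args(parser)
print(parser.parse_args([]))  # env vars like OLLAMA_LLM_NUM_CTX override defaults
```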
yangdx
2767212ba0 Fix linting 2025-07-24 12:25:50 +08:00
yangdx
d979e9078f feat: Integrate Jina embeddings API support
- Implemented Jina embedding function
- Add new EMBEDDING_BINDING type of jina for LightRAG Server
- Add env var sample
2025-07-24 12:15:00 +08:00
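A sketch of what such a Jina embedding call looks like against Jina's public REST endpoint; the model name and env var are illustrative:

```python
# Sketch of a Jina embedding request; endpoint shape follows Jina's public API.
import os
import requests

def jina_embed(texts: list[str]) -> list[list[float]]:
    resp = requests.post(
        "https://api.jina.ai/v1/embeddings",
        headers={"Authorization": f"Bearer {os.environ['JINA_API_KEY']}"},
        json={"model": "jina-embeddings-v3", "input": texts},  # model name illustrative
        timeout=60,
    )
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]
```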
Dario Chini
5b28233903 fix Azure deployment 2025-07-17 23:11:07 +02:00
zrguo
e254c3dd81 Update openai.py 2025-07-15 17:30:30 +08:00
yangdx
2a0cff3ed6 Fix linting 2025-07-08 18:17:21 +08:00
Molion Surya
8cbba6e9db Fix #1746: [openai.py logic for streaming complete] 2025-07-08 13:25:52 +08:00
Daniel.y
c740401b7f
Merge pull request #1654 from a-bruhn/azure-env-vars
Clean up azure env vars
2025-06-26 19:11:20 +08:00
zrguo
96b9bd8cc5 fix lint 2025-06-19 14:16:24 +08:00
Alexander Bruhn
5e3970e18b
Resolve confusion between azure embedding and completion environment variables 2025-06-04 14:45:11 +02:00
eddiemaru-101
77399e051f Fix: Increase Ollama timeout values to prevent ReadTimeout errors 2025-05-30 22:43:52 +09:00
yangdx
3b9c28fae9 Fix linting 2025-05-22 10:46:03 +08:00
Martin Perez-Guevara
3d418d95c5 feat: Integrate Opik for Enhanced Observability in LlamaIndex LLM Interactions
This pull request demonstrates how to create a new Opik project when using LiteLLM for LlamaIndex-based LLM calls. The primary goal is to enable detailed tracing, monitoring, and logging of LLM interactions under a new Opik project_name, particularly when using LiteLLM as an API proxy. This enhancement allows for better debugging, performance analysis, and observability when using LightRAG with LiteLLM and Opik.

**Motivation:**

As our application's reliance on Large Language Models (LLMs) grows, robust observability becomes crucial for maintaining system health, optimizing performance, and understanding usage patterns. Integrating Opik provides the following key benefits:

1.  **Improved Debugging:** Enables end-to-end tracing of requests through the LlamaIndex and LiteLLM layers, making it easier to identify and resolve issues or performance bottlenecks.
2.  **Comprehensive Performance Monitoring:** Allows for the collection of vital metrics such as LLM call latency, token usage, and error rates. This data can be filtered and analyzed within Opik using project names and tags.
3.  **Effective Cost Management:** Facilitates tracking of token consumption associated with specific requests or projects, leading to better cost control and optimization.
4.  **Deeper Usage Insights:** Provides a clearer understanding of how different components of the application or various projects are utilizing LLM capabilities.

These changes empower developers to seamlessly add observability to their LlamaIndex-based LLM workflows, especially when leveraging LiteLLM, by passing necessary Opik metadata.

**Changes Made:**

1.  **`lightrag/llm/llama_index_impl.py`:**
    *   Modified the `llama_index_complete_if_cache` function:
        *   The `**kwargs` parameter, which previously handled additional arguments, has been refined. A dedicated `chat_kwargs={}` parameter is now used to pass keyword arguments directly to the `model.achat()` method. This change ensures that vendor-specific parameters, such as LiteLLM's `litellm_params` for Opik metadata, are correctly propagated.
        *   The logic for retrieving `llm_instance` from `kwargs` was removed as `model` is now a direct parameter, simplifying the function.
    *   Updated the `llama_index_complete` function:
        *   Ensured that `**kwargs` (which may include `chat_kwargs` or other parameters intended for `llama_index_complete_if_cache`) are correctly passed down.

2.  **`examples/unofficial-sample/lightrag_llamaindex_litellm_demo.py`:**
    *   This existing demo file was updated to align with the changes in `llama_index_impl.py`.
    *   The `llm_model_func` now passes an empty `chat_kwargs={}` by default to `llama_index_complete_if_cache` if no specific chat arguments are needed, maintaining compatibility with the updated function signature. This file serves as a baseline example without Opik integration.

3.  **`examples/unofficial-sample/lightrag_llamaindex_litellm_opik_demo.py` (New File):**
    *   A new example script has been added to specifically demonstrate the integration of LightRAG with LlamaIndex, LiteLLM, and Opik for observability.
    *   The `llm_model_func` in this demo showcases how to construct the `chat_kwargs` dictionary.
    *   It includes `litellm_params` with a `metadata` field for Opik, containing `project_name` and `tags`. This provides a clear example of how to send observability data to Opik.
    *   The call to `llama_index_complete_if_cache` within `llm_model_func` passes these `chat_kwargs`, ensuring Opik metadata is included in the LiteLLM request.

These modifications provide a more robust and extensible way to pass parameters to the underlying LLM calls, specifically enabling the integration of observability tools like Opik.

Co-authored-by: Martin Perez-Guevara <8766915+MartinPerez@users.noreply.github.com>
Co-authored-by: Young Jin Kim <157011356+jidodata-ykim@users.noreply.github.com>
2025-05-20 17:47:05 +02:00
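Per the commit description, the Opik metadata rides inside `chat_kwargs` as LiteLLM parameters; a sketch of that dictionary and how it reaches the model call (values are illustrative):

```python
# Sketch of the pattern described above: Opik metadata routed through LiteLLM
# via chat_kwargs.
chat_kwargs = {
    "litellm_params": {
        "metadata": {
            "project_name": "lightrag-demo",  # illustrative values
            "tags": ["lightrag", "litellm"],
        }
    }
}

# Forwarded to model.achat(..., **chat_kwargs) inside llama_index_complete_if_cache:
# await llama_index_complete_if_cache(model, prompt, chat_kwargs=chat_kwargs)
```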
yangdx
29be2aac71 Remove tenacity from dynamic import 2025-05-14 11:30:48 +08:00
yangdx
ac2b6af97e Eliminate tenacity from dynamic import 2025-05-14 10:57:05 +08:00
yangdx
0e26cbebd0 Fix linting 2025-05-14 01:14:45 +08:00
yangdx
b836d02cac Optimize Ollama LLM driver 2025-05-14 01:13:03 +08:00
yangdx
56f82bdcd5 Ensure OpenAI connection is closed after streaming response finished 2025-05-12 17:37:28 +08:00
yangdx
c2938a71a4 Fix streaming problem for OpenAI 2025-05-09 15:54:54 +08:00
Daniel.y
3597239768
Merge pull request #1548 from maharjun/use_openai_context_manager
Use Openai Client Context Manager
2025-05-09 14:33:48 +08:00
Arjun Rao
b7eae4d7c0 Use the context manager for the openai client
This avoids resource cleanup issues (too many open files) when dealing with massively parallel calls to the OpenAI API, since RAII in Python is highly unreliable in such contexts.
2025-05-08 11:42:53 +10:00
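The fix amounts to scoping the client with `async with`, so the underlying HTTP transport is released deterministically even under heavy concurrency; the model name below is illustrative:

```python
# Sketch: scope the client with "async with" so sockets and file handles are
# released deterministically under massive parallelism.
from openai import AsyncOpenAI

async def complete(prompt: str) -> str:
    async with AsyncOpenAI() as client:  # closes the transport even on exceptions
        resp = await client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
```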
Balaji Munusamy
c3bc0eb53b made bedrock complete generic 2025-05-02 18:25:48 +02:00
yangdx
34cc8b6a51 Fix linting 2025-04-29 17:52:07 +08:00
yangdx
f58c8276bc fix: correct retry_if_exception_type usage and improve async iterator resource management
- Corrects the syntax of retry_if_exception_type decorators to ensure proper exception handling and retry behavior
- Implements proper resource cleanup for async iterators to prevent memory leaks and potential SIGSEGV errors
2025-04-29 17:43:27 +08:00
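The corrected tenacity usage: `retry_if_exception_type` must be called with the exception class (or a tuple of classes) rather than referenced bare. The exception classes below are stand-ins for the provider's real ones:

```python
# Sketch of corrected retry_if_exception_type usage with tenacity.
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

class RateLimitError(Exception): ...      # stand-in for the provider's exception
class APIConnectionError(Exception): ...  # stand-in for the provider's exception

@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=4, max=10),
    retry=retry_if_exception_type((RateLimitError, APIConnectionError)),
)
def call_llm() -> str:
    ...
```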
yangdx
99522a088d Fix ollama embedding func return data type bugs 2025-04-21 00:01:25 +08:00
yangdx
39540f3f8b Fix linting 2025-04-20 14:33:33 +08:00