Commit graph

2423 commits

Author SHA1 Message Date
yangdx
070a5db801 Update README 2025-05-23 12:50:48 +08:00
yangdx
ada2443653 Optimize PostgreSQL default settings 2025-05-22 17:09:26 +08:00
yangdx
2ee809cf58 Increase PG connection pool to 20 2025-05-22 16:37:18 +08:00
yangdx
c300f2fc91 Merge branch 'jidodata-ykim/main' 2025-05-22 10:48:24 +08:00
yangdx
3b9c28fae9 Fix linting 2025-05-22 10:46:03 +08:00
yangdx
e14c69ce4a Merge branch 'belabon25/main' 2025-05-22 10:06:52 +08:00
yangdx
a6046bf827 Fix linting 2025-05-22 10:06:09 +08:00
yangdx
bb27bb4309 Fix linting 2025-05-22 09:59:53 +08:00
Benjamin L
1b6ddcaf5b change validator method names 2025-05-21 16:06:35 +02:00
Benjamin L
62b536ea6f Add file_source(s) as an optional attribute to text(s) requests 2025-05-21 15:10:27 +02:00
yumpyy
4d806a1263 feat(api): update endpoint to support new parameter
Update the API server to support the new parameter from the core library (PR #1032).
2025-05-21 15:50:05 +05:30
yangdx
702e87492c Bump api version to 0171 2025-05-21 16:52:33 +08:00
yangdx
85bed30764 Fix linting 2025-05-21 16:46:36 +08:00
yangdx
45cebc71c5 Refactor: Optimize static file caching for WebUI
- Renamed `NoCacheStaticFiles` to `SmartStaticFiles`.
- Implemented long-term caching (1 year, immutable) for versioned assets in `/webui/assets/`.
- Ensured `index.html` remains un-cached.
- Set correct `Content-Type` for JS and CSS files.
2025-05-21 16:46:18 +08:00
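A hedged sketch of the caching rules this commit describes, as a Starlette `StaticFiles` subclass; it illustrates the stated behavior (immutable caching for versioned assets, no-cache for `index.html`, explicit MIME types), not the repository's exact code:

```python
# A sketch of the caching rules described above, assuming a Starlette/FastAPI
# app; this illustrates the stated behavior, not the repository's exact code.
from starlette.staticfiles import StaticFiles
from starlette.types import Scope


class SmartStaticFiles(StaticFiles):
    async def get_response(self, path: str, scope: Scope):
        response = await super().get_response(path, scope)
        if response.status_code != 200:
            return response
        if path.startswith("assets/"):
            # Versioned (hashed) assets never change, so cache them for a year.
            response.headers["Cache-Control"] = "public, max-age=31536000, immutable"
        else:
            # index.html (and anything unversioned) must always be revalidated.
            response.headers["Cache-Control"] = "no-cache"
        if path.endswith(".js"):
            response.headers["Content-Type"] = "application/javascript"
        elif path.endswith(".css"):
            response.headers["Content-Type"] = "text/css"
        return response
```

Mounting would be unchanged, e.g. `app.mount("/webui", SmartStaticFiles(directory="webui", html=True), name="webui")`.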
yangdx
0961a21722 Set correct Content-Type header for JavaScript files
- Fix missing Content-Type header for .js files
- Ensure proper MIME type handling
- Improve browser compatibility
2025-05-21 16:17:42 +08:00
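The MIME-type part of this fix can be reproduced with the standard library alone; a minimal sketch, where the exact integration point in the server is an assumption:

```python
# Register MIME types explicitly at startup; some platforms ship
# incomplete MIME tables, which is one way .js files lose their type.
import mimetypes

mimetypes.add_type("application/javascript", ".js")
mimetypes.add_type("text/css", ".css")


def guess_content_type(path: str) -> str:
    # Fall back to a generic binary type when the extension is unknown.
    content_type, _ = mimetypes.guess_type(path)
    return content_type or "application/octet-stream"
```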
Martin Perez-Guevara
3d418d95c5 feat: Integrate Opik for Enhanced Observability in LlamaIndex LLM Interactions
This pull request demonstrates how to create a new Opik project when using LiteLLM for LlamaIndex-based LLM calls. The primary goal is to enable detailed tracing, monitoring, and logging of LLM interactions under a new Opik project_name, particularly when using LiteLLM as an API proxy. This enhancement allows for better debugging, performance analysis, and observability when using LightRAG with LiteLLM and Opik.

**Motivation:**

As our application's reliance on Large Language Models (LLMs) grows, robust observability becomes crucial for maintaining system health, optimizing performance, and understanding usage patterns. Integrating Opik provides the following key benefits:

1.  **Improved Debugging:** Enables end-to-end tracing of requests through the LlamaIndex and LiteLLM layers, making it easier to identify and resolve issues or performance bottlenecks.
2.  **Comprehensive Performance Monitoring:** Allows for the collection of vital metrics such as LLM call latency, token usage, and error rates. This data can be filtered and analyzed within Opik using project names and tags.
3.  **Effective Cost Management:** Facilitates tracking of token consumption associated with specific requests or projects, leading to better cost control and optimization.
4.  **Deeper Usage Insights:** Provides a clearer understanding of how different components of the application or various projects are utilizing LLM capabilities.

These changes empower developers to seamlessly add observability to their LlamaIndex-based LLM workflows, especially when leveraging LiteLLM, by passing necessary Opik metadata.

**Changes Made:**

1.  **`lightrag/llm/llama_index_impl.py`:**
    *   Modified the `llama_index_complete_if_cache` function:
        *   The `**kwargs` parameter, which previously handled additional arguments, has been refined. A dedicated `chat_kwargs={}` parameter is now used to pass keyword arguments directly to the `model.achat()` method. This change ensures that vendor-specific parameters, such as LiteLLM's `litellm_params` for Opik metadata, are correctly propagated.
        *   The logic for retrieving `llm_instance` from `kwargs` was removed as `model` is now a direct parameter, simplifying the function.
    *   Updated the `llama_index_complete` function:
        *   Ensured that `**kwargs` (which may include `chat_kwargs` or other parameters intended for `llama_index_complete_if_cache`) are correctly passed down.

2.  **`examples/unofficial-sample/lightrag_llamaindex_litellm_demo.py`:**
    *   This existing demo file was updated to align with the changes in `llama_index_impl.py`.
    *   The `llm_model_func` now passes an empty `chat_kwargs={}` by default to `llama_index_complete_if_cache` if no specific chat arguments are needed, maintaining compatibility with the updated function signature. This file serves as a baseline example without Opik integration.

3.  **`examples/unofficial-sample/lightrag_llamaindex_litellm_opik_demo.py` (New File):**
    *   A new example script has been added to specifically demonstrate the integration of LightRAG with LlamaIndex, LiteLLM, and Opik for observability.
    *   The `llm_model_func` in this demo showcases how to construct the `chat_kwargs` dictionary.
    *   It includes `litellm_params` with a `metadata` field for Opik, containing `project_name` and `tags`. This provides a clear example of how to send observability data to Opik.
    *   The call to `llama_index_complete_if_cache` within `llm_model_func` passes these `chat_kwargs`, ensuring Opik metadata is included in the LiteLLM request.

These modifications provide a more robust and extensible way to pass parameters to the underlying LLM calls, specifically enabling the integration of observability tools like Opik.

Co-authored-by: Martin Perez-Guevara <8766915+MartinPerez@users.noreply.github.com>
Co-authored-by: Young Jin Kim <157011356+jidodata-ykim@users.noreply.github.com>
2025-05-20 17:47:05 +02:00
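A hedged sketch of the `chat_kwargs` pattern this commit describes; the import path follows the commit text, while the LLM construction, `project_name`, and `tags` values are illustrative assumptions:

```python
from lightrag.llm.llama_index_impl import llama_index_complete_if_cache
from llama_index.llms.litellm import LiteLLM  # LiteLLM-backed LlamaIndex LLM

llm_instance = LiteLLM(model="openai/gpt-4o-mini")  # placeholder model name


async def llm_model_func(prompt, system_prompt=None, history_messages=[], **kwargs):
    chat_kwargs = {
        # LiteLLM forwards this metadata to Opik for tracing.
        "litellm_params": {
            "metadata": {
                "project_name": "lightrag-opik-demo",  # assumed project name
                "tags": ["lightrag", "litellm"],       # assumed tags
            }
        }
    }
    return await llama_index_complete_if_cache(
        llm_instance,
        prompt,
        system_prompt=system_prompt,
        history_messages=history_messages,
        chat_kwargs=chat_kwargs,  # propagated to model.achat() per the commit
    )
```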
yangdx
b4615247c9 Bump core version to 1.3.8 2025-05-18 07:20:00 +08:00
yangdx
38b862e993 Remove unused functions 2025-05-18 07:16:52 +08:00
sa9arr
36b606d0db Fix: Correct GraphML to JSON mapping in xml_to_json function 2025-05-17 19:32:25 +05:45
yangdx
c41f8d9ed3 Update README 2025-05-16 09:05:50 +08:00
yangdx
e5b0807298 Update README 2025-05-15 17:36:45 +08:00
yangdx
b9c25dfeb0 Update README 2025-05-14 14:42:52 +08:00
yangdx
29be2aac71 Remove tenacity from dynamic import 2025-05-14 11:30:48 +08:00
yangdx
db125c3764 Update README 2025-05-14 11:29:46 +08:00
yangdx
ac2b6af97e Eliminate tenacity from dynamic import 2025-05-14 10:57:05 +08:00
yangdx
0e26cbebd0 Fix linting 2025-05-14 01:14:45 +08:00
yangdx
b836d02cac Optimize Ollama LLM driver 2025-05-14 01:13:03 +08:00
yangdx
bb7b360269 Fix linting 2025-05-13 21:35:04 +08:00
yangdx
55e28f45e4 Update logo 2025-05-13 20:35:35 +08:00
yangdx
2845e268e4 Ensure priority_limit_async_func_call decorator receives a callable 2025-05-13 02:00:01 +08:00
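As a minimal sketch, the guarantee in this commit amounts to validating the decorated object before wrapping it; the decorator name matches LightRAG's, but the body is an assumed shape (the real decorator also implements priority-based concurrency limiting, omitted here):

```python
from functools import wraps


def priority_limit_async_func_call(max_size: int = 16):
    def final_decro(func):
        # Fail fast with a clear error instead of a confusing one at call time.
        if not callable(func):
            raise TypeError(
                f"priority_limit_async_func_call expects a callable, "
                f"got {type(func).__name__}"
            )

        @wraps(func)
        async def wait_func(*args, **kwargs):
            return await func(*args, **kwargs)

        return wait_func

    return final_decro
```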
yangdx
d0029b5b53 Update favicon.png 2025-05-12 20:02:03 +08:00
yangdx
85ac94bae0 Update icon.svg 2025-05-12 19:47:35 +08:00
yangdx
00b78b91d6 Change website icon 2025-05-12 19:14:02 +08:00
yangdx
cbc8796bb0 Update logo from png to svg 2025-05-12 18:49:58 +08:00
yangdx
dfc44ec4be Update logo.png 2025-05-12 18:21:52 +08:00
yangdx
56f82bdcd5 Ensure OpenAI connection is closed after the streaming response finishes 2025-05-12 17:37:28 +08:00
zrguo
cf4bb148fb fix linting 2025-05-12 16:28:36 +08:00
zrguo
61a21f8d5d Merge pull request #1325 from venkateshpabbati/main
security fix
2025-05-12 16:25:11 +08:00
yangdx
c36d499a43 Update webui assets 2025-05-11 12:44:50 +08:00
yangdx
d5b9318553 Bump api version to 0170 2025-05-11 11:51:53 +08:00
yangdx
9ec9579a95 Fix linting 2025-05-11 11:24:52 +08:00
yangdx
68653f853a fix: handle missing 'weight' attribute in edge data to prevent KeyError
- Add validation in the _find_most_related_edges_from_entities and _get_edge_data functions during edge data construction
- Add warning logs when the 'weight' attribute is missing and set a default value of 0.0
2025-05-11 11:16:32 +08:00
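A minimal sketch of the guard this commit describes, assuming edge data is a plain dict; the helper name is illustrative:

```python
import logging

logger = logging.getLogger(__name__)


def safe_edge_weight(edge_data: dict) -> float:
    # Warn rather than raise, and fall back to 0.0 as the commit describes.
    if "weight" not in edge_data:
        logger.warning("Edge data is missing the 'weight' attribute; defaulting to 0.0")
    return float(edge_data.get("weight", 0.0))
```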
yangdx
4d57370c94 Refactor: Move get_env_value from api.config to utils
Relocates the `get_env_value` utility function from `lightrag.api.config` to `lightrag.utils` to decouple the LightRAG core from the API server.
2025-05-10 08:58:18 +08:00
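For context, a typical shape for such a helper is sketched below; the real function in `lightrag.utils` may differ in signature and type handling:

```python
import os


def get_env_value(env_key: str, default, value_type=str):
    value = os.environ.get(env_key)
    if value is None:
        return default
    # Booleans need special handling: bool("false") is truthy in Python.
    if value_type is bool:
        return value.strip().lower() in ("true", "1", "yes")
    try:
        return value_type(value)
    except (TypeError, ValueError):
        return default
```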
yangdx
c2938a71a4 Fix streaming problem for OpenAI 2025-05-09 15:54:54 +08:00
Daniel.y
3597239768 Merge pull request #1548 from maharjun/use_openai_context_manager
Use Openai Client Context Manager
2025-05-09 14:33:48 +08:00
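A hedged sketch of the context-manager usage the PR title refers to, using the openai v1 async client; the model name and prompt are placeholders (run with e.g. `asyncio.run(stream_chat("hi"))`):

```python
from openai import AsyncOpenAI


async def stream_chat(prompt: str) -> str:
    # Entering the client as a context manager guarantees the underlying
    # HTTP connection is closed once streaming (or an error) finishes.
    async with AsyncOpenAI() as client:
        stream = await client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            stream=True,
        )
        parts = []
        async for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                parts.append(chunk.choices[0].delta.content)
        return "".join(parts)
```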
yangdx
ebdc7cea49 Merge branch 'allow_max_connection_config' into pg-max-connection 2025-05-09 14:16:53 +08:00
yangdx
8145b436c8 Fix linting 2025-05-09 11:52:10 +08:00
yangdx
0751382e65 Update README.md 2025-05-09 11:51:22 +08:00
yangdx
fb4f12ba8e Add user prompt support for Ollama API 2025-05-09 11:37:43 +08:00
Arjun Rao
6ebd76d5da bugfix: convert config val to int 2025-05-09 04:22:46 +10:00