LightRAG/docs/diff_hku/wave_2.csv

commit,auth_date,author,subject,category
ec40b17e,2025-10-08,YasiruRangana,feat: Add token tracking support to openai_embed function,embedding
0f15fdc3,2025-10-09,Daniel.y,Merge pull request #2181 from yrangana/feat/openai-embedding-token-tracking,embedding
6d1ae404,2025-10-15,yangdx,Add offline Docker build support with embedded models and cache,embedding
6a29b5da,2025-10-23,yangdx,Update Docker deployment comments for LLM and embedding hosts,embedding
7b8223da,2025-11-03,yangdx,Update env.example with host/endpoint clarifications for LLM/embedding,embedding
9c057060,2025-11-05,yangdx,Add separate endpoint configuration for LLM and embeddings in evaluation,embedding
01b07b2b,2025-11-07,yangdx,Refactor Jina embedding dimension by changing param to optional with default,embedding
33a1482f,2025-11-07,yangdx,Add optional embedding dimension parameter control via env var,embedding
9cee5a63,2025-11-07,yangdx,Merge branch 'main' into apply-dim-to-embedding-call,embedding
ce28f30c,2025-11-07,yangdx,Add embedding_dim parameter support to embedding functions,embedding
d8a6355e,2025-11-07,yangdx,Merge branch 'main' into apply-dim-to-embedding-call,embedding
d94aae9c,2025-11-07,YasiruRangana,Add dimensions parameter support to openai_embed(),embedding
ffeeae42,2025-11-07,yangdx,refactor: simplify jina embedding dimension handling,embedding
03cc6262,2025-11-08,yangdx,Prohibit direct access to internal functions of EmbeddingFunc.,embedding
0b2a15c4,2025-11-08,yangdx,Centralize embedding_send_dim config through args instead of env var,embedding
29a349f2,2025-11-08,Daniel.y,Merge pull request #2329 from danielaskdd/gemini-embedding,embedding
a624a950,2025-11-08,yangdx,Add Gemini to APIs requiring embedding dimension parameter,embedding
de4ed736,2025-11-08,yangdx,Add Gemini embedding support,embedding
f4492d48,2025-11-08,Daniel.y,Merge pull request #2328 from HKUDS/apply-dim-to-embedding-call,embedding
05852e1a,2025-11-14,yangdx,Add max_token_size parameter to embedding function decorators,embedding
14a6c24e,2025-11-14,yangdx,Add configurable embedding token limit with validation,embedding
2fb57e76,2025-11-14,yangdx,Fix embedding token limit initialization order,embedding
39b49e92,2025-11-14,yangdx,Convert embedding_token_limit from property to field with __post_init__,embedding
5dec4dea,2025-11-14,yangdx,Improve embedding config priority and add debug logging,embedding
6b2af2b5,2025-11-14,yangdx,Refactor embedding function creation with proper attribute inheritance,embedding
77221564,2025-11-14,yangdx,Add max_token_size parameter to embedding function decorators,embedding
963a0a5d,2025-11-14,yangdx,Refactor embedding function creation with proper attribute inheritance,embedding
ab4d7ac2,2025-11-14,yangdx,Add configurable embedding token limit with validation,embedding
de4412dd,2025-11-14,yangdx,Fix embedding token limit initialization order,embedding
e5addf4d,2025-11-14,yangdx,Improve embedding config priority and add debug logging,embedding
f0254773,2025-11-14,yangdx,Convert embedding_token_limit from property to field with __post_init__,embedding
3b76eea2,2025-11-15,Daniel.y,Merge pull request #2359 from danielaskdd/embedding-limit,embedding
b5589ce4,2025-11-15,yangdx,Merge branch 'main' into embedding-limit,embedding
c13f9116,2025-11-17,yangdx,Add embedding dimension validation to EmbeddingFunc wrapper,embedding
46ce6d9a,2025-11-20,yangdx,Fix Azure OpenAI embedding model parameter fallback,embedding
0c4cba38,2025-11-21,yangdx,Fix double decoration in azure_openai_embed and document decorator usage,embedding
7b762110,2025-11-22,yangdx,Add fallback to AZURE_OPENAI_API_VERSION for embedding API version,embedding
1b02684e,2025-11-28,Daniel.y,Merge pull request #2432 from danielaskdd/embedding-example,embedding
1d07ff7f,2025-11-28,yangdx,Update OpenAI and Ollama embedding func examples in README,embedding
4ab4a7ac,2025-11-28,yangdx,Allow embedding models to use provider defaults when unspecified,embedding
56e0365c,2025-11-28,yangdx,Add configurable model parameter to jina_embed function,embedding
6e2946e7,2025-11-28,yangdx,Add max_token_size parameter to azure_openai_embed wrapper,embedding
97a9dfca,2025-11-28,yangdx,Add important note about embedding function wrapping restrictions,embedding
b6705449,2025-11-28,Daniel.y,Merge pull request #2433 from danielaskdd/fix-jina-embedding,embedding
ea8d55ab,2025-11-28,yangdx,Add documentation for embedding provider configuration rules,embedding
37e8898c,2025-10-01,yangdx,Simplify reference formatting in LLM context generation,llm_cloud
83d99e14,2025-10-01,yangdx,fix(OllamaAPI): Add validation to ensure last message is from user role,llm_cloud
0b3d3150,2025-10-20,Humphry,"extended to use gemini, sswitched to use gemini-flash-latest",llm_cloud
74694214,2025-10-20,dependabot[bot],"Update openai requirement from <2.0.0,>=1.0.0 to >=1.0.0,<3.0.0",llm_cloud
175ef459,2025-10-21,Daniel.y,Merge pull request #2238 from HKUDS/dependabot/pip/openai-gte-1.0.0-and-lt-3.0.0,llm_cloud
162370b6,2025-10-22,yangdx,Add optional LLM cache deletion when deleting documents,llm_cloud
aa916f28,2025-11-01,anouarbm,"docs: add generic test_dataset.json for evaluation examples Test cases with generic examples about: - LightRAG framework features and capabilities - RAG system architecture and components - Vector database support (ChromaDB, Neo4j, Milvus, etc.) - LLM provider integrations (OpenAI, Anthropic, Ollama, etc.) - RAG evaluation metrics explanation - Deployment options (Docker, FastAPI, direct integration) - Knowledge graph-based retrieval concepts",llm_cloud
994a82dc,2025-11-05,yangdx,Suppress token usage warnings for custom OpenAI-compatible endpoints,llm_cloud
3cb4eae4,2025-11-07,yangdx,Add Chain of Thought support to Gemini LLM integration,llm_cloud
6686edfd,2025-11-07,yangdx,"Update Gemini LLM options: add seed and thinking config, remove MIME type",llm_cloud
73284623,2025-11-07,Daniel.y,Merge pull request #2326 from danielaskdd/gemini-cot,llm_cloud
8c275553,2025-11-07,yangdx,Fix Gemini response parsing to avoid warnings from non-text parts,llm_cloud
924c8cb8,2025-11-07,yangdx,Merge branch 'main' into gemini-cot,llm_cloud
fc40a369,2025-11-07,yangdx,Add timeout support to Gemini LLM and improve parameter handling,llm_cloud
3d9de5ed,2025-11-08,yangdx,feat: improve Gemini client error handling and retry logic,llm_cloud
55274dde,2025-11-08,yangdx,Add LLM cache migration tool for KV storage backends,llm_cloud
57ee7d5a,2025-11-08,yangdx,Merge branch 'main' into llm-cache-migrate,llm_cloud
6b9f13c7,2025-11-08,yangdx,Enhance LLM cache migration tool with streaming and improved UX,llm_cloud
6fc54d36,2025-11-08,yangdx,Move LLM cache migration tool to lightrag.tools module,llm_cloud
85bb98b3,2025-11-08,Daniel.y,Merge pull request #2331 from danielaskdd/gemini-retry,llm_cloud
987bc09c,2025-11-08,yangdx,Update LLM cache migration docs and improve UX prompts,llm_cloud
d0d31e92,2025-11-08,yangdx,Improve LLM cache migration tool configuration and messaging,llm_cloud
f83ea339,2025-11-08,yangdx,Add section header comment for Gemini binding options,llm_cloud
1485cb82,2025-11-09,yangdx,Add LLM query cache cleanup tool for KV storage backends,llm_cloud
3110ca51,2025-11-09,Daniel.y,Merge pull request #2335 from danielaskdd/llm-cache-cleanup,llm_cloud
754d2ad2,2025-11-09,yangdx,Add documentation for LLM cache migration between storage types,llm_cloud
88ab73f6,2025-11-09,yangdx,HotFix: Restore streaming response in OpenAI LLM,llm_cloud
8adf3180,2025-11-09,Daniel.y,Merge pull request #2330 from danielaskdd/llm-cache-migrate,llm_cloud
18893015,2025-11-13,yangdx,Merge branch 'feat/add_cloud_ollama_support',llm_cloud
680e36c6,2025-11-14,yangdx,Improve Bedrock error handling with retry logic and custom exceptions,llm_cloud
f5b48587,2025-11-14,yangdx,Improve Bedrock error handling with retry logic and custom exceptions,llm_cloud
95e1fb16,2025-11-17,yangdx,Remove final_namespace attribute for in-memory storage and use namespace in clean_llm_query_cache.py,llm_cloud
a990c1d4,2025-11-17,BukeLy,fix: Correct Mock LLM output format in E2E test,llm_cloud
021b637d,2025-11-21,Daniel.y,Merge pull request #2403 from danielaskdd/azure-cot-handling,llm_cloud
02fdceb9,2025-11-21,yangdx,Update OpenAI client to use stable API and bump minimum version to 2.0.0,llm_cloud
1e477e95,2025-11-21,yangdx,Add lightrag-clean-llmqc console script entry point,llm_cloud
45f4f823,2025-11-21,yangdx,Refactor Azure OpenAI client creation to support client_configs merging,llm_cloud
8777895e,2025-11-21,Daniel.y,Merge pull request #2401 from danielaskdd/fix-openai-keyword-extraction,llm_cloud
9f69c5bf,2025-11-21,yangdx,feat: Support structured output `parsed` from OpenAI,llm_cloud
ac9f2574,2025-11-21,yangdx,Improve Azure OpenAI wrapper functions with full parameter support,llm_cloud
b709f8f8,2025-11-21,yangdx,Consolidate Azure OpenAI implementation into main OpenAI module,llm_cloud
fafa1791,2025-11-21,yangdx,Fix Azure OpenAI model parameter to use deployment name consistently,llm_cloud
ffd8da51,2025-11-21,yangdx,Improve Azure OpenAI compatibility and error handling,llm_cloud
49fb11e2,2025-11-22,yangdx,Update Azure OpenAI configuration examples,llm_cloud
5f53de88,2025-11-22,yangdx,Fix Azure configuration examples and correct typos in env.example,llm_cloud
a898f054,2025-11-25,palanisd,Merge branch 'HKUDS:main' into cohere-rerank,rerank
8e50eef5,2025-12-02,yangdx,Merge branch 'main' into cohere-rerank,rerank
f0d67f16,2025-12-03,yangdx,Merge branch 'cohere-rerank',rerank