LightRAG/docs/diff_hku/wave_2.csv

commit,author_date,author,subject,category
ec40b17e,2025-10-08,Yasiru Rangana,feat: Add token tracking support to openai_embed function,embedding
0f15fdc3,2025-10-09,Daniel.y,Merge pull request #2181 from yrangana/feat/openai-embedding-token-tracking,embedding
6d1ae404,2025-10-15,yangdx,Add offline Docker build support with embedded models and cache,embedding
6a29b5da,2025-10-23,yangdx,Update Docker deployment comments for LLM and embedding hosts,embedding
7b8223da,2025-11-03,yangdx,Update env.example with host/endpoint clarifications for LLM/embedding,embedding
9c057060,2025-11-05,yangdx,Add separate endpoint configuration for LLM and embeddings in evaluation,embedding
01b07b2b,2025-11-07,yangdx,Refactor Jina embedding dimension by changing param to optional with default,embedding
33a1482f,2025-11-07,yangdx,Add optional embedding dimension parameter control via env var,embedding
9cee5a63,2025-11-07,yangdx,Merge branch 'main' into apply-dim-to-embedding-call,embedding
ce28f30c,2025-11-07,yangdx,Add embedding_dim parameter support to embedding functions,embedding
d8a6355e,2025-11-07,yangdx,Merge branch 'main' into apply-dim-to-embedding-call,embedding
d94aae9c,2025-11-07,Yasiru Rangana,Add dimensions parameter support to openai_embed(),embedding
ffeeae42,2025-11-07,yangdx,refactor: simplify jina embedding dimension handling,embedding
03cc6262,2025-11-08,yangdx,Prohibit direct access to internal functions of EmbeddingFunc.,embedding
0b2a15c4,2025-11-08,yangdx,Centralize embedding_send_dim config through args instead of env var,embedding
29a349f2,2025-11-08,Daniel.y,Merge pull request #2329 from danielaskdd/gemini-embedding,embedding
a624a950,2025-11-08,yangdx,Add Gemini to APIs requiring embedding dimension parameter,embedding
de4ed736,2025-11-08,yangdx,Add Gemini embedding support,embedding
f4492d48,2025-11-08,Daniel.y,Merge pull request #2328 from HKUDS/apply-dim-to-embedding-call,embedding
05852e1a,2025-11-14,yangdx,Add max_token_size parameter to embedding function decorators,embedding
14a6c24e,2025-11-14,yangdx,Add configurable embedding token limit with validation,embedding
2fb57e76,2025-11-14,yangdx,Fix embedding token limit initialization order,embedding
39b49e92,2025-11-14,yangdx,Convert embedding_token_limit from property to field with __post_init__,embedding
5dec4dea,2025-11-14,yangdx,Improve embedding config priority and add debug logging,embedding
6b2af2b5,2025-11-14,yangdx,Refactor embedding function creation with proper attribute inheritance,embedding
77221564,2025-11-14,yangdx,Add max_token_size parameter to embedding function decorators,embedding
963a0a5d,2025-11-14,yangdx,Refactor embedding function creation with proper attribute inheritance,embedding
ab4d7ac2,2025-11-14,yangdx,Add configurable embedding token limit with validation,embedding
de4412dd,2025-11-14,yangdx,Fix embedding token limit initialization order,embedding
e5addf4d,2025-11-14,yangdx,Improve embedding config priority and add debug logging,embedding
f0254773,2025-11-14,yangdx,Convert embedding_token_limit from property to field with __post_init__,embedding
3b76eea2,2025-11-15,Daniel.y,Merge pull request #2359 from danielaskdd/embedding-limit,embedding
b5589ce4,2025-11-15,yangdx,Merge branch 'main' into embedding-limit,embedding
c13f9116,2025-11-17,yangdx,Add embedding dimension validation to EmbeddingFunc wrapper,embedding
46ce6d9a,2025-11-20,yangdx,Fix Azure OpenAI embedding model parameter fallback,embedding
0c4cba38,2025-11-21,yangdx,Fix double decoration in azure_openai_embed and document decorator usage,embedding
7b762110,2025-11-22,yangdx,Add fallback to AZURE_OPENAI_API_VERSION for embedding API version,embedding
1b02684e,2025-11-28,Daniel.y,Merge pull request #2432 from danielaskdd/embedding-example,embedding
1d07ff7f,2025-11-28,yangdx,Update OpenAI and Ollama embedding func examples in README,embedding
4ab4a7ac,2025-11-28,yangdx,Allow embedding models to use provider defaults when unspecified,embedding
56e0365c,2025-11-28,yangdx,Add configurable model parameter to jina_embed function,embedding
6e2946e7,2025-11-28,yangdx,Add max_token_size parameter to azure_openai_embed wrapper,embedding
97a9dfca,2025-11-28,yangdx,Add important note about embedding function wrapping restrictions,embedding
b6705449,2025-11-28,Daniel.y,Merge pull request #2433 from danielaskdd/fix-jina-embedding,embedding
ea8d55ab,2025-11-28,yangdx,Add documentation for embedding provider configuration rules,embedding
37e8898c,2025-10-01,yangdx,Simplify reference formatting in LLM context generation,llm_cloud
83d99e14,2025-10-01,yangdx,fix(OllamaAPI): Add validation to ensure last message is from user role,llm_cloud
0b3d3150,2025-10-20,Humphry,"extended to use gemini, switched to use gemini-flash-latest",llm_cloud
74694214,2025-10-20,dependabot[bot],"Update openai requirement from <2.0.0,>=1.0.0 to >=1.0.0,<3.0.0",llm_cloud
175ef459,2025-10-21,Daniel.y,Merge pull request #2238 from HKUDS/dependabot/pip/openai-gte-1.0.0-and-lt-3.0.0,llm_cloud
162370b6,2025-10-22,yangdx,Add optional LLM cache deletion when deleting documents,llm_cloud
aa916f28,2025-11-01,anouarbm,"docs: add generic test_dataset.json for evaluation examples Test cases with generic examples about: - LightRAG framework features and capabilities - RAG system architecture and components - Vector database support (ChromaDB, Neo4j, Milvus, etc.) - LLM provider integrations (OpenAI, Anthropic, Ollama, etc.) - RAG evaluation metrics explanation - Deployment options (Docker, FastAPI, direct integration) - Knowledge graph-based retrieval concepts",llm_cloud
994a82dc,2025-11-05,yangdx,Suppress token usage warnings for custom OpenAI-compatible endpoints,llm_cloud
3cb4eae4,2025-11-07,yangdx,Add Chain of Thought support to Gemini LLM integration,llm_cloud
6686edfd,2025-11-07,yangdx,"Update Gemini LLM options: add seed and thinking config, remove MIME type",llm_cloud
73284623,2025-11-07,Daniel.y,Merge pull request #2326 from danielaskdd/gemini-cot,llm_cloud
8c275553,2025-11-07,yangdx,Fix Gemini response parsing to avoid warnings from non-text parts,llm_cloud
924c8cb8,2025-11-07,yangdx,Merge branch 'main' into gemini-cot,llm_cloud
fc40a369,2025-11-07,yangdx,Add timeout support to Gemini LLM and improve parameter handling,llm_cloud
3d9de5ed,2025-11-08,yangdx,feat: improve Gemini client error handling and retry logic,llm_cloud
55274dde,2025-11-08,yangdx,Add LLM cache migration tool for KV storage backends,llm_cloud
57ee7d5a,2025-11-08,yangdx,Merge branch 'main' into llm-cache-migrate,llm_cloud
6b9f13c7,2025-11-08,yangdx,Enhance LLM cache migration tool with streaming and improved UX,llm_cloud
6fc54d36,2025-11-08,yangdx,Move LLM cache migration tool to lightrag.tools module,llm_cloud
85bb98b3,2025-11-08,Daniel.y,Merge pull request #2331 from danielaskdd/gemini-retry,llm_cloud
987bc09c,2025-11-08,yangdx,Update LLM cache migration docs and improve UX prompts,llm_cloud
d0d31e92,2025-11-08,yangdx,Improve LLM cache migration tool configuration and messaging,llm_cloud
f83ea339,2025-11-08,yangdx,Add section header comment for Gemini binding options,llm_cloud
1485cb82,2025-11-09,yangdx,Add LLM query cache cleanup tool for KV storage backends,llm_cloud
3110ca51,2025-11-09,Daniel.y,Merge pull request #2335 from danielaskdd/llm-cache-cleanup,llm_cloud
754d2ad2,2025-11-09,yangdx,Add documentation for LLM cache migration between storage types,llm_cloud
88ab73f6,2025-11-09,yangdx,HotFix: Restore streaming response in OpenAI LLM,llm_cloud
8adf3180,2025-11-09,Daniel.y,Merge pull request #2330 from danielaskdd/llm-cache-migrate,llm_cloud
18893015,2025-11-13,yangdx,Merge branch 'feat/add_cloud_ollama_support',llm_cloud
680e36c6,2025-11-14,yangdx,Improve Bedrock error handling with retry logic and custom exceptions,llm_cloud
f5b48587,2025-11-14,yangdx,Improve Bedrock error handling with retry logic and custom exceptions,llm_cloud
95e1fb16,2025-11-17,yangdx,Remove final_namespace attribute for in-memory storage and use namespace in clean_llm_query_cache.py,llm_cloud
a990c1d4,2025-11-17,BukeLy,fix: Correct Mock LLM output format in E2E test,llm_cloud
021b637d,2025-11-21,Daniel.y,Merge pull request #2403 from danielaskdd/azure-cot-handling,llm_cloud
02fdceb9,2025-11-21,yangdx,Update OpenAI client to use stable API and bump minimum version to 2.0.0,llm_cloud
1e477e95,2025-11-21,yangdx,Add lightrag-clean-llmqc console script entry point,llm_cloud
45f4f823,2025-11-21,yangdx,Refactor Azure OpenAI client creation to support client_configs merging,llm_cloud
8777895e,2025-11-21,Daniel.y,Merge pull request #2401 from danielaskdd/fix-openai-keyword-extraction,llm_cloud
9f69c5bf,2025-11-21,yangdx,feat: Support structured output `parsed` from OpenAI,llm_cloud
ac9f2574,2025-11-21,yangdx,Improve Azure OpenAI wrapper functions with full parameter support,llm_cloud
b709f8f8,2025-11-21,yangdx,Consolidate Azure OpenAI implementation into main OpenAI module,llm_cloud
fafa1791,2025-11-21,yangdx,Fix Azure OpenAI model parameter to use deployment name consistently,llm_cloud
ffd8da51,2025-11-21,yangdx,Improve Azure OpenAI compatibility and error handling,llm_cloud
49fb11e2,2025-11-22,yangdx,Update Azure OpenAI configuration examples,llm_cloud
5f53de88,2025-11-22,yangdx,Fix Azure configuration examples and correct typos in env.example,llm_cloud
a898f054,2025-11-25,palanisd,Merge branch 'HKUDS:main' into cohere-rerank,rerank
8e50eef5,2025-12-02,yangdx,Merge branch 'main' into cohere-rerank,rerank
f0d67f16,2025-12-03,yangdx,Merge branch 'cohere-rerank',rerank