LightRAG/lightrag/llm

Latest commit: 77221564b0 by yangdx (2025-11-17 12:54:32 +08:00)
"Add max_token_size parameter to embedding function decorators"

- Add max_token_size=8192 to all embed funcs
- Move siliconcloud to deprecated folder
- Import wrap_embedding_func_with_attrs
- Update EmbeddingFunc docstring
- Fix langfuse import type annotation
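The commit above attaches a `max_token_size` attribute to every embedding function via a decorator. A minimal sketch of how such a `wrap_embedding_func_with_attrs` decorator can work is shown below; the `EmbeddingFunc` dataclass and the `fake_embed` function here are illustrative stand-ins, not the actual LightRAG implementation:

```python
from dataclasses import dataclass
from typing import Any, Awaitable, Callable


@dataclass
class EmbeddingFunc:
    """Callable wrapper that carries embedding metadata as attributes."""

    embedding_dim: int
    max_token_size: int
    func: Callable[..., Awaitable[Any]]

    async def __call__(self, *args: Any, **kwargs: Any) -> Any:
        # Delegate to the wrapped async embedding function.
        return await self.func(*args, **kwargs)


def wrap_embedding_func_with_attrs(**kwargs: Any):
    """Decorator factory: wraps an async embed func in an EmbeddingFunc,
    so callers can read e.g. `embed.max_token_size` before chunking input."""

    def decorator(func: Callable[..., Awaitable[Any]]) -> EmbeddingFunc:
        return EmbeddingFunc(func=func, **kwargs)

    return decorator


# Hypothetical provider function, decorated the way the commit describes:
@wrap_embedding_func_with_attrs(embedding_dim=4, max_token_size=8192)
async def fake_embed(texts: list[str]) -> list[list[float]]:
    return [[0.0] * 4 for _ in texts]
```

With this pattern, code that batches texts for embedding can consult `fake_embed.max_token_size` (here 8192, matching the commit) without calling the provider, which is presumably why the parameter was added to all embed funcs at once.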
Name                 Last commit message                                                                    Date
deprecated/          Add max_token_size parameter to embedding function decorators                          2025-11-17 12:54:32 +08:00
__init__.py          Separated llms from the main llm.py file and fixed some deprication bugs               2025-01-25 00:11:00 +01:00
anthropic.py         Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers  2025-09-09 22:34:36 +08:00
azure_openai.py      Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers  2025-09-09 22:34:36 +08:00
bedrock.py           Add max_token_size parameter to embedding function decorators                          2025-11-17 12:54:32 +08:00
binding_options.py   Add Gemini embedding support                                                           2025-11-08 03:34:30 +08:00
gemini.py            Add max_token_size parameter to embedding function decorators                          2025-11-17 12:54:32 +08:00
hf.py                Add max_token_size parameter to embedding function decorators                          2025-11-17 12:54:32 +08:00
jina.py              Add max_token_size parameter to embedding function decorators                          2025-11-17 12:54:32 +08:00
llama_index_impl.py  Add max_token_size parameter to embedding function decorators                          2025-11-17 12:54:32 +08:00
lmdeploy.py          Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers  2025-09-09 22:34:36 +08:00
lollms.py            Add max_token_size parameter to embedding function decorators                          2025-11-17 12:54:32 +08:00
nvidia_openai.py     Add max_token_size parameter to embedding function decorators                          2025-11-17 12:54:32 +08:00
ollama.py            Add max_token_size parameter to embedding function decorators                          2025-11-17 12:54:32 +08:00
openai.py            Add max_token_size parameter to embedding function decorators                          2025-11-17 12:54:32 +08:00
zhipu.py             Add Deepseek Style Chain of Thought (CoT) Support for OpenAI Compatible LLM providers  2025-09-09 22:34:36 +08:00