LightRAG/lightrag/llm
Latest commit: 0b94117848 by Pankaj Kaushal, 2025-02-20 10:23:01 +01:00
Add LlamaIndex LLM implementation module
- Implemented the LlamaIndex interface for language model interactions
- Added async chat completion support
- Included embedding generation functionality
- Implemented retry mechanisms for API calls
- Added configuration and message formatting utilities
- Supports OpenAI-style message handling and external settings
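The commit combines two of the concerns listed above: OpenAI-style message formatting and a retry mechanism around async API calls. The sketch below illustrates those two patterns in isolation; `format_messages` and `with_retries` are hypothetical names for illustration, not the actual functions in `llama_index_impl.py`, and the backoff policy is an assumption.

```python
import asyncio

def format_messages(prompt, system_prompt=None, history=None):
    # Assemble OpenAI-style role/content dicts (hypothetical helper,
    # not LightRAG's actual implementation).
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.extend(history or [])
    messages.append({"role": "user", "content": prompt})
    return messages

async def with_retries(make_call, attempts=3, base_delay=0.1):
    # Retry an async API call with exponential backoff (simplified sketch;
    # real bindings typically retry only on specific transient errors).
    for attempt in range(attempts):
        try:
            return await make_call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)

# Demo: a flaky stub stands in for the real LLM client and succeeds
# on the third attempt.
calls = {"n": 0}

async def flaky_chat():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient API failure")
    return "ok"

messages = format_messages("Hello", system_prompt="Be brief")
result = asyncio.run(with_retries(flaky_chat))
```

In a real binding, `make_call` would wrap something like the provider's async chat endpoint, and the message list would be converted to the client library's own message type before sending.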
__init__.py Separated llms from the main llm.py file and fixed some deprecation bugs 2025-01-25 00:11:00 +01:00
azure_openai.py clean comments and unused libs 2025-02-18 21:12:06 +01:00
bedrock.py clean comments and unused libs 2025-02-18 21:12:06 +01:00
hf.py removed torch from requirement lightrag server 2025-02-18 20:05:51 +01:00
jina.py clean comments and unused libs 2025-02-18 21:12:06 +01:00
llama_index_impl.py Add LlamaIndex LLM implementation module 2025-02-20 10:23:01 +01:00
lmdeploy.py clean comments and unused libs 2025-02-18 21:12:06 +01:00
lollms.py clean comments and unused libs 2025-02-18 21:12:06 +01:00
nvidia_openai.py clean comments and unused libs 2025-02-18 21:12:06 +01:00
ollama.py remove tqdm and cleaned readme and ollama 2025-02-18 19:58:03 +01:00
openai.py clean comments and unused libs 2025-02-18 21:12:06 +01:00
siliconcloud.py clean comments and unused libs 2025-02-18 21:12:06 +01:00
zhipu.py clean comments and unused libs 2025-02-18 21:12:06 +01:00