LightRAG/lightrag
Nick French df69d386c5 Fixes #596 - Hardcoded model deployment name in azure_openai_complete
Fixes #596

Update the `azure_openai_complete` function to accept a `model` parameter with a default value of 'gpt-4o-mini'.

* Modify the function signature of `azure_openai_complete` to include a `model` parameter with a default value of 'gpt-4o-mini'.
* Pass the `model` parameter to `azure_openai_complete_if_cache` instead of the hardcoded model name 'conversation-4o-mini' (see the sketch below).
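
For reference, a minimal sketch of what the revised wrapper could look like. It assumes the existing `azure_openai_complete_if_cache` helper in `lightrag/llm.py`; parameter names other than `model` are illustrative rather than quoted from the patch.

```python
# Sketch only, not the verbatim patch: assumes azure_openai_complete_if_cache
# is defined elsewhere in lightrag/llm.py; parameters other than `model`
# are illustrative.
async def azure_openai_complete(
    prompt,
    model: str = "gpt-4o-mini",   # new parameter; default replaces the hardcoded name
    system_prompt=None,
    history_messages=None,
    **kwargs,
) -> str:
    return await azure_openai_complete_if_cache(
        model,                    # previously hardcoded as "conversation-4o-mini"
        prompt,
        system_prompt=system_prompt,
        history_messages=history_messages or [],
        **kwargs,
    )
```

Callers that relied on the old behavior keep working, since the default stays a 4o-mini deployment name, while Azure users with a differently named deployment can now pass `model="my-deployment"` explicitly.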

---

For more details, open the [Copilot Workspace session](https://copilot-workspace.githubnext.com/HKUDS/LightRAG/issues/596?shareId=XXXX-XXXX-XXXX-XXXX).
2025-01-17 12:10:26 -05:00
| File | Last commit | Date |
|------|-------------|------|
| `api` | Merge pull request #592 from danielaskdd/yangdx | 2025-01-17 14:29:31 +08:00 |
| `kg` | Merge pull request #590 from jin38324/main | 2025-01-16 14:20:08 +08:00 |
| `__init__.py` | Update __init__.py | 2025-01-16 14:24:29 +08:00 |
| `base.py` | Add custom function with separate keyword extraction for the user's query and a separate prompt | 2025-01-14 22:10:47 +05:30 |
| `lightrag.py` | Merge pull request #590 from jin38324/main | 2025-01-16 14:20:08 +08:00 |
| `llm.py` | Fixes #596 - Hardcoded model deployment name in azure_openai_complete | 2025-01-17 12:10:26 -05:00 |
| `operate.py` | feat: enhance temporal support for knowledge graph relations (merged via #590 from jin38324/main) | 2025-01-16 14:20:08 +08:00 |
| `prompt.py` | feat: enhance temporal support for knowledge graph relations | 2024-12-29 15:25:57 +08:00 |
| `storage.py` | fix linting errors | 2024-12-31 17:32:04 +08:00 |
| `utils.py` | support pipeline mode | 2025-01-16 12:58:15 +08:00 |