ragflow/rag
utopia2077 2d4a60cae6
Fix: Reduce excessive IO operations by loading LLM factory configurations (#6047)

### What problem does this PR solve?

This PR fixes an issue where the application was repeatedly reading the
llm_factories.json file from disk in multiple places, which could lead
to "Too many open files" errors under high load conditions. The fix
centralizes the file reading operation in the settings.py module and
stores the data in a global variable that can be accessed by other
modules.
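The fix described above can be sketched as a module-level cache: read the JSON once on first access, then serve every subsequent caller from memory instead of reopening the file. This is a minimal illustration only; the names `load_llm_factories` and `LLM_FACTORIES` are hypothetical, not necessarily the identifiers used in settings.py.

```python
import json

# Module-level cache shared by every importer of this module;
# None means the file has not been read yet.
LLM_FACTORIES = None


def load_llm_factories(path="llm_factories.json"):
    """Read the LLM factory config from disk once and reuse it afterwards.

    Repeated calls return the cached dict without touching the filesystem,
    avoiding the per-call open() that caused "Too many open files" errors.
    """
    global LLM_FACTORIES
    if LLM_FACTORIES is None:
        with open(path, encoding="utf-8") as f:
            LLM_FACTORIES = json.load(f)
    return LLM_FACTORIES
```

Other modules then import the loaded data (or call the loader) instead of opening llm_factories.json themselves, so the number of open file descriptors no longer scales with request volume.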

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [x] Performance Improvement
- [ ] Other (please describe):
2025-03-14 09:54:38 +08:00
app Fix: optimize OCR garbage identification to reduce unnecessary filtering (#6027) 2025-03-13 18:48:32 +08:00
llm 0.17.1 release notes (#6021) 2025-03-13 14:43:24 +08:00
nlp Fix: encode detect error. (#6006) 2025-03-13 10:47:58 +08:00
res Format file format from Windows/dos to Unix (#1949) 2024-08-15 09:17:36 +08:00
svr Optimize graphrag cache get entity (#6018) 2025-03-13 14:37:59 +08:00
utils Feat: apply LLM to optimize citations. (#5935) 2025-03-11 19:56:21 +08:00
__init__.py Update comments (#4569) 2025-01-21 20:52:28 +08:00
benchmark.py Refactor embedding batch_size (#3825) 2024-12-03 16:22:39 +08:00
prompts.py Fix: Reduce excessive IO operations by loading LLM factory configurations (#6047) 2025-03-14 09:54:38 +08:00
raptor.py Refactor graphrag to remove redis lock (#5828) 2025-03-10 15:15:06 +08:00
settings.py Feat: Accessing Alibaba Cloud OSS with Amazon S3 SDK (#5438) 2025-02-27 17:02:42 +08:00