`cognee/cognee/eval_framework/evaluation` — last commit `8207dc8643` by lxobr: feat: make graph creation prompt configurable (#686)
<!-- .github/pull_request_template.md -->

## Description
<!-- Provide a clear description of the changes in this PR -->
- Added new graph creation prompts
- Exposed the graph creation prompt in `.cognify` via `get_default_tasks`
- Exposed the graph creation prompt in the eval framework

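
The pattern described above — threading an optional prompt override through a default-tasks helper, falling back to the built-in prompt when none is given — can be sketched roughly as follows. This is a hypothetical illustration only; the names `get_default_tasks`, `Task`, and `DEFAULT_GRAPH_PROMPT` stand in for whatever cognee actually uses, and this is not cognee's real API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical built-in prompt file name, for illustration only.
DEFAULT_GRAPH_PROMPT = "generate_graph_prompt.txt"


@dataclass
class Task:
    """Minimal stand-in for a pipeline task with a configurable prompt."""
    name: str
    prompt_file: str


def get_default_tasks(graph_prompt: Optional[str] = None) -> list[Task]:
    """Build the default task list, optionally overriding the graph prompt.

    When graph_prompt is None, the built-in default prompt is used,
    so existing callers keep their previous behavior.
    """
    prompt = graph_prompt or DEFAULT_GRAPH_PROMPT
    return [Task(name="extract_graph", prompt_file=prompt)]


# Callers (e.g. .cognify or the eval framework) can now pass a custom prompt:
custom_tasks = get_default_tasks(graph_prompt="my_domain_prompt.txt")
```

The key design point is backward compatibility: the parameter defaults to `None`, so only callers that explicitly opt in see a different prompt.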
## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to
the terms of the Topoteretes Developer Certificate of Origin.

---------

Co-authored-by: hajdul88 <52442977+hajdul88@users.noreply.github.com>
2025-04-03 11:14:33 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| metrics | Feat: evaluate retrieved context against golden context [cog-1481] (#619) | 2025-03-10 15:27:48 +01:00 |
| __init__.py | Feature/cog 1312 integrating evaluation framework into dreamify (#562) | 2025-03-03 19:55:47 +01:00 |
| base_eval_adapter.py | Feature/cog 1312 integrating evaluation framework into dreamify (#562) | 2025-03-03 19:55:47 +01:00 |
| deep_eval_adapter.py | feat: make graph creation prompt configurable (#686) | 2025-04-03 11:14:33 +02:00 |
| direct_llm_eval_adapter.py | Feature/cog 1312 integrating evaluation framework into dreamify (#562) | 2025-03-03 19:55:47 +01:00 |
| evaluation_executor.py | Feat: evaluate retrieved context against golden context [cog-1481] (#619) | 2025-03-10 15:27:48 +01:00 |
| evaluator_adapters.py | Feature/cog 1312 integrating evaluation framework into dreamify (#562) | 2025-03-03 19:55:47 +01:00 |
| run_evaluation_module.py | fix: human readable logs (#658) | 2025-03-25 11:54:40 +01:00 |