# cognee/evals/eval_framework/evaluation

Latest commit: lxobr · bee04cad86 · Feat/cog 1331 modal run eval (#576) · 2025-03-03 14:22:32 +01:00

## Description
- Split the metrics dashboard into two modules: a calculator (statistics) and a generator (visualization)
- Added aggregate metrics as a new phase in the evaluation pipeline
- Created a Modal example that runs multiple evaluations in parallel and collects the results into a single combined output (a runnable sketch of the fan-out idea follows this list)
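
The fan-out idea can be sketched with Modal's documented primitives (`modal.App`, `@app.function()`, `Function.map`). Everything cognee-specific below — `run_eval`, its config shape, the combined output file — is an illustrative assumption, not the PR's actual code:

```python
# Hedged sketch: fan out N evaluation runs on Modal and merge the results.
# `run_eval` and its config dict are hypothetical stand-ins for the real
# entrypoint in run_evaluation_module.py.
import json

import modal

app = modal.App("cognee-parallel-eval")


@app.function()
def run_eval(config: dict) -> dict:
    # Stand-in for one full evaluation run (corpus build, QA, metrics).
    return {"config": config, "metrics": {"correctness": 0.0, "f1": 0.0}}


@app.local_entrypoint()
def main():
    configs = [
        {"evaluator": "DeepEval", "num_samples": 10},
        {"evaluator": "DirectLLM", "num_samples": 10},
    ]
    # Function.map runs one container per config in parallel and streams
    # results back in input order.
    results = list(run_eval.map(configs))
    with open("combined_results.json", "w") as f:
        json.dump(results, f, indent=2)
```

Invoked with `modal run <script>.py`, each config gets its own container, so wall-clock time is roughly that of the slowest single run.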
## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to
the terms of the Topoteretes Developer Certificate of Origin


## Summary by CodeRabbit

- **New Features**
  - Enhanced metrics reporting with improved visualizations, including histogram and confidence-interval plots.
  - Introduced an asynchronous evaluation process that supports parallel execution and streamlined result aggregation (see the sketch after this list).
  - Added new configuration options to control metrics calculation and aggregated-output storage.
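
A minimal standard-library sketch of what "asynchronous evaluation with parallel execution and result aggregation" can look like; `evaluate_one` and its return shape are assumptions, not cognee's API:

```python
# Hedged sketch: evaluate many answers concurrently, then aggregate.
import asyncio


async def evaluate_one(question: str) -> dict:
    # Stand-in for an async evaluator/LLM call.
    await asyncio.sleep(0.1)
    return {"question": question, "score": 1.0}


async def evaluate_all(questions: list[str]) -> dict:
    # gather() runs every evaluation concurrently and preserves input
    # order, which keeps downstream aggregation simple.
    results = await asyncio.gather(*(evaluate_one(q) for q in questions))
    return {
        "results": results,
        "mean_score": sum(r["score"] for r in results) / len(results),
    }


aggregated = asyncio.run(evaluate_all(["q1", "q2", "q3"]))
```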

- **Refactor**
  - Restructured dashboard generation and the evaluation workflow into a more modular, maintainable design (see the sketch after this list).
  - Improved error handling and logging for better feedback during evaluation runs.
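
The modular split described above can be pictured as a one-way dependency: a calculator module that only produces numbers and a generator module that only renders them. The module names, the bootstrap method, and the HTML output here are assumptions for illustration:

```python
# Hedged sketch of the calculator/generator split; the real module names
# in the PR may differ (e.g. metrics_calculator.py / dashboard_generator.py).
import random
import statistics


def mean_with_bootstrap_ci(values: list[float], n_resamples: int = 1000, alpha: float = 0.05):
    """Calculator side: statistics only, no plotting or I/O."""
    means = sorted(
        statistics.mean(random.choices(values, k=len(values)))
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return statistics.mean(values), (lo, hi)


def render_metric(name: str, mean: float, ci: tuple[float, float]) -> str:
    """Generator side: rendering only, consumes the calculator's output."""
    return f"<h2>{name}</h2><p>mean={mean:.3f}, 95% CI=[{ci[0]:.3f}, {ci[1]:.3f}]</p>"
```

Keeping the statistics free of rendering concerns is what makes each half independently testable.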

- **Bug Fixes**
  - Updated test cases to validate the new dashboard-generation and metrics-calculation functionality.
| File | Last commit | Date |
| --- | --- | --- |
| `metrics/` | feat: Cognee evaluation framework development (#498) | 2025-02-11 16:31:54 +01:00 |
| `__init__.py` | feat: Cognee evaluation framework development (#498) | 2025-02-11 16:31:54 +01:00 |
| `base_eval_adapter.py` | feat: Cognee evaluation framework development (#498) | 2025-02-11 16:31:54 +01:00 |
| `deep_eval_adapter.py` | feat: Cognee evaluation framework development (#498) | 2025-02-11 16:31:54 +01:00 |
| `direct_llm_eval_adapter.py` | feat: add direct llm eval adapter (#591) | 2025-03-01 19:50:20 +01:00 |
| `evaluation_executor.py` | feat: Cognee evaluation framework development (#498) | 2025-02-11 16:31:54 +01:00 |
| `evaluator_adapters.py` | feat: add direct llm eval adapter (#591) | 2025-03-01 19:50:20 +01:00 |
| `run_evaluation_module.py` | Feat/cog 1331 modal run eval (#576) | 2025-03-03 14:22:32 +01:00 |
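
The file names suggest an adapter pattern around a common evaluator interface. The sketch below is inferred from the listing alone; the class names echo the files, but the real signatures may differ:

```python
# Inferred shape only; see base_eval_adapter.py, direct_llm_eval_adapter.py
# and evaluator_adapters.py for the actual interfaces.
from abc import ABC, abstractmethod


class BaseEvalAdapter(ABC):
    @abstractmethod
    async def evaluate_answers(self, answers: list[dict], metrics: list[str]) -> list[dict]:
        """Score each answer against the requested metrics."""


class DirectLLMEvalAdapter(BaseEvalAdapter):
    async def evaluate_answers(self, answers, metrics):
        # Would prompt an LLM directly for a grade; stubbed here.
        return [{**a, "scores": {m: 0.0 for m in metrics}} for a in answers]


# A registry (cf. evaluator_adapters.py) lets the executor pick an adapter by name.
EVALUATOR_ADAPTERS = {"direct_llm": DirectLLMEvalAdapter}
```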