cognee/cognee/eval_framework/evaluation/metrics
alekszievr 7b5bd7897f
Feat: evaluate retrieved context against golden context [cog-1481] (#619)

## Description
- Compare retrieved context to golden context using deepeval's
summarization metric
- Display the fields relevant to each metric on the metrics dashboard

Example output:

![image](https://github.com/user-attachments/assets/9facf716-b2ab-4573-bfdf-7b343d2a57c5)
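The PR scores retrieved context against golden context with deepeval's LLM-judged summarization metric. As a rough, self-contained illustration of what a context-coverage score captures (a hypothetical token-recall stand-in, not the actual deepeval-based implementation in `context_coverage.py`):

```python
def context_coverage(retrieved: str, golden: str) -> float:
    """Fraction of golden-context tokens present in the retrieved context.

    Illustrative sketch only: the real metric in this PR is judged by an
    LLM via deepeval, not computed from token overlap.
    """
    golden_tokens = set(golden.lower().split())
    if not golden_tokens:
        return 0.0
    retrieved_tokens = set(retrieved.lower().split())
    return len(golden_tokens & retrieved_tokens) / len(golden_tokens)
```

For example, if every token of the golden context appears in the retrieved context, the score is 1.0; if none do, it is 0.0.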


## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to
the terms of the Topoteretes Developer Certificate of Origin.


## Summary by CodeRabbit

- **New Features**
- Enhanced context handling in answer generation and corpus building to
include extended details.
- Introduced a new context coverage metric for deeper evaluation
insights.
- Upgraded the evaluation dashboard with dynamic presentation of metric
details.
- Added a new parameter to support loading golden context in corpus
loading methods.
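The golden-context flag in corpus loading might look like the following sketch; the names `load_corpus` and `load_golden_context` are illustrative assumptions here, not the actual cognee API:

```python
def load_corpus(records, load_golden_context=False):
    """Hypothetical sketch: optionally carry the golden_context field
    through corpus loading so it can later be compared against the
    retrieved context during evaluation."""
    corpus = []
    for rec in records:
        item = {"question": rec["question"], "answer": rec["answer"]}
        if load_golden_context:
            # Keep the reference context alongside each QA pair.
            item["golden_context"] = rec.get("golden_context")
        corpus.append(item)
    return corpus
```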

- **Bug Fixes**
- Improved clarity in how answers are structured and appended in the
answer generation process.
2025-03-10 15:27:48 +01:00
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | Feature/cog 1312 integrating evaluation framework into dreamify (#562) | 2025-03-03 19:55:47 +01:00 |
| `context_coverage.py` | Feat: evaluate retrieved context against golden context [cog-1481] (#619) | 2025-03-10 15:27:48 +01:00 |
| `exact_match.py` | Feature/cog 1312 integrating evaluation framework into dreamify (#562) | 2025-03-03 19:55:47 +01:00 |
| `f1.py` | Feature/cog 1312 integrating evaluation framework into dreamify (#562) | 2025-03-03 19:55:47 +01:00 |