cognee/evals
lxobr 4b7c21d7d8
feat: retrieve golden contexts [COG-1364] (#579)

## Description
• Added a load_golden_context parameter to BaseBenchmarkAdapter's abstract
load_corpus method, establishing a common interface for retrieving
supporting evidence.
• Refactored HotpotQAAdapter into a modular design: introduced a
_get_metadata_field_name method to handle dataset-specific fields
(making it extensible for child classes) and implemented golden-context
retrieval (see the sketch after this list).
• Refactored TwoWikiMultihopAdapter to inherit from HotpotQAAdapter,
overriding only the methods it needs and reusing the parent's
functionality.
• Added golden context support to MusiqueQAAdapter, handling its
decomposition-based format.
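
As a rough illustration of the design above, here is a minimal sketch of how
the shared interface and adapter hierarchy might fit together. Anything beyond
the names mentioned in the bullets (e.g. the `_load_items` helper, the exact
dataset keys, and the `load_corpus` return shape) is an assumption for
illustration, not the actual cognee implementation.

```python
from abc import ABC, abstractmethod
from typing import Any, Optional


class BaseBenchmarkAdapter(ABC):
    """Common benchmark-adapter interface (sketch; signatures are assumptions)."""

    @abstractmethod
    def load_corpus(
        self, limit: Optional[int] = None, load_golden_context: bool = False
    ) -> tuple[list[str], list[dict[str, Any]]]:
        """Return (corpus, qa_pairs); attach golden context to QA pairs when requested."""
        ...


class HotpotQAAdapter(BaseBenchmarkAdapter):
    def _get_metadata_field_name(self) -> str:
        # Dataset-specific key holding the supporting evidence; child classes override this.
        return "supporting_facts"

    def _get_golden_context(self, item: dict[str, Any]) -> str:
        # Gather the sentences referenced by the supporting-evidence metadata.
        title_to_sentences = {title: sentences for title, sentences in item["context"]}
        golden = []
        for title, sentence_idx in item[self._get_metadata_field_name()]:
            sentences = title_to_sentences.get(title, [])
            if sentence_idx < len(sentences):
                golden.append(sentences[sentence_idx])
        return " ".join(golden)

    def load_corpus(self, limit=None, load_golden_context=False):
        corpus, qa_pairs = [], []
        for item in self._load_items(limit):  # hypothetical loading helper
            corpus.extend(s for _, sentences in item["context"] for s in sentences)
            qa_pair = {"question": item["question"], "answer": item["answer"]}
            if load_golden_context:
                qa_pair["golden_context"] = self._get_golden_context(item)
            qa_pairs.append(qa_pair)
        return corpus, qa_pairs


class TwoWikiMultihopAdapter(HotpotQAAdapter):
    def _get_metadata_field_name(self) -> str:
        # 2WikiMultihopQA keeps evidence under a different key (name assumed here);
        # everything else is inherited from HotpotQAAdapter.
        return "evidence"
```

Overriding only _get_metadata_field_name keeps the golden-context logic in one
place, which is the extensibility the refactor aims for.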
## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to
the terms of the Topoteretes Developer Certificate of Origin


## Summary by CodeRabbit

- **New Features**
- Introduced an option to include additional context during corpus
loading, enhancing the quality and flexibility of generated QA pairs.
- **Refactor**
- Streamlined and modularized the processing workflow across different
adapters for improved consistency and maintainability.
- Updated metadata extraction to refine the display of contextual
information.
- Shifted focus in the `TwoWikiMultihopAdapter` from corpus loading to
context extraction.
2025-02-27 13:25:47 +01:00
cloud Add -y to setup_ubuntu_instance.sh commands and update EC2_README 2024-11-29 11:30:39 +01:00
eval_framework feat: retrieve golden contexts [COG-1364] (#579) 2025-02-27 13:25:47 +01:00
test_datasets/initial_test Updated evals, added falkordb 2024-05-20 14:41:08 +02:00
__init__.py add test for linter 2024-05-25 22:18:07 +02:00
deepeval_metrics.py Feat/cog 950 improve metric selection (#435) 2025-01-15 10:45:55 +01:00
EC2_README.md Add code formatting to usermod command 2024-11-29 11:30:39 +01:00
eval_on_hotpot.py feat: Add gemini support [COG-1023] (#485) 2025-01-31 18:03:23 +01:00
eval_swe_bench.py Feat/cog 1365 unify retrievers (#572) 2025-02-27 12:13:21 +01:00
eval_utils.py ruff format 2025-01-05 19:09:08 +01:00
generate_test_set.py Fix linter issues 2025-01-05 19:48:35 +01:00
multimetric_qa_eval_run.py Feat: [COG-1074] fix multimetric eval bug (#463) 2025-01-28 13:05:22 +01:00
official_hotpot_metrics.py Incremental eval of cognee pipeline (#445) 2025-01-17 14:16:48 +01:00
promptfoo_config_template.yaml Feat/cog 950 improve metric selection (#435) 2025-01-15 10:45:55 +01:00
promptfoo_metrics.py Feat: Save and load contexts and answers for eval (#462) 2025-01-22 16:17:01 +01:00
promptfoo_wrapper.py Feat/cog 950 improve metric selection (#435) 2025-01-15 10:45:55 +01:00
promptfooprompt.json Feat/cog 950 improve metric selection (#435) 2025-01-15 10:45:55 +01:00
qa_context_provider_utils.py Feat/cog 1365 unify retrievers (#572) 2025-02-27 12:13:21 +01:00
qa_dataset_utils.py Feat/cog 950 improve metric selection (#435) 2025-01-15 10:45:55 +01:00
qa_eval_parameters.json feat: Add gemini support [COG-1023] (#485) 2025-01-31 18:03:23 +01:00
qa_eval_utils.py Feat: [COG-1074] fix multimetric eval bug (#463) 2025-01-28 13:05:22 +01:00
qa_metrics_utils.py Run eval on a set of parameters and save them as png and json (#443) 2025-01-17 00:18:51 +01:00
run_qa_eval.py Feat: Save and load contexts and answers for eval (#462) 2025-01-22 16:17:01 +01:00
simple_rag_vs_cognee_eval.py refactor: Refactor search so graph completion is used by default (#505) 2025-02-07 17:16:34 +01:00