cognee/evals
lxobr ca2cbfab91
feat: add direct llm eval adapter (#591)

## Description
- Created `DirectLLMEvalAdapter`, a lightweight alternative to DeepEval for answer evaluation
- Added evaluation prompt files defining the scoring criteria and output format
- Made the adapter selectable via `evaluation_engine = "DirectLLM"` in the config; it currently supports only the "correctness" metric (see the sketch below)
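
A minimal sketch of the adapter's shape, assuming a generic async LLM callable; the actual method names, item keys (`question`, `golden_answer`, `answer`), and client wiring in the PR may differ:

```python
import json
from typing import Awaitable, Callable


class DirectLLMEvalAdapter:
    """Scores answers against golden answers with one LLM call per item,
    instead of routing through DeepEval."""

    def __init__(self, call_llm: Callable[[str], Awaitable[str]]):
        # `call_llm` is a hypothetical injection point standing in for
        # whatever LLM client cognee provides.
        self.call_llm = call_llm

    async def evaluate_answers(self, items: list[dict]) -> list[dict]:
        results = []
        for item in items:
            # The PR keeps the scoring criteria and output format in
            # dedicated prompt files; inlined here to stay self-contained.
            prompt = (
                "Rate the answer's correctness against the golden answer.\n"
                'Reply with JSON only: {"score": <float 0..1>, '
                '"explanation": "<brief justification>"}\n'
                f"Question: {item['question']}\n"
                f"Golden answer: {item['golden_answer']}\n"
                f"Answer: {item['answer']}\n"
            )
            verdict = json.loads(await self.call_llm(prompt))
            results.append({**item, "metrics": {"correctness": verdict}})
        return results
```

A single direct LLM call per answer is presumably what makes this "lightweight" relative to DeepEval: no extra evaluation framework in the dependency tree, just the prompt files and the existing LLM client.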
## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.


## Summary by CodeRabbit

- **New Features**
  - Introduced a new evaluation method that compares model responses against a reference answer using structured prompt templates, enabling automated scoring (from 0 to 1) along with brief justifications; a sketch of the verdict shape follows this summary.

- **Enhancements**
  - Updated the configuration to clearly distinguish between evaluation options, giving end users a more transparent and reliable assessment process.
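
The 0-to-1 score plus a brief justification suggests a structured verdict along these lines (a sketch only; the model and field names here are assumptions, not the PR's actual schema):

```python
from pydantic import BaseModel, Field


class CorrectnessVerdict(BaseModel):
    score: float = Field(ge=0.0, le=1.0)  # 0 = incorrect, 1 = fully correct
    explanation: str  # the brief justification mentioned above


# Validating a raw LLM reply against the schema:
verdict = CorrectnessVerdict.model_validate_json(
    '{"score": 0.8, "explanation": "Matches the reference except one date."}'
)
print(verdict.score)  # 0.8
```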

2025-03-01 19:50:20 +01:00
| Path | Last commit | Date |
|------|-------------|------|
| cloud | Add -y to setup_ubuntu_instance.sh commands and update EC2_README | 2024-11-29 11:30:39 +01:00 |
| eval_framework | feat: add direct llm eval adapter (#591) | 2025-03-01 19:50:20 +01:00 |
| test_datasets/initial_test | Updated evals, added falkordb | 2024-05-20 14:41:08 +02:00 |
| __init__.py | add test for linter | 2024-05-25 22:18:07 +02:00 |
| deepeval_metrics.py | Feat/cog 950 improve metric selection (#435) | 2025-01-15 10:45:55 +01:00 |
| EC2_README.md | Add code formating to usermod command | 2024-11-29 11:30:39 +01:00 |
| eval_on_hotpot.py | feat: Add gemini support [COG-1023] (#485) | 2025-01-31 18:03:23 +01:00 |
| eval_swe_bench.py | Feat/cog 1365 unify retrievers (#572) | 2025-02-27 12:13:21 +01:00 |
| eval_utils.py | ruff format | 2025-01-05 19:09:08 +01:00 |
| generate_test_set.py | Fix linter issues | 2025-01-05 19:48:35 +01:00 |
| multimetric_qa_eval_run.py | Feat: [COG-1074] fix multimetric eval bug (#463) | 2025-01-28 13:05:22 +01:00 |
| official_hotpot_metrics.py | Incremental eval of cognee pipeline (#445) | 2025-01-17 14:16:48 +01:00 |
| promptfoo_config_template.yaml | Feat/cog 950 improve metric selection (#435) | 2025-01-15 10:45:55 +01:00 |
| promptfoo_metrics.py | Feat: Save and load contexts and answers for eval (#462) | 2025-01-22 16:17:01 +01:00 |
| promptfoo_wrapper.py | Feat/cog 950 improve metric selection (#435) | 2025-01-15 10:45:55 +01:00 |
| promptfooprompt.json | Feat/cog 950 improve metric selection (#435) | 2025-01-15 10:45:55 +01:00 |
| qa_context_provider_utils.py | Transition to new retrievers, update searches (#585) | 2025-02-27 15:25:24 +01:00 |
| qa_dataset_utils.py | Feat/cog 950 improve metric selection (#435) | 2025-01-15 10:45:55 +01:00 |
| qa_eval_parameters.json | feat: Add gemini support [COG-1023] (#485) | 2025-01-31 18:03:23 +01:00 |
| qa_eval_utils.py | Feat: [COG-1074] fix multimetric eval bug (#463) | 2025-01-28 13:05:22 +01:00 |
| qa_metrics_utils.py | Run eval on a set of parameters and save them as png and json (#443) | 2025-01-17 00:18:51 +01:00 |
| run_qa_eval.py | Feat: Save and load contexts and answers for eval (#462) | 2025-01-22 16:17:01 +01:00 |
| simple_rag_vs_cognee_eval.py | refactor: Refactor search so graph completion is used by default (#505) | 2025-02-07 17:16:34 +01:00 |