cognee/evals/eval_framework
lxobr ca2cbfab91
feat: add direct llm eval adapter (#591)

## Description
- Created `DirectLLMEvalAdapter`, a lightweight alternative to DeepEval for answer evaluation.
- Added evaluation prompt files that define the scoring criteria and output format.
- Made the adapter selectable via `evaluation_engine = "DirectLLM"` in the config; it currently supports only the `correctness` metric (see the config sketch below).
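A minimal config sketch, assuming a simple key/value evaluation config; apart from `evaluation_engine = "DirectLLM"` and the `correctness` metric named in this PR, the field names below are assumptions rather than the actual `eval_config.py` schema:

```python
# Illustrative only -- not the actual eval_config.py fields.
eval_params = {
    "evaluation_engine": "DirectLLM",       # select the direct LLM adapter instead of DeepEval
    "evaluation_metrics": ["correctness"],  # the only metric DirectLLM currently supports
}
```
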
## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.


## Summary by CodeRabbit

- **New Features**
  - Introduced a new evaluation method that compares model responses against a reference answer using structured prompt templates, enabling automated scoring on a 0 to 1 scale along with brief justifications (see the sketch after this list).

- **Enhancements**
  - Updated the configuration to clearly distinguish between the available evaluation options, giving end users a more transparent and reliable assessment process.
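
As a rough illustration of the scoring flow described above, the sketch below renders a question / answer / golden-answer prompt, asks an LLM for a score between 0 and 1 plus a one-sentence justification, and parses the reply. This is a minimal sketch under assumptions: the prompt wording, the function name, and the `llm_complete` callable are illustrative, not the adapter's actual code or cognee's LLM client API.

```python
import json
from typing import Callable

# Illustrative prompt template; the real adapter loads its prompts from files.
EVAL_PROMPT = """You are grading an answer against a golden answer.
Question: {question}
Answer: {answer}
Golden answer: {golden_answer}
Return JSON: {{"score": <float between 0 and 1>, "explanation": "<one sentence>"}}"""


def evaluate_correctness(
    question: str,
    answer: str,
    golden_answer: str,
    llm_complete: Callable[[str], str],  # any completion client wrapped as prompt -> text
) -> dict:
    """Render the prompt, call the LLM, and parse a 0-1 correctness score."""
    prompt = EVAL_PROMPT.format(
        question=question, answer=answer, golden_answer=golden_answer
    )
    raw = llm_complete(prompt)
    result = json.loads(raw)
    # Clamp in case the model returns an out-of-range score.
    result["score"] = max(0.0, min(1.0, float(result["score"])))
    return result
```

In the actual adapter this call would go through cognee's configured LLM client and the prompt files added in this PR rather than an inline template.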

2025-03-01 19:50:20 +01:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `answer_generation` | feat: Cognee evaluation framework development (#498) | 2025-02-11 16:31:54 +01:00 |
| `benchmark_adapters` | feat: retrieve golden contexts [COG-1364] (#579) | 2025-02-27 13:25:47 +01:00 |
| `corpus_builder` | feat: add experimental cognify pipeline [COG-1293] (#541) | 2025-02-25 16:14:27 +01:00 |
| `evaluation` | feat: add direct llm eval adapter (#591) | 2025-03-01 19:50:20 +01:00 |
| `__init__.py` | feat: Cognee evaluation framework development (#498) | 2025-02-11 16:31:54 +01:00 |
| `eval_config.py` | feat: add direct llm eval adapter (#591) | 2025-03-01 19:50:20 +01:00 |
| `metrics_dashboard.py` | feat: Cognee evaluation framework development (#498) | 2025-02-11 16:31:54 +01:00 |
| `run_eval.py` | feat: Cognee evaluation framework development (#498) | 2025-02-11 16:31:54 +01:00 |