cognee/evals
alekszievr 8ec1e48ff6
Run eval on a set of parameters and save results as png and json (#443)
* QA eval dataset as argument, with hotpot and 2wikimultihop as options. JSON schema validation for datasets.

* Load dataset file by filename, move utilities out into a shared module

* restructure metric selection

* Add comprehensiveness, diversity and empowerment metrics

* add promptfoo as an option

* refactor RAG solution in eval

* LLM-as-a-judge metrics implemented in a uniform way (interface sketch after this log)

* Use requests.get instead of wget (sketch after this log)

* clean up promptfoo config template

* minor fixes

* get promptfoo path instead of hardcoding

* minor fixes

* Add LLM-as-a-judge prompts

* Support 4 different RAG options in eval

* Minor refactor and logger usage

* Run eval on a set of parameters and save results as json and png (sweep sketch after this log)

* script for running all param combinations

* bugfix in simple RAG

* potential fix: single asyncio run

* temp fix: exclude insights

* Remove insights, have a single asyncio run, refactor (asyncio sketch after this log)

---------

Co-authored-by: lxobr <122801072+lxobr@users.noreply.github.com>
2025-01-17 00:18:51 +01:00
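
The "uniform" LLM-as-a-judge metrics in the log suggest one shared shape for comprehensiveness, diversity, and empowerment. A minimal sketch of such an interface, assuming a hypothetical `llm_client.complete` call that returns a numeric string; none of these names are taken from the repo's actual deepeval_metrics.py or promptfoo_metrics.py:

    from dataclasses import dataclass

    @dataclass
    class JudgeMetric:
        # Hypothetical uniform shape for one LLM-as-a-judge metric.
        name: str
        prompt_template: str  # filled with the question, answer, and golden answer

        def score(self, llm_client, question: str, answer: str, golden: str) -> float:
            prompt = self.prompt_template.format(
                question=question, answer=answer, golden=golden
            )
            # `llm_client.complete` is an assumed interface returning e.g. "0.8".
            return float(llm_client.complete(prompt))

    METRICS = [
        JudgeMetric("comprehensiveness", "Rate 0-1 ... {question} {answer} {golden}"),
        JudgeMetric("diversity", "Rate 0-1 ... {question} {answer} {golden}"),
        JudgeMetric("empowerment", "Rate 0-1 ... {question} {answer} {golden}"),
    ]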
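
Swapping a shelled-out wget for requests.get, as the log notes, typically looks like the following. A hedged sketch; the helper name and streaming parameters are illustrative, not the repo's qa_dataset_utils.py code:

    import requests

    def download_dataset(url: str, destination: str) -> None:
        # Stream the response so large QA dataset files are not
        # buffered entirely in memory (replaces calling out to wget).
        response = requests.get(url, stream=True, timeout=60)
        response.raise_for_status()
        with open(destination, "wb") as f:
            for chunk in response.iter_content(chunk_size=8192):
                f.write(chunk)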
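
The headline change, running the eval over a set of parameters and saving results as json and png, can be sketched as below. The sweep driver, output file names, and the shape returned by `evaluate` are assumptions, not the actual run_qa_eval.py:

    import itertools
    import json

    import matplotlib.pyplot as plt

    def run_sweep(evaluate, rag_options, datasets):
        # `evaluate` is assumed to return a {metric_name: score} dict
        # for one (rag_option, dataset) combination.
        results = {
            f"{rag}/{ds}": evaluate(rag, ds)
            for rag, ds in itertools.product(rag_options, datasets)
        }

        with open("qa_eval_results.json", "w") as f:
            json.dump(results, f, indent=2)

        # Plot the first metric across all parameter combinations.
        metric = next(iter(next(iter(results.values()))))
        plt.bar(list(results), [scores[metric] for scores in results.values()])
        plt.ylabel(metric)
        plt.xticks(rotation=45, ha="right")
        plt.tight_layout()
        plt.savefig("qa_eval_results.png")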
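
"Single asyncio run" usually means consolidating the event loop: one asyncio.run() entry point that gathers every combination, instead of creating and tearing down a loop per run. A minimal sketch with placeholder parameter values:

    import asyncio

    async def evaluate(rag_option: str, dataset: str) -> dict:
        # Placeholder for one QA evaluation run.
        return {"rag": rag_option, "dataset": dataset}

    async def main() -> None:
        # All parameter combinations run inside one event loop.
        combos = [(rag, ds) for rag in ("simple_rag", "cognee") for ds in ("hotpot_qa",)]
        print(await asyncio.gather(*(evaluate(r, d) for r, d in combos)))

    if __name__ == "__main__":
        asyncio.run(main())  # the single asyncio.run entry point
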
cloud
test_datasets/initial_test
__init__.py
deepeval_metrics.py
EC2_README.md
eval_on_hotpot.py
eval_swe_bench.py
eval_utils.py
generate_test_set.py
official_hotpot_metrics.py
promptfoo_config_template.yaml
promptfoo_metrics.py
promptfoo_wrapper.py
promptfooprompt.json
qa_context_provider_utils.py
qa_dataset_utils.py
qa_eval_parameters.json
qa_eval_utils.py
qa_metrics_utils.py
run_qa_eval.py
simple_rag_vs_cognee_eval.py