* QA eval dataset as argument, with hotpot and 2wikimultihop as options; JSON schema validation for datasets (sketched below)
* Load dataset file by filename; factor utilities out into their own modules
* Restructure metric selection
* Add comprehensiveness, diversity, and empowerment metrics
* Add promptfoo as an option
* Refactor the RAG solution in eval
* Implement LLM-as-a-judge metrics in a uniform way (sketched below)
* Use requests.get instead of wget (sketched below)
* Clean up the promptfoo config template
* Minor fixes
* Resolve the promptfoo path instead of hardcoding it (sketched below)
* Minor fixes
* Add LLM-as-a-judge prompts
* Support 4 different RAG options in eval (sketched below)
* Minor refactor and logger usage
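The dataset-validation item suggests a pattern like the following. This is a minimal sketch assuming a hypothetical question/answer/context record shape; the actual schema used by qa_dataset_utils.py may differ.

```python
# Minimal sketch of JSON-schema validation for a QA eval dataset.
# The field names below are illustrative assumptions, not the repo's schema.
import json
from jsonschema import validate, ValidationError

QA_DATASET_SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "answer": {"type": "string"},
            "context": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["question", "answer"],
    },
}

def load_qa_dataset(path: str) -> list:
    """Load a QA dataset from a JSON file and validate it against the schema."""
    with open(path, encoding="utf-8") as f:
        dataset = json.load(f)
    try:
        validate(instance=dataset, schema=QA_DATASET_SCHEMA)
    except ValidationError as err:
        raise ValueError(f"{path} is not a valid QA dataset: {err.message}") from err
    return dataset
```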
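One way to read "implemented in a uniform way" is that every judge metric shares the same prompt-plus-scorer shape, so adding a metric means adding a prompt rather than new scoring code. A sketch under that assumption; the `JudgeMetric` class and prompt wording are illustrative, not the repo's API.

```python
# Sketch of one uniform shape for LLM-as-a-judge metrics: each metric is a
# name plus a judge prompt, scored through the same code path.
from dataclasses import dataclass

@dataclass
class JudgeMetric:
    name: str
    prompt_template: str  # expects {question} and {answer} placeholders

    def build_prompt(self, question: str, answer: str) -> str:
        return self.prompt_template.format(question=question, answer=answer)

# Illustrative metric definition; the actual judge prompts are assumptions.
COMPREHENSIVENESS = JudgeMetric(
    name="comprehensiveness",
    prompt_template=(
        "Rate from 1 to 5 how comprehensively the answer covers the question.\n"
        "Question: {question}\nAnswer: {answer}\nScore:"
    ),
)
```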
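Replacing a shelled-out `wget` with `requests.get` typically looks like the streaming download below; the URL and destination are placeholders, not the repo's actual values.

```python
# Minimal sketch of downloading a dataset file with requests instead of wget.
import requests

def download_file(url: str, destination: str, timeout: int = 60) -> None:
    """Stream a remote file to disk rather than shelling out to wget."""
    response = requests.get(url, stream=True, timeout=timeout)
    response.raise_for_status()  # fail loudly on HTTP errors
    with open(destination, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
```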
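Resolving the promptfoo binary from `PATH` instead of hardcoding a path can be done with `shutil.which`; the error message and install hint below are assumptions.

```python
# Sketch of locating the promptfoo CLI on PATH instead of hardcoding it.
import shutil

def get_promptfoo_path() -> str:
    path = shutil.which("promptfoo")
    if path is None:
        raise FileNotFoundError(
            "promptfoo CLI not found on PATH; install it, e.g. `npm install -g promptfoo`"
        )
    return path
```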
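The log does not name the four RAG options, so the sketch below only illustrates the dispatch pattern: a name-to-provider registry with placeholder entries. All names and signatures here are hypothetical.

```python
# Sketch of selecting among multiple RAG context providers by name.
# Provider names and signatures are illustrative placeholders only.
from typing import Callable, Dict, List

ContextProvider = Callable[[str], List[str]]

def no_rag(question: str) -> List[str]:
    return []  # baseline: answer without any retrieved context

RAG_OPTIONS: Dict[str, ContextProvider] = {
    "no_rag": no_rag,
    # the remaining providers would be registered here
}

def get_context_provider(name: str) -> ContextProvider:
    try:
        return RAG_OPTIONS[name]
    except KeyError:
        raise ValueError(f"Unknown RAG option {name!r}; choose from {list(RAG_OPTIONS)}")
```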
Files in this directory:

* cloud
* test_datasets/initial_test
* __init__.py
* deepeval_metrics.py
* EC2_README.md
* eval_on_hotpot.py
* eval_swe_bench.py
* eval_utils.py
* generate_test_set.py
* official_hotpot_metrics.py
* promptfoo_config_template.yaml
* promptfoo_metrics.py
* promptfoo_wrapper.py
* promptfooprompt.json
* qa_context_provider_utils.py
* qa_dataset_utils.py
* qa_metrics_utils.py
* simple_rag_vs_cognee_eval.py