* QA eval dataset as argument, with hotpot and 2wikimultihop as options. JSON schema validation for datasets.
* Load dataset file by filename, outsource utilities
* Restructure metric selection
* Add comprehensiveness, diversity and empowerment metrics
* Add promptfoo as an option
* Refactor RAG solution in eval
* LLM-as-a-judge metrics implemented in a uniform way
* Use requests.get instead of wget
* Clean up promptfoo config template
* Minor fixes
* Get promptfoo path instead of hardcoding
* Minor fixes
* Add LLM-as-a-judge prompts
* Support 4 different RAG options in eval
* Minor refactor and logger usage
* Run eval on a set of parameters and save results as JSON and PNG
* Script for running all param combinations
* Bugfix in simple RAG
* Potential fix: single asyncio run
* Temp fix: exclude insights
* Remove insights, have single asyncio run, refactor

---------

Co-authored-by: lxobr <122801072+lxobr@users.noreply.github.com>
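One of the items above adds JSON schema validation for datasets. As a minimal illustration (not the actual implementation; the field names are taken from the config file shown below, and the helper name `validate_config` is hypothetical), the shape of such a check might look like:

```python
# Hand-rolled shape check for the eval config; a sketch standing in for
# real JSON-schema validation. Field names come from the config file below;
# the allowed dataset names are the two options mentioned in the PR description.
REQUIRED_LIST_FIELDS = {"dataset", "rag_option", "num_samples", "metric_names"}
ALLOWED_DATASETS = {"hotpotqa", "2wikimultihop"}


def validate_config(cfg: dict) -> list[str]:
    """Return a list of validation error messages; an empty list means valid."""
    errors = []
    for field in sorted(REQUIRED_LIST_FIELDS):
        if field not in cfg:
            errors.append(f"missing field: {field}")
        elif not isinstance(cfg[field], list) or not cfg[field]:
            errors.append(f"{field} must be a non-empty list")
    for name in cfg.get("dataset", []):
        if name not in ALLOWED_DATASETS:
            errors.append(f"unknown dataset: {name}")
    return errors
```

A config that names an unknown dataset or omits a field would come back with one error message per problem, which is convenient for surfacing all issues at once rather than failing on the first.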
18 lines
271 B
JSON
```json
{
  "dataset": [
    "hotpotqa"
  ],
  "rag_option": [
    "no_rag",
    "cognee",
    "simple_rag",
    "brute_force"
  ],
  "num_samples": [
    2
  ],
  "metric_names": [
    "Correctness",
    "Comprehensiveness"
  ]
}
```
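The PR description mentions a script that runs eval on all parameter combinations. A minimal sketch of how a config like the one above could be expanded into per-run parameter dicts (the `param_combinations` helper is hypothetical, and `metric_names` is assumed to be passed as one set rather than crossed per metric):

```python
import itertools

# Parameter grid mirroring the config file above. metric_names is wrapped in
# an extra list so the whole metric set travels as a single grid value.
config = {
    "dataset": ["hotpotqa"],
    "rag_option": ["no_rag", "cognee", "simple_rag", "brute_force"],
    "num_samples": [2],
    "metric_names": [["Correctness", "Comprehensiveness"]],
}


def param_combinations(cfg):
    """Yield one dict per combination of the listed parameter values."""
    keys = list(cfg)
    for values in itertools.product(*(cfg[k] for k in keys)):
        yield dict(zip(keys, values))


combos = list(param_combinations(config))
# With this config: 1 dataset x 4 RAG options x 1 sample count x 1 metric set
# = 4 eval runs, one per RAG option.
```

Each resulting dict can then be handed to the eval entry point, with results saved per combination (as JSON and PNG, per the commit list).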