<!-- .github/pull_request_template.md -->

## Description

- Enable custom tasks in corpus building

## DCO Affirmation

I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->

## Summary by CodeRabbit

- **New Features**
  - Introduced a configurable option to specify the task retrieval strategy during corpus building.
  - Enhanced the workflow with integrated task fetching, featuring a default retrieval mechanism.
  - Updated evaluation configuration to support customizable task selection for more flexible operations.
  - Added a new abstract base class for defining various task retrieval strategies.
  - Introduced a new enumeration to map task getter types to their corresponding classes.
- **Dependencies**
  - Added a new dependency for downloading files from Google Drive.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
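The abstract-base-class-plus-enum pattern described in the summary can be sketched as follows. This is a minimal illustration only: all class, method, and task names here (`BaseTaskGetter`, `DefaultTaskGetter`, `TaskGetters`, `resolve_task_getter`, and the task strings) are hypothetical and are not taken from the PR's actual code.

```python
from abc import ABC, abstractmethod
from enum import Enum


class BaseTaskGetter(ABC):
    # Abstract interface every task retrieval strategy must implement.
    @abstractmethod
    def get_tasks(self) -> list:
        """Return the list of tasks to run during corpus building."""


class DefaultTaskGetter(BaseTaskGetter):
    # Default retrieval mechanism: a fixed, built-in task pipeline
    # (the task names below are placeholders).
    def get_tasks(self) -> list:
        return ["extract_chunks", "summarize", "build_graph"]


class TaskGetters(Enum):
    # Enumeration mapping a config-friendly getter name to the class
    # implementing that strategy.
    DEFAULT = ("Default", DefaultTaskGetter)

    def __new__(cls, getter_name, getter_class):
        obj = object.__new__(cls)
        obj._value_ = getter_name        # lookup key, e.g. TaskGetters("Default")
        obj.getter_class = getter_class  # class to instantiate for this strategy
        return obj


def resolve_task_getter(kind: str) -> BaseTaskGetter:
    # Instantiate the strategy registered under the given enum value;
    # raises ValueError for unknown kinds.
    return TaskGetters(kind).getter_class()
```

With this shape, the evaluation config only needs to carry a string such as `"Default"`, and the enum resolves it to the right strategy class at run time.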