<!-- .github/pull_request_template.md -->

## Description

- Compare retrieved context to golden context using deepeval's summarization metric
- Display the fields relevant to each metric on the metrics dashboard

Example output:

## DCO Affirmation

I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->

## Summary by CodeRabbit

- **New Features**
  - Enhanced context handling in answer generation and corpus building to include extended details.
  - Introduced a new context coverage metric for deeper evaluation insights.
  - Upgraded the evaluation dashboard with dynamic presentation of metric details.
  - Added a new parameter to support loading golden context in corpus-loading methods.
- **Bug Fixes**
  - Improved clarity in how answers are structured and appended in the answer generation process.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
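The kind of retrieved-vs-golden context comparison described above can be illustrated with a small sketch. This is a hypothetical, simplified stand-in, not the PR's actual implementation (which uses deepeval's LLM-based summarization metric): the `context_coverage` function name and its token-overlap scoring are assumptions made purely for illustration.

```python
# Hypothetical sketch of a context coverage score: the fraction of
# golden-context tokens that also appear in the retrieved context.
# Illustration only; the PR itself relies on deepeval's summarization
# metric, which evaluates coverage with an LLM rather than token overlap.

def context_coverage(retrieved_context: list[str], golden_context: list[str]) -> float:
    """Return the fraction of golden-context tokens found in the retrieved context."""
    retrieved_tokens = {t.lower() for chunk in retrieved_context for t in chunk.split()}
    golden_tokens = [t.lower() for chunk in golden_context for t in chunk.split()]
    if not golden_tokens:
        return 1.0  # nothing to cover, so trivially fully covered
    hits = sum(1 for t in golden_tokens if t in retrieved_tokens)
    return hits / len(golden_tokens)


if __name__ == "__main__":
    golden = ["Paris is the capital of France"]
    retrieved = ["The capital of France is Paris", "France is in Europe"]
    print(round(context_coverage(retrieved, golden), 2))  # → 1.0
```

A score like this is what the dashboard change would surface per metric, alongside the metric-specific fields mentioned above.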