cff-version: 1.2.0
title: 'HERBench: A Benchmark for Multi-Evidence Integration in Video Question Answering'
message: >-
  If you use this dataset, please cite it using the metadata from this file.
type: dataset
authors:
- family-names: Ben-Ami
  given-names: Dan
  email: [email protected]
- family-names: Serussi
  given-names: Gabriele
  email: [email protected]
- family-names: Cohen
  given-names: Kobi
- family-names: Baskin
  given-names: Chaim
repository-code: 'https://github.com/DanBenAmi/HERBench'
url: 'https://huggingface.co/datasets/DanBenAmi/HERBench'
abstract: >-
  HERBench is a benchmark for evaluating multi-evidence integration in video question answering.
  It contains 26,806 five-way multiple-choice questions across 337 unique videos with an average
  length of 395 seconds. Each question enforces a High Evidential Requirement (ER), requiring
  models to aggregate at least k ≥ 3 distinct, temporally separated visual cues. The benchmark
  includes 12 compositional task types covering temporal reasoning, spatial reasoning, causal
  reasoning, counting, comparison, and other multi-evidence challenges.
keywords:
- video understanding
- visual question answering
- multi-evidence reasoning
- temporal reasoning
- long video
- benchmark
license: CC-BY-NC-SA-4.0
date-released: '2025-01-01'
version: 1.0.0
preferred-citation:
  type: article
  authors:
  - family-names: Ben-Ami
    given-names: Dan
  - family-names: Serussi
    given-names: Gabriele
  - family-names: Cohen
    given-names: Kobi
  - family-names: Baskin
    given-names: Chaim
  title: 'HERBench: A Benchmark for Multi-Evidence Integration in Video Question Answering'
  year: 2025
  journal: 'arXiv preprint'
  notes: 'arXiv:XXXX.XXXXX (to be updated)'