Datasets: DanBenAmi/HERBench
Formats: parquet
Languages: English
Size: 10K - 100K
Tags: video-understanding, multi-evidence-reasoning, long-video, temporal-reasoning, spatial-reasoning, video-qa
License: cc-by-nc-sa-4.0

Commit 9e54dc1 · Initial commit: HERBench dataset core files

Files added:
- .gitattributes     +19 -0
- .gitignore         +50 -0
- CITATION.cff       +50 -0
- LICENSE            +63 -0
- README.md          +443 -0
- herbench_loader.py +333 -0
.gitattributes
ADDED
@@ -0,0 +1,19 @@
# Git LFS configuration for large files

# Videos and archives (to be uploaded via HF CLI)
*.tar.part.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.avi filter=lfs diff=lfs merge=lfs -text
*.mov filter=lfs diff=lfs merge=lfs -text
*.mkv filter=lfs diff=lfs merge=lfs -text

# Image files (to be uploaded via HF CLI)
*.png filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text

# Large annotation files (to be uploaded via HF CLI)
data/herbench_annotations.json filter=lfs diff=lfs merge=lfs -text
data/herbench_annotations_lite.json filter=lfs diff=lfs merge=lfs -text
data/*.json filter=lfs diff=lfs merge=lfs -text
.gitignore
ADDED
@@ -0,0 +1,50 @@
# Videos directory (uploaded separately via HF CLI)
videos/

# Data directory (uploaded separately via HF CLI)
data/

# Assets directory (uploaded separately via HF CLI)
assets/

# Scripts directory (not uploaded to HF)
scripts/

# Build artifacts
__pycache__/
*.pyc
*.pyo
*.egg-info/
*.egg
dist/
build/

# Temporary files
*.tmp
*.temp
*.swp
*.swo
*~
.DS_Store
._.DS_Store
Thumbs.db

# Local video directory (source videos)
/videos_source/

# Editor directories
.vscode/
.idea/
*.sublime-*

# Local testing
test_*.py
scratch/

# Documentation and summary files (not needed in HF repo)
CHANGES_SUMMARY.md
FIXES_APPLIED.md
PROJECT_SUMMARY.md
QUICK_START.txt
USAGE_INSTRUCTIONS.md
upload_videos.sh
CITATION.cff
ADDED
@@ -0,0 +1,50 @@
cff-version: 1.2.0
title: 'HERBench: A Benchmark for Multi-Evidence Integration in Video Question Answering'
message: >-
  If you use this dataset, please cite it using the metadata from this file.
type: dataset
authors:
  - family-names: Ben-Ami
    given-names: Dan
    email: [email protected]
  - family-names: Serussi
    given-names: Gabriele
    email: [email protected]
  - family-names: Cohen
    given-names: Kobi
  - family-names: Baskin
    given-names: Chaim
repository-code: 'https://github.com/DanBenAmi/HERBench'
url: 'https://huggingface.co/datasets/DanBenAmi/HERBench'
abstract: >-
  HERBench is a benchmark for evaluating multi-evidence integration in video question answering.
  It contains 26,806 five-way multiple-choice questions across 337 unique videos with an average
  length of 395 seconds. Each question enforces a High Evidential Requirement (ER), requiring
  models to aggregate at least k ≥ 3 distinct, temporally separated visual cues. The benchmark
  includes 12 compositional task types covering temporal reasoning, spatial reasoning, causal
  reasoning, counting, comparison, and other multi-evidence challenges.
keywords:
  - video understanding
  - visual question answering
  - multi-evidence reasoning
  - temporal reasoning
  - long video
  - benchmark
license: CC-BY-NC-SA-4.0
date-released: '2025-01-01'
version: 1.0.0
preferred-citation:
  type: article
  authors:
    - family-names: Ben-Ami
      given-names: Dan
    - family-names: Serussi
      given-names: Gabriele
    - family-names: Cohen
      given-names: Kobi
    - family-names: Baskin
      given-names: Chaim
  title: 'HERBench: A Benchmark for Multi-Evidence Integration in Video Question Answering'
  year: 2025
  journal: 'arXiv preprint'
  notes: 'arXiv:XXXX.XXXXX (to be updated)'
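Since the CFF file is machine-readable YAML, the citation metadata can also be consumed programmatically. A minimal sketch, assuming PyYAML is installed (it is not a dependency of the dataset itself):

```python
# Minimal sketch: read citation metadata from CITATION.cff (assumes PyYAML is installed).
import yaml

with open("CITATION.cff", encoding="utf-8") as f:
    cff = yaml.safe_load(f)

authors = ", ".join(f"{a['given-names']} {a['family-names']}" for a in cff["authors"])
print(f"{cff['title']} (v{cff['version']})")
print(f"Authors: {authors}")
print(f"License: {cff['license']}")
```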
LICENSE
ADDED
@@ -0,0 +1,63 @@
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License

By exercising the Licensed Rights (defined below), You accept and agree to be bound
by the terms and conditions of this Creative Commons Attribution-NonCommercial-ShareAlike
4.0 International Public License ("Public License"). To the extent this Public License may
be interpreted as a contract, You are granted the Licensed Rights in consideration of Your
acceptance of these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the Licensed Material available
under these terms and conditions.

Full License Text: https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode

===================================================================================================

TERMS OF USE FOR HERBENCH DATASET

Copyright (c) 2025 Dan Ben-Ami, Gabriele Serussi, Kobi Cohen, Chaim Baskin

This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0
International License (CC BY-NC-SA 4.0).

You are free to:
- Share: copy and redistribute the material in any medium or format
- Adapt: remix, transform, and build upon the material

Under the following terms:
- Attribution: You must give appropriate credit, provide a link to the license, and indicate
  if changes were made. You may do so in any reasonable manner, but not in any way that
  suggests the licensor endorses you or your use.

- NonCommercial: You may not use the material for commercial purposes.

- ShareAlike: If you remix, transform, or build upon the material, you must distribute your
  contributions under the same license as the original.

- No additional restrictions: You may not apply legal terms or technological measures that
  legally restrict others from doing anything the license permits.

CITATION:
If you use this dataset in your research, please cite:

@article{herbench2025,
  title={HERBench: A Benchmark for Multi-Evidence Integration in Video Question Answering},
  author={Ben-Ami, Dan and Serussi, Gabriele and Cohen, Kobi and Baskin, Chaim},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}

VIDEO SOURCES:
The videos in this dataset are sourced from:
- WildTrack Dataset (public dataset)
- HD-EPIC Dataset (first-person egocentric videos)
- PersonPath22 Dataset (person tracking dataset)
- Movie Trailers (public promotional content)

All rights to the original videos belong to their respective copyright holders. This dataset
is provided for academic research purposes only.

CONTACT:
- Dan Ben-Ami: [email protected]
- Gabriele Serussi: [email protected]

For more information, visit: https://github.com/DanBenAmi/HERBench
README.md
ADDED
@@ -0,0 +1,443 @@
---
language:
- en
license: cc-by-nc-sa-4.0
task_categories:
- visual-question-answering
- multiple-choice
tags:
- video-understanding
- multi-evidence-reasoning
- long-video
- temporal-reasoning
- spatial-reasoning
- video-qa
size_categories:
- 10K<n<100K
pretty_name: HERBench
---

# HERBench: Multi-Evidence Integration in Video Question Answering

<div align="center">

<img src="assets/HERBench_logo.png" alt="HERBench Logo" width="400"/>

[arXiv](https://arxiv.org/abs/XXXX.XXXXX)
[GitHub](https://github.com/DanBenAmi/HERBench)
[Project Page](https://danbenami.github.io/herbench)
[License: CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
[🤗 Dataset](https://huggingface.co/datasets/DanBenAmi/HERBench)

*A challenging benchmark for evaluating multi-evidence integration capabilities of vision-language models*

</div>

---

## 📋 Dataset Summary

**HERBench** is a challenging benchmark designed to evaluate vision-language models on **multi-evidence integration** in long videos. Unlike existing benchmarks, where questions can often be answered from single frames, HERBench enforces a **High Evidential Requirement (ER)**: each question requires aggregating at least **k ≥ 3 distinct, temporally separated visual cues**.

<div align="center">
<img src="assets/Teaser_plot.jpg" alt="HERBench Teaser" width="800"/>
</div>

### Key Statistics

| Metric | Full Version | Lite Version |
|--------|--------------|--------------|
| 📊 **Total Questions** | 27,936 five-way multiple-choice | ~5,600 questions (20%) |
| 🎬 **Videos** | 335 unique videos | ~67 videos (20%) |
| ⏱️ **Avg. Video Length** | 395 seconds (6.6 minutes) | |
| 📈 **Mean MRFS (Minimum Required Frame-Set)** | ~5.5 | |
| 📁 **Total Size** | ~161 GB | ~35 GB |

### Why HERBench?

Current video QA benchmarks often allow models to answer questions using single frames or limited context, failing to test true multi-evidence reasoning. HERBench addresses this by:

✅ **Enforcing multi-evidence integration** - Each question requires k ≥ 3 temporally separated frames
✅ **Preventing single-frame shortcuts** - Questions cannot be answered from isolated frames
✅ **Testing compositional reasoning** - Combines temporal, spatial, and causal reasoning
✅ **Evaluating long-video understanding** - Average video length of 6.6 minutes

### 🎯 Choose Your Version

HERBench is available in two versions to accommodate different storage and computational constraints:

#### Full Version (~161 GB)
- **27,936 questions** across **335 videos**
- Complete benchmark for comprehensive evaluation
- Recommended for: final paper results, thorough model evaluation, benchmarking

#### Lite Version (~35 GB) 🚀
- **~5,600 questions** across **~67 videos** (20% subset)
- Same task distribution and difficulty as the full version
- Videos sampled to maintain diversity across all 12 tasks
- Recommended for: quick prototyping, limited storage, initial experiments, development

**Both versions maintain the same quality standards and high evidential requirements!**

---

## 📊 Leaderboard

Current state-of-the-art results on HERBench (Full benchmark):

| Model | Bench Version | # Frames | TR&C | R&T | GC&V | ME&N | Overall Avg. |
|-------|---------------|----------|------|-----|------|------|--------------|
| **Random Baseline** | Full | 16 | 20.0 | 20.0 | 20.0 | 20.0 | **20.0** |
| **GPT-4.1** | Full | 16 | 25.4 | 66.0 | 37.1 | 29.0 | **39.4** |
| **Gemini-2.5-Flash** | Full | 16 | 29.7 | 69.9 | 34.9 | 26.8 | **40.3** |
| **Qwen2.5-VL-72B** | Full | 16 | 26.9 | 70.9 | 36.6 | 24.4 | **39.7** |
| **Gemma-3-27B** | Full | 16 | 32.0 | 58.4 | 21.5 | 23.5 | **33.8** |
| **LLaMA-4-Scout-17B** | Full | 16 | 18.8 | 57.3 | 25.5 | 24.2 | **31.4** |
| **InternVL3.5-14B** | Full | 16 | 37.7 | 69.3 | 31.1 | 27.8 | **41.5** |
| **Ovis-2.5-9B** | Full | 16 | 18.9 | 73.5 | 46.8 | 29.2 | **42.1** |
| **InternVL3.5-8B** | Full | 16 | 33.6 | 70.2 | 29.7 | 30.8 | **41.1** |
| **LLaVA-OneVision1.5-8B** | Full | 16 | 26.1 | 67.7 | 33.6 | 24.9 | **38.1** |
| **Qwen3-VL-8B** | Full | 16 | 19.0 | 68.7 | 40.6 | 25.2 | **38.3** |
| **MiniCPM-V4.5-8B** | Full | 16 | 23.8 | 71.1 | 39.7 | 24.9 | **39.9** |
| **Qwen2.5-VL-7B** | Full | 16 | 21.8 | 60.6 | 38.7 | 22.6 | **35.9** |
| **LLaVA-OneVision-7B** | Full | 16 | 27.3 | 59.1 | 30.1 | 26.0 | **35.6** |

*TR&C = Temporal Reasoning & Chronology, R&T = Referring & Tracking, GC&V = Global Consistency & Verification, ME&N = Multi-Entity Aggregation & Numeracy*

**Key Findings:**

- 🔍 **Referring & Tracking is easier**: Models perform best on R&T tasks (avg. 66.8%) compared to other categories
- 🧩 **Multi-evidence is challenging**: Overall accuracy of 38.2% shows substantial room for improvement
- 📏 **Top performers**: Ovis-2.5-9B (42.1%) and InternVL3.5-14B (41.5%) lead the benchmark
- ⚖️ **Task variance**: Performance varies significantly across task families, with GC&V and ME&N being the most challenging

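All leaderboard entries above use 16 frames per video. The official evaluation code lives in the GitHub repository; the snippet below is only a minimal sketch of one common setup, uniform frame sampling with OpenCV, and the exact sampling strategy behind the table is not specified here. The video path is the illustrative one from the annotation example further down.

```python
# Minimal sketch: uniformly sample N frames from a video with OpenCV.
# Not the official HERBench evaluation pipeline; path and frame count are illustrative.
import cv2
import numpy as np

def sample_frames(video_path: str, num_frames: int = 16):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Evenly spaced frame indices across the whole clip
    indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

frames = sample_frames("videos/WildTrack/cam2_segment_4_180s_240s.mp4", num_frames=16)
print(f"Sampled {len(frames)} frames")
```
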
---

## 🎯 Dataset Features

### High Evidential Requirement (ER)

Each question in HERBench is designed to require:

1. **Multiple evidence pieces** (k ≥ 3 frames minimum)
2. **Temporal separation** between evidence frames
3. **Compositional reasoning** across evidence
4. **Integration** of visual information from different moments

### 12 Compositional Task Types

#### Temporal Reasoning & Chronology

| Task Name | Abilities Tested | Example |
|-----------|------------------|---------|
| **[TSO] Temporal Shot Ordering** | Understanding event order, high-level scene transitions, chronological reconstruction using content cues | "The following 4 shots take place in the video: [Shot 1-4 descriptions]. Select the option that correctly reflects the order in which these shots occur in the video." |
| **[MPDR] Multi-Person Duration Reasoning** | Fine-grained time-span contrasts, interval statistics, comparing appearance durations across individuals | "These people were in the video: [Person 1-3 descriptions]. Who stayed in the frame FOV for the longest time?" |
| **[ASII] Action Sequence Integrity & Identification** | Micro-level task sequencing, action ordering, temporal understanding of fine-grained activities | "What is the correct temporal order of the 5 narrated events? (e.g., 1. slide coffee capsule -> 2. close lid -> 3. turn off processor -> 4. place orange -> 5. put down sponge)" |

#### Referring & Tracking

| Task Name | Abilities Tested | Example |
|-----------|------------------|---------|
| **[AGBI] Appearance-Grounded Behavior Interactions** | Social and relational cues, identity maintenance across time, interaction recognition | "In the video there is exactly one individual that fits the following description: [Appearance]. Who is accompanying the person as they walk across the frame?" |
| **[AGAR] Appearance-Grounded Attribute Recognition** | Moment-specific attribute extraction, target tracking, reading contextual details from specific individuals | "In the video there is exactly one individual that fits the following description: [Appearance]. What color is the jacket worn by the individual who remains seated as the main subject walks past?" |
| **[AGLT] Appearance-Grounded Localization Trajectory** | Global path-level motion reasoning, trajectory tracking, spatial exit/entry point identification | "In the video there is exactly one individual that fits the following description: [Appearance]. How does the person exit the frame at the end of their path?" |

#### Global Consistency & Verification

| Task Name | Abilities Tested | Example |
|-----------|------------------|---------|
| **[FAM] False Action Memory** | Action-level absence detection, exhaustive video-wide verification, distinguishing what did not occur | "Which of the following actions did NOT occur in the video? (A) open drawer (B) open up fridge (C) turn on tap..." |
| **[SVA] Scene Verification Arrangement** | Shot-level fidelity checking, chronology verification, distinguishing real from fabricated descriptions | "From the correctly described shots, which is the one that appears first in the video? [Multiple shot descriptions provided]" |
| **[FOM] False Object Memory** | Object-level absence detection, interaction verification, identifying non-interacted objects | "Which object did the camera wearer NOT interact with? (A) Cutting board (B) Sponge (C) Dish soap (D) Garlic presser..." |

#### Multi-Entity Aggregation & Numeracy

| Task Name | Abilities Tested | Example |
|-----------|------------------|---------|
| **[MEGL] Multi-Entities Grounding & Localization** | Set membership verification, identity deduplication, exact-match appearance verification | "Which of the following people appeared in the video (the person description must match exactly): [Person 1-3 descriptions] - A) only 1 and 3" |
| **[AC] Action Counting** | Event-accumulation across dispersed moments, counting repeated actions, temporal aggregation | "How many times does the action-object pair 'close tap' occur? A) 3 B) 5 C) 7..." |
| **[RLPC] Region-Localized People Counting** | Region-conditioned identity aggregation, spatial partitioning, counting with spatial constraints | "How many people entered the frame through the top edge? Select the range that includes the correct count." |

### Video Sources

Videos are sourced from diverse, high-quality datasets:

- **WildTrack** (56 segments): Multi-camera pedestrian tracking scenes
- **HD-EPIC** (176 videos): First-person egocentric daily activities
- **PersonPath22** (24 videos): Person tracking scenarios
- **Movie Trailers** (81 videos): Narrative storytelling content

---

## 📥 Dataset Structure

```
HERBench/
├── data/
│   ├── herbench_annotations.json        # Full: 27,936 questions
│   ├── herbench_annotations_lite.json   # Lite: ~5,600 questions
│   ├── task_metadata.json               # Task descriptions (shared)
│   ├── video_metadata.json              # Video information (shared)
│   └── README_DATA.md                   # Data format documentation
├── videos/
│   ├── videos.tar.part.00               # Lite videos start here
│   ├── videos.tar.part.01               # |
│   ├── videos.tar.part.02               # | Lite: parts 00-03 (~35GB)
│   ├── videos.tar.part.03               # |
│   ├── videos.tar.part.04               # |
│   ├── ...                              # | Full: all parts 00-XX (~161GB)
│   ├── videos.tar.part.XX               # |
│   ├── videos.tar.checksums.txt         # SHA256 checksums
│   └── videos_lite_info.txt             # Info about archive structure
└── herbench_loader.py                   # Python dataloader (supports both)
```

**Archive Structure:** Videos are organized so that Lite videos are in the first archive parts (00-03) and Full-only videos are in the remaining parts. This allows efficient downloading of either version without duplication.

---

### Annotation Format

Each sample contains:

```json
{
  "question_id": "HER_001234",
  "video_id": "cam2_segment_4_180s_240s",
  "video_path": "videos/WildTrack/cam2_segment_4_180s_240s.mp4",
  "question": "What is the main activity happening throughout the video?",
  "choices": [
    "A. People walking across the scene",
    "B. People standing and talking",
    "C. People running in the same direction",
    "D. People sitting on benches",
    "E. People cycling through the area"
  ],
  "answer": "A",
  "answer_index": 0,
  "answer_text": "People walking across the scene",
  "task_type": "activity_recognition",
  "metadata": {
    "source_dataset": "WildTrack",
    "duration": 60.0,
    "resolution": "1920x1080",
    "difficulty": "medium"
  }
}
```

For detailed format documentation, see [data/README_DATA.md](data/README_DATA.md).

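The annotation files are plain JSON arrays of records in the format above, so they can also be inspected without the `datasets` library. A minimal sketch, assuming the repository was downloaded into `HERBench/`:

```python
# Minimal sketch: inspect the raw annotation file without the `datasets` library.
# Assumes the repo was downloaded to ./HERBench and the file is a JSON array of records.
import json
from collections import Counter

with open("HERBench/data/herbench_annotations_lite.json", encoding="utf-8") as f:
    annotations = json.load(f)

print(f"Questions: {len(annotations)}")

# Per-task question counts
task_counts = Counter(a["task_type"] for a in annotations)
for task, count in task_counts.most_common():
    print(f"  {task}: {count}")

# Sanity check: the answer letter should agree with answer_index
mismatches = [
    a["question_id"] for a in annotations
    if a.get("choices") and not a["choices"][a["answer_index"]].startswith(a["answer"])
]
print(f"Answer/index mismatches: {len(mismatches)}")
```
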
---

## 🚀 Quick Start

### 1. Download the Dataset

#### Option A: Using Hugging Face CLI (Recommended)

```bash
# Install Hugging Face CLI
pip install huggingface-hub

# Download FULL version (27,936 questions, ~161 GB)
huggingface-cli download DanBenAmi/HERBench --repo-type dataset --local-dir HERBench

# Download LITE version only (~5,600 questions, ~35 GB)
huggingface-cli download DanBenAmi/HERBench --repo-type dataset \
    --include "data/herbench_annotations_lite.json" \
    --include "data/*metadata.json" \
    --include "videos/videos.tar.part.00" \
    --include "videos/videos.tar.part.01" \
    --include "videos/videos.tar.part.02" \
    --include "videos/videos.tar.part.03" \
    --include "videos/videos_lite_info.txt" \
    --include "videos/videos.tar.checksums.txt" \
    --local-dir HERBench

# Or download only the annotations (no videos)
huggingface-cli download DanBenAmi/HERBench --repo-type dataset --include "data/*" --local-dir HERBench
```

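The same selective download can be scripted from Python with `huggingface_hub.snapshot_download`; a minimal sketch whose pattern list simply mirrors the Lite CLI command above:

```python
# Minimal sketch: scripted Lite download via huggingface_hub (mirrors the CLI command above).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="DanBenAmi/HERBench",
    repo_type="dataset",
    local_dir="HERBench",
    allow_patterns=[
        "data/herbench_annotations_lite.json",
        "data/*metadata.json",
        "videos/videos.tar.part.00",
        "videos/videos.tar.part.01",
        "videos/videos.tar.part.02",
        "videos/videos.tar.part.03",
        "videos/videos_lite_info.txt",
        "videos/videos.tar.checksums.txt",
    ],
)
```
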
#### Option B: Using Python

```python
from datasets import load_dataset

# Load FULL version (default)
dataset_full = load_dataset("DanBenAmi/HERBench", name="full")
print(f"Total questions: {len(dataset_full['test'])}")

# Load LITE version
dataset_lite = load_dataset("DanBenAmi/HERBench", name="lite")
print(f"Total questions: {len(dataset_lite['test'])}")
```

### 2. Extract Videos

#### For Full Version:

```bash
cd HERBench/videos

# Concatenate all split archives
cat videos.tar.part.* > videos_full.tar

# Extract videos
tar -xvf videos_full.tar

# Verify checksums (optional)
sha256sum -c videos.tar.checksums.txt

# Clean up tar file (optional)
rm videos_full.tar
```

#### For Lite Version:

```bash
cd HERBench/videos

# Concatenate only lite archives (parts 00-03)
cat videos.tar.part.{00..03} > videos_lite.tar

# Extract videos
tar -xvf videos_lite.tar

# Clean up tar file (optional)
rm videos_lite.tar
```

> **Note:** The archive is structured so lite videos are in the first parts (00-03). This means that if you download the full version, you automatically have the lite videos too!

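On systems without `cat` and `tar` on the PATH (for example, plain Windows), the same reassembly can be done from Python. A minimal sketch, assuming the downloaded parts sit in `HERBench/videos/`:

```python
# Minimal sketch: reassemble and extract the split archive without shell tools.
# Assumes the downloaded part files are in HERBench/videos/.
import shutil
import tarfile
from pathlib import Path

videos_dir = Path("HERBench/videos")
parts = sorted(videos_dir.glob("videos.tar.part.*"))  # parts 00-03 for Lite, all parts for Full

combined = videos_dir / "videos_combined.tar"
with open(combined, "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)

with tarfile.open(combined) as tar:
    try:
        tar.extractall(path=videos_dir)
    except (tarfile.ReadError, EOFError):
        # Expected for the Lite subset: the combined file is a truncated prefix of the
        # full archive, so extraction simply stops at the cut point.
        pass

combined.unlink()  # optional cleanup
```
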
### 3. Load and Use the Data

```python
from datasets import load_dataset

# Load the dataset (choose version)
dataset = load_dataset("DanBenAmi/HERBench", name="full")  # or name="lite"

# Access a sample
sample = dataset['test'][0]
print(f"Question: {sample['question']}")
print(f"Choices: {sample['choices']}")
print(f"Answer: {sample['answer']}")
print(f"Video: {sample['video_path']}")
print(f"Task: {sample['task_type']}")

# Filter by task type
temporal_questions = [
    q for q in dataset['test']
    if q['task_type'] == 'temporal_reasoning'
]
print(f"Temporal reasoning questions: {len(temporal_questions)}")

# Compare versions
dataset_full = load_dataset("DanBenAmi/HERBench", name="full")
dataset_lite = load_dataset("DanBenAmi/HERBench", name="lite")
print(f"Full: {len(dataset_full['test'])} questions")
print(f"Lite: {len(dataset_lite['test'])} questions")
```

### 4. Run Evaluation

```bash
# Clone the evaluation code
git clone https://github.com/DanBenAmi/HERBench.git
cd HERBench

# Install dependencies
pip install -r requirements.txt

# Run evaluation on your model
python evaluation/run_evaluation.py \
    model=your_model \
    data_path=./HERBench \
    output_path=./results
```

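The repository's evaluation script produces per-task results. If you only need to score your own predictions against the annotations, a minimal sketch is shown below; the predictions file format (a JSON object mapping `question_id` to a predicted letter) is a hypothetical example for illustration, not the format emitted by `run_evaluation.py`.

```python
# Minimal sketch: score predictions against the annotations.
# The predictions format (e.g. {"HER_001234": "A"}) is a hypothetical example,
# not the official output format of run_evaluation.py.
import json
from collections import defaultdict

with open("HERBench/data/herbench_annotations_lite.json", encoding="utf-8") as f:
    annotations = {a["question_id"]: a for a in json.load(f)}

with open("results/predictions.json", encoding="utf-8") as f:
    predictions = json.load(f)

per_task = defaultdict(lambda: [0, 0])  # task_type -> [correct, total]
for qid, pred in predictions.items():
    gold = annotations[qid]
    per_task[gold["task_type"]][1] += 1
    if pred.strip().upper() == gold["answer"]:
        per_task[gold["task_type"]][0] += 1

for task, (correct, total) in sorted(per_task.items()):
    print(f"{task}: {correct / total:.1%} ({correct}/{total})")

overall = sum(c for c, _ in per_task.values()) / sum(t for _, t in per_task.values())
print(f"Overall accuracy: {overall:.1%}")
```
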
---

## 📜 Citation

If you use HERBench in your research, please cite:

```bibtex
@article{herbench2025,
  title={HERBench: A Benchmark for Multi-Evidence Integration in Video Question Answering},
  author={Ben-Ami, Dan and Serussi, Gabriele and Cohen, Kobi and Baskin, Chaim},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```

---

## 📄 License

This dataset is released under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).

### Terms of Use

**Research Use Only.**
HERBench is released strictly for non-commercial research and educational purposes.
The benchmark is constructed using videos originating from existing datasets and platforms, including WildTrack, HD-EPIC, PersonPath22, and publicly available online videos (e.g., YouTube trailers). All rights to the original video content remain with their respective owners and licensors.

HERBench does not claim ownership of any underlying video content. The use of such materials is intended solely for academic evaluation and analysis, in accordance with the terms of the respective source datasets and platforms.

**Removal upon request.**
If any content owner or rights holder believes that their material has been included in HERBench in a manner that violates applicable terms or rights, please contact us. Upon notification, we will promptly investigate the request and remove the relevant content as appropriate.

---

## 🙏 Acknowledgments

We thank the creators of the original video datasets (WildTrack, HD-EPIC, PersonPath22) for making their data publicly available. We also acknowledge the movie studios for releasing promotional trailers.

This work was supported by [Institution/Grant acknowledgments to be added].

---

## 📧 Contact

### Authors

- **Dan Ben-Ami** - [email protected]
- **Gabriele Serussi** - [email protected]
- **Kobi Cohen**
- **Chaim Baskin**

### Support

- **Issues**: [GitHub Issues](https://github.com/DanBenAmi/HERBench/issues)
- **Discussions**: [HF Discussions](https://huggingface.co/datasets/DanBenAmi/HERBench/discussions)
- **Email**: [email protected]

---

## 🔄 Updates

- **v1.0.0** (January 2025): Initial release with 27,936 questions across 335 videos

---

## 🔗 Links

- 📄 **Paper**: https://arxiv.org/abs/XXXX.XXXXX *(coming soon)*
- 💻 **Code**: https://github.com/DanBenAmi/HERBench
- 🌐 **Project Page**: https://danbenami.github.io/herbench *(coming soon)*
- 🤗 **Dataset**: https://huggingface.co/datasets/DanBenAmi/HERBench
- 📊 **Leaderboard**: *(coming soon)*

---

<div align="center">

**Built with ❤️ for advancing video understanding research**

*If you find HERBench useful, please ⭐ star our [GitHub repository](https://github.com/DanBenAmi/HERBench)!*

</div>
herbench_loader.py
ADDED
@@ -0,0 +1,333 @@
"""
HERBench Dataset Loader for Hugging Face

This module provides a Hugging Face datasets loader for HERBench, a benchmark for
multi-evidence integration in video question answering.

Usage:
    # Option 1: Load via Hugging Face datasets library
    from datasets import load_dataset
    dataset = load_dataset("DanBenAmi/HERBench")
    print(dataset['test'][0])

    # Option 2: Load locally
    from datasets import load_dataset
    dataset = load_dataset("path/to/HERBench/herbench_loader.py")

Example:
    >>> from datasets import load_dataset
    >>> dataset = load_dataset("DanBenAmi/HERBench")
    >>> sample = dataset['test'][0]
    >>> print(sample['question'])
    >>> print(sample['choices'])
    >>> print(sample['answer'])

For more information, visit:
    - GitHub: https://github.com/DanBenAmi/HERBench
    - Paper: https://arxiv.org/abs/XXXX.XXXXX (coming soon)
    - Project Page: https://danbenami.github.io/herbench (coming soon)
"""

import json
from typing import Dict, List, Optional

import datasets


_DESCRIPTION = """\
HERBench: A Benchmark for Multi-Evidence Integration in Video Question Answering

HERBench is a challenging benchmark designed to evaluate vision-language models on
multi-evidence integration in long videos. Unlike existing benchmarks where questions
can often be answered from single frames, HERBench enforces a High Evidential Requirement
(ER) where each question requires aggregating at least k ≥ 3 distinct, temporally
separated visual cues.

Key Features:
- 27,936 five-way multiple-choice questions (Full) or ~5,600 questions (Lite)
- 335 unique videos (Full) or ~67 videos (Lite)
- Average video length of 395 seconds (6.6 minutes)
- 12 compositional task types covering temporal, spatial, and causal reasoning
- Mean Minimum Required Frame-Set (MRFS) of 5.49
- Questions designed to prevent single-frame shortcuts
- Comprehensive evaluation of multi-evidence reasoning capabilities

Available in two versions:
- Full: 27,936 questions, 335 videos (~161 GB) - Complete benchmark
- Lite: ~5,600 questions, ~67 videos (~35 GB) - 20% subset for quick prototyping

The benchmark includes videos from diverse sources:
- WildTrack: Multi-camera pedestrian tracking scenes
- HD-EPIC: First-person egocentric videos of daily activities
- PersonPath22: Person tracking in various environments
- Movie Trailers: Narrative story understanding

Each question is carefully designed to require:
1. Multiple pieces of evidence (k ≥ 3 frames)
2. Temporal separation between evidence frames
3. Compositional reasoning across evidence
4. Integration of visual information from different moments
"""

_HOMEPAGE = "https://github.com/DanBenAmi/HERBench"

_LICENSE = "CC-BY-NC-SA-4.0"

_CITATION = """\
@article{herbench2025,
  title={HERBench: A Benchmark for Multi-Evidence Integration in Video Question Answering},
  author={Ben-Ami, Dan and Serussi, Gabriele and Cohen, Kobi and Baskin, Chaim},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
"""

_VERSION = "1.0.0"


class HERBenchConfig(datasets.BuilderConfig):
    """BuilderConfig for HERBench."""

    def __init__(self, **kwargs):
        """BuilderConfig for HERBench.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(HERBenchConfig, self).__init__(**kwargs)


class HERBench(datasets.GeneratorBasedBuilder):
    """HERBench Dataset: Multi-Evidence Integration in Video QA."""

    VERSION = datasets.Version(_VERSION)

    BUILDER_CONFIGS = [
        HERBenchConfig(
            name="full",
            version=VERSION,
            description="Full HERBench dataset with all 27,936 questions and 335 videos (~161GB)",
        ),
        HERBenchConfig(
            name="lite",
            version=VERSION,
            description="HERBench-Lite: 20% subset with ~5,600 questions and ~67 videos (~35GB)",
        ),
    ]

    DEFAULT_CONFIG_NAME = "full"

    def _info(self):
        """Specify the datasets.DatasetInfo object."""
        features = datasets.Features({
            "question_id": datasets.Value("string"),
            "video_id": datasets.Value("string"),
            "video_path": datasets.Value("string"),
            "question": datasets.Value("string"),
            "choices": datasets.Sequence(datasets.Value("string")),
            "answer": datasets.Value("string"),
            "answer_index": datasets.Value("int32"),
            "answer_text": datasets.Value("string"),
            "task_type": datasets.Value("string"),
            "metadata": datasets.Features({
                "source_dataset": datasets.Value("string"),
                "duration": datasets.Value("float32"),
                "resolution": datasets.Value("string"),
                "evidence_count": datasets.Value("int32"),
                "difficulty": datasets.Value("string"),
            }),
        })

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
            version=self.VERSION,
        )

    def _split_generators(self, dl_manager):
        """Return SplitGenerators."""
        # Determine which annotation file to use based on config
        if self.config.name == "lite":
            annotations_file = "data/herbench_annotations_lite.json"
        else:
            annotations_file = "data/herbench_annotations.json"

        # Download and extract data files
        data_files = dl_manager.download({
            "annotations": annotations_file,
            "task_metadata": "data/task_metadata.json",
            "video_metadata": "data/video_metadata.json",
        })

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "annotations_file": data_files["annotations"],
                    "task_metadata_file": data_files["task_metadata"],
                    "video_metadata_file": data_files["video_metadata"],
                },
            ),
        ]

    def _generate_examples(self, annotations_file, task_metadata_file, video_metadata_file):
        """Yield examples as (key, example) tuples."""
        # Load annotations
        with open(annotations_file, encoding="utf-8") as f:
            annotations = json.load(f)

        # Yield each annotation
        for idx, annotation in enumerate(annotations):
            # Ensure metadata exists
            if "metadata" not in annotation:
                annotation["metadata"] = {
                    "source_dataset": "unknown",
                    "duration": 0.0,
                    "resolution": "unknown",
                    "evidence_count": 0,
                    "difficulty": "unknown"
                }
            else:
                # Ensure required metadata fields exist
                metadata = annotation["metadata"]
                if "source_dataset" not in metadata:
                    metadata["source_dataset"] = "unknown"
                if "duration" not in metadata:
                    metadata["duration"] = 0.0
                if "resolution" not in metadata:
                    metadata["resolution"] = "unknown"
                if "evidence_count" not in metadata:
                    metadata["evidence_count"] = 0
                if "difficulty" not in metadata:
                    metadata["difficulty"] = "unknown"

            yield idx, {
                "question_id": annotation.get("question_id", f"HER_{idx:06d}"),
                "video_id": annotation.get("video_id", ""),
                "video_path": annotation.get("video_path", ""),
                "question": annotation.get("question", ""),
                "choices": annotation.get("choices", []),
                "answer": annotation.get("answer", ""),
                "answer_index": int(annotation.get("answer_index", 0)),
                "answer_text": annotation.get("answer_text", ""),
                "task_type": annotation.get("task_type", "unknown"),
                "metadata": annotation["metadata"],
            }


# Example usage and helper functions
def load_herbench(cache_dir: Optional[str] = None) -> datasets.DatasetDict:
    """
    Load HERBench dataset using Hugging Face datasets library.

    Args:
        cache_dir: Optional directory to cache the dataset.

    Returns:
        DatasetDict with 'test' split containing all questions.

    Example:
        >>> dataset = load_herbench()
        >>> print(f"Total questions: {len(dataset['test'])}")
        >>> print(dataset['test'][0])
    """
    return datasets.load_dataset(
        "DanBenAmi/HERBench",
        cache_dir=cache_dir
    )


def get_questions_by_task(dataset, task_type: str) -> List[Dict]:
    """
    Filter questions by task type.

    Args:
        dataset: HERBench dataset or test split.
        task_type: Task type to filter (e.g., 'temporal_reasoning').

    Returns:
        List of questions matching the task type.

    Example:
        >>> dataset = load_herbench()
        >>> temporal_qs = get_questions_by_task(dataset['test'], 'temporal_reasoning')
        >>> print(f"Temporal reasoning questions: {len(temporal_qs)}")
    """
    if isinstance(dataset, datasets.DatasetDict):
        dataset = dataset['test']

    return [q for q in dataset if q['task_type'] == task_type]


def get_questions_by_video(dataset, video_id: str) -> List[Dict]:
    """
    Get all questions for a specific video.

    Args:
        dataset: HERBench dataset or test split.
        video_id: Video identifier.

    Returns:
        List of questions for the specified video.

    Example:
        >>> dataset = load_herbench()
        >>> video_qs = get_questions_by_video(dataset['test'], 'cam2_segment_4_180s_240s')
        >>> print(f"Questions for video: {len(video_qs)}")
    """
    if isinstance(dataset, datasets.DatasetDict):
        dataset = dataset['test']

    return [q for q in dataset if q['video_id'] == video_id]


def print_sample(sample: Dict) -> None:
    """
    Pretty print a sample from the dataset.

    Args:
        sample: A single sample from HERBench.

    Example:
        >>> dataset = load_herbench()
        >>> print_sample(dataset['test'][0])
    """
    duration = sample['metadata'].get('duration', 0.0)
    print(f"Question ID: {sample['question_id']}")
    print(f"Video: {sample['video_id']} ({duration:.1f}s)")
    print(f"Resolution: {sample['metadata'].get('resolution', 'unknown')}")
    print(f"Task: {sample['task_type']}")
    print(f"\nQuestion: {sample['question']}")
    print("\nChoices:")
    for i, choice in enumerate(sample['choices']):
        marker = "→" if i == sample['answer_index'] else " "
        print(f"  {marker} {choice}")
    print(f"\nCorrect Answer: {sample['answer']} (index: {sample['answer_index']})")
    if sample.get('answer_text'):
        print(f"Answer Text: {sample['answer_text']}")
    print(f"Source: {sample['metadata']['source_dataset']}")
    print("-" * 60)


if __name__ == "__main__":
    # Example usage when run as a script
    print("Loading HERBench dataset...")
    dataset = load_herbench()

    print("\nDataset loaded successfully!")
    print(f"Total questions: {len(dataset['test'])}")

    print("\nFirst sample:")
    print_sample(dataset['test'][0])

    # Show task distribution
    from collections import Counter
    task_counts = Counter(q['task_type'] for q in dataset['test'])

    print("\nTask distribution:")
    for task, count in task_counts.most_common():
        print(f"  {task}: {count}")