---
license: apache-2.0
task_categories:
- video-text-to-text
- visual-question-answering
language:
- en
tags:
- video
- long-video
- reasoning
- tool-calling
- multimodal
size_categories:
- 100K<n<1M
---

# LongVT-Source

This repository contains the source video and image files for the [LongVT](https://github.com/EvolvingLMMs-Lab/LongVT) project.

## Overview

LongVT is an end-to-end agentic framework that enables "Thinking with Long Videos" via interleaved Multimodal Chain-of-Tool-Thought (iMCoTT). This dataset provides the raw media files referenced by the training annotations in [LongVT-Parquet](https://huggingface.co/datasets/longvideotool/LongVT-Parquet).

## Dataset Structure

The source files are organized by dataset type and stored as zip archives:

### Training Data

| Source | Description | Files |
|--------|-------------|-------|
| `longvideoreason` | Long video reasoning data | 66 zips |
| `videor1` | Video-R1 CoT data | 13 zips |
| `longvideoreflection` | Long video reflection data | 27 zips |
| `selftrace` | Self-distilled iMCoTT traces | 6 zips |
| `tvg` | Temporal video grounding data | 2 zips |
| `geminicot` | Gemini-distilled CoT data | 2 zips |
| `llavacot` | LLaVA CoT data | 1 zip |
| `openvlthinker` | OpenVLThinker data | 1 zip |
| `wemath` | WeMath data | 1 zip |
| `selfqa` | Self-curated QA for RL | 1 zip |
| `rl_val` | RL validation data | 1 zip |
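
The archives sit at the repository root and follow the `<source>_<n>.zip` naming shown in the download example below. A minimal sketch for checking the per-source counts against this table by listing the repository files (the name-parsing regex assumes that naming convention, which this card does not formally document):

```python
import re
from collections import Counter

from huggingface_hub import list_repo_files

# Tally zip archives by source prefix, e.g. "longvideoreason_1.zip" ->
# "longvideoreason". The optional "_<n>" suffix is an assumed convention.
files = list_repo_files("longvideotool/LongVT-Source", repo_type="dataset")
counts = Counter(
    m.group(1)
    for name in files
    if (m := re.match(r"(.+?)(?:_\d+)?\.zip$", name))
)
for source, n in sorted(counts.items()):
    print(f"{source}: {n} zip(s)")
```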

### Evaluation Data

We have transferred the source videos of VideoSIAH-Eval to [longvideotool/VideoSIAH-Eval](https://huggingface.co/datasets/longvideotool/VideoSIAH-Eval).

| Source | Description | Files |
|--------|-------------|-------|
| `videosiaheval` | VideoSIAH-Eval benchmark videos | 12 zips |

## Download

```bash
# Install huggingface_hub
pip install huggingface_hub

# Download all source files
huggingface-cli download longvideotool/LongVT-Source --repo-type dataset --local-dir ./source

# Or download specific files
huggingface-cli download longvideotool/LongVT-Source longvideoreason_1.zip --repo-type dataset --local-dir ./source
```
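
If you prefer the Python API, `huggingface_hub.snapshot_download` can fetch a subset of archives by glob pattern; a minimal sketch, assuming the `<source>_<n>.zip` naming shown above:

```python
from huggingface_hub import snapshot_download

# Fetch only the long-video-reasoning archives; adjust the glob for other
# sources, or drop allow_patterns to mirror the entire repository.
snapshot_download(
    repo_id="longvideotool/LongVT-Source",
    repo_type="dataset",
    local_dir="./source",
    allow_patterns=["longvideoreason_*.zip"],
)
```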

## Usage

After downloading, extract the zip files to obtain the source media:

```bash
cd source
unzip "*.zip"
```

The extracted paths will match those referenced in the [LongVT-Parquet](https://huggingface.co/datasets/longvideotool/LongVT-Parquet) annotations.

## Related Resources

- 📄 **Paper**: [arXiv:2511.20785](https://arxiv.org/abs/2511.20785)
- 🌐 **Project Page**: [LongVT Website](https://evolvinglmms-lab.github.io/LongVT/)
- 💻 **Code**: [GitHub Repository](https://github.com/EvolvingLMMs-Lab/LongVT)
- 📊 **Annotations**: [LongVT-Parquet](https://huggingface.co/datasets/longvideotool/LongVT-Parquet)
- 🤗 **Models**: [LongVT Collection](https://huggingface.co/collections/lmms-lab/longvt)

## Citation

If you find LongVT useful for your research and applications, please cite using this BibTeX:

```bibtex
@misc{yang2025longvtincentivizingthinkinglong,
      title={LongVT: Incentivizing "Thinking with Long Videos" via Native Tool Calling},
      author={Zuhao Yang and Sudong Wang and Kaichen Zhang and Keming Wu and Sicong Leng and Yifan Zhang and Bo Li and Chengwei Qin and Shijian Lu and Xingxuan Li and Lidong Bing},
      year={2025},
      eprint={2511.20785},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.20785},
}
```

## License

This dataset is released under the Apache 2.0 License.