
Dataset Card for distilled-yodas-spanish

Dataset Summary

Distilled YODAS Spanish is a high-quality subset of the Spanish portion of the YouTube-Oriented Dataset for Audio and Speech (YODAS). While the full YODAS corpus contains over 37,000 hours of Spanish speech across 43 million files, this dataset provides a distilled version of approximately 8,000 validated hours.

To construct this resource, we applied filtering steps to retain only utterances between 2–30 seconds and with at least three words per transcription. These filtered segments were then validated using two dedicated Spanish verification models (Model A and Model B), alongside the automatic transcriptions originally provided by YODAS.

Consensus criteria were used to ensure transcription quality:

  • ABR: triple matches among Model A, Model B, and the YODAS reference
  • AB, AR, BR: two-source matches between the models and/or the reference
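
For illustration, here is a minimal sketch of how the filtering and consensus logic described above could look. The function and field names (consensus_labels, text_a, text_b, text_yodas) are hypothetical; this is not the actual pipeline code:

# Hypothetical sketch of the filtering and consensus rules (not the actual pipeline).
def consensus_labels(duration, text_a, text_b, text_yodas):
    # Filtering: keep utterances of 2-30 seconds with at least three words.
    if not (2.0 <= duration <= 30.0) or len(text_yodas.split()) < 3:
        return None
    labels = []
    if text_a == text_b:
        labels.append("AB")   # the two verification models agree
    if text_a == text_yodas:
        labels.append("AR")   # Model A agrees with the YODAS reference
    if text_b == text_yodas:
        labels.append("BR")   # Model B agrees with the YODAS reference
    # When all three sources agree, all three labels are present (ABR).
    return labels or None

In the released data, these pairwise labels are stored in the consensus field as a semicolon-separated string (e.g., 'AR;BR;AB').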

From the highest-confidence ABR subset, 30 hours were reserved for validation and another 30 hours for testing. The training splits combine all consensus categories; together with the validation and test sets, the corpus totals 7,997 hours of validated Spanish speech.

This corpus enables large-scale, high-quality training and evaluation for Automatic Speech Recognition (ASR) and related tasks in Spanish.

Example Usage

The Distilled YODAS Spanish Corpus is divided into six loadable splits: train_ABR, train_AB, train_AR, train_BR, test_ABR, and validation_ABR. To load the whole dataset, do:

# Tested with: datasets==2.14.6, huggingface_hub>=0.24, and pyarrow==12.0.1

from datasets import load_dataset
ds_dys = load_dataset("BSC-LT/distilled-yodas-spanish")

To load a specific split (for example, the training split with the highest-quality transcripts), do:

# Tested with: datasets==2.14.6, huggingface_hub>=0.24, and pyarrow==12.0.1

from datasets import load_dataset
ds_dys_train_abr = load_dataset("BSC-LT/distilled-yodas-spanish", split="train_ABR")
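
Streaming mode should also work if you prefer not to download the full corpus up front (a sketch under the same version caveats as above):

from datasets import load_dataset

# Streaming fetches examples lazily instead of extracting the archives locally.
ds_stream = load_dataset("BSC-LT/distilled-yodas-spanish", split="train_ABR", streaming=True)
sample = next(iter(ds_stream))
print(sample["normalized_text"])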

Note on Dataset Scripts & Hugging Face datasets Compatibility

Recent versions of the Hugging Face datasets library introduced major changes in how datasets are loaded and inspected. In particular, dataset loading scripts (*.py) are no longer supported by the Dataset Viewer or by several internal inspection functions used by the Hub.

As a result, attempting to load this dataset directly from the Hub using:

load_dataset("BSC-LT/distilled-yodas-spanish", trust_remote_code=True)

may fail with errors such as:

RuntimeError: Dataset scripts are no longer supported, but found distilled-yodas-spanish.py
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x85...

This is not an issue with this repository. It is a documented limitation and ongoing change in the Hugging Face ecosystem.

Official references:

Dataset scripts are no longer supported: https://discuss.huggingface.co/t/dataset-scripts-are-no-longer-supported/163891

GitHub issue confirming the same error: https://github.com/huggingface/datasets/issues/7693

Example from LibriSpeech ASR (which also had to migrate to Parquet formats): https://huggingface.co/datasets/openslr/librispeech_asr/discussions/17

What this means:

The Hugging Face Hub refuses to execute dataset loader scripts in the Dataset Viewer.

Newer versions of datasets (around 2.18–2.20) include internal checks that break when binary files (TAR, Parquet, WAV) exist in the repository, causing UnicodeDecodeError before the script is executed.

Older versions of datasets (≤2.14.x) continue to support dataset scripts, but do not accept the trust_remote_code argument and are being phased out.

Recommended workflows:

Local loading using the script (recommended for researchers):

from datasets import load_dataset
# Point load_dataset at a local copy of the loader script.
ds = load_dataset("/path/to/distilled-yodas-spanish.py", cache_dir="CACHE")

Hub loading (only with older versions of datasets):

# First install an older datasets release: pip install datasets==2.14.6
from datasets import load_dataset
ds = load_dataset("BSC-LT/distilled-yodas-spanish", cache_dir="CACHE")

Note: In datasets==2.14.6, the parameter trust_remote_code is not supported because dataset scripts were automatically executed.

Future direction:

To fully support the Dataset Viewer and modern versions of datasets, this dataset will eventually be migrated to a data-only format (for example, Parquet files plus dataset.yaml), with the Python loader script hosted separately. This approach matches the transition taken by other large datasets on the Hugging Face Hub.

Supported Tasks

automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
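
As a quick illustration, WER can be computed with the external evaluate library (toy strings below; evaluate and jiwer are independent packages, not part of this dataset):

# pip install evaluate jiwer
import evaluate

wer = evaluate.load("wer")
predictions = ["escritura de este autor israeli"]
references = ["escritura de este autor israelí"]
# WER = (substitutions + deletions + insertions) / number of reference words.
print(wer.compute(predictions=predictions, references=references))  # 0.2 (1 of 5 words differs)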

Languages

The audio is in Spanish.

Dataset Structure

Data Instances

{
  'audio_id': 'zxrz-rg_mNU-00022-00007184-00007544', 
  'audio': {
    'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/b00e44a34f345f60469f5ae5a49881c7a339d900ce9509e39880be27109ca0fd/train-ABR-00045/zxrz-rg_mNU/zxrz-rg_mNU-00022-00007184-00007544.wav', 
    'array': array([ 0.04498291,  0.0585022 ,  0.0921936 , ..., -0.03372192,
       -0.1177063 , -0.17327881]), 
    'sampling_rate': 16000
  }, 
  'corpus_id': 'distilled-yodas-spanish', 
  'split': 'train-ABR', 
  'language': 'Spanish', 
  'duration': 3.5999999046325684, 
  'video_id': 'zxrz-rg_mNU', 
  'consensus': 'AR;BR;AB', 
  'normalized_text': 'escritura de este autor israelí', 
  'relative_path': 'corpus/speech/train/train-ABR/train-ABR-00045/zxrz-rg_mNU/zxrz-rg_mNU-00022-00007184-00007544.wav'
 }
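
Once loaded, each example exposes these fields as plain Python values. A minimal sketch, assuming the dataset was loaded as shown in "Example Usage":

from datasets import load_dataset

ds = load_dataset("BSC-LT/distilled-yodas-spanish", split="test_ABR")
example = ds[0]
audio = example["audio"]  # dict with 'path', 'array', and 'sampling_rate'
print(example["normalized_text"], example["duration"], audio["sampling_rate"])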

Data Fields

  • audio_id (string) - Unique identifier for the audio segment.
  • audio (datasets.Audio) - A dictionary containing the path to the audio file, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio file. In streaming mode, it corresponds to the relative path of the audio inside its archive, as files are not extracted locally.
  • corpus_id (string) - Identifier of this dataset.
  • split (string) - Indicates both the dataset split (train, validation, or test) and the consensus category (ABR, AB, AR, BR).
  • language (string) - Language of the speech segment. In this dataset, all segments are in Spanish.
  • duration (float32) - Duration of the audio file in seconds.
  • video_id (string) - YouTube video identifier. When appended to https://www.youtube.com/watch?v=, it leads to the original video.
  • consensus (string) - Indicates which sources produced matching transcriptions. For instance, AB means that Model A and Model B both generated the same transcription.
  • normalized_text (string) - Final transcription after normalization (e.g., lowercasing, punctuation removal; see the sketch after this list).
  • relative_path (string) - Path of the audio file relative to the directory named "corpus" in this repository.
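
A rough approximation of that kind of text normalization (illustrative only; the exact rules applied to this corpus are not documented here):

import re

def normalize(text: str) -> str:
    # Lowercase and drop punctuation (including Spanish ¡ and ¿), keeping accented letters.
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)
    return " ".join(text.split())

print(normalize("¡Escritura de este autor israelí!"))  # escritura de este autor israelí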

Data Splits

The corpus is divided into the following splits.

Split            Duration     Files      Consensus
Test-ABR         30h42m       33,757     ABR
Validation-ABR   30h00m       33,005     ABR
Train-ABR        1,512h15m    1,703,686  ABR
Train-AB         3,550h38m    4,004,768  AB
Train-AR         1,071h09m    1,219,697  AR
Train-BR         1,803h28m    2,194,531  BR

To load a specific split, please check the above section "Example Usage".

Dataset Creation

Curation Rationale

The motivation for curating this dataset stems from the need for high-quality ASR training data in Spanish. While the original YODAS corpus provides large quantities of speech, its transcriptions vary in quality.

To distill the most reliable segments, we trained two independent ASR systems (verification models A and B) and selected transcriptions based on system agreement. Perfect agreement was used as a strong indicator of correctness.

This approach enables the creation of a high-confidence dataset with minimal human effort, making it especially valuable for training robust ASR models in under-resourced languages.

Source Data

Initial Data Collection and Normalization

The audio data in this corpus was sourced directly from the YODAS dataset developed by ESPnet. No additional recordings were collected.

We did not alter the original segmentation or reprocess the audio files. Instead, we focused on curating the transcriptions by applying an automatic verification strategy.

Specifically, the corpus was processed using Verification Models A and B, trained independently on different datasets. Segments were retained based on agreement, that is, when at least two of the three sources (Model A, Model B, and the YODAS reference) produced identical transcriptions. This process produced a high-confidence subset of the original corpus.

Annotations

Annotation process

To further evaluate the effectiveness of our automatic validation protocol, we conducted human verification on a randomly selected subset of utterances from each consensus type (AB, AR, BR, ABR). One hundred audio segments per consensus type, totaling 400 samples, were randomly selected. Each sample was reviewed independently by three members of our annotation team. The annotators listened to the audio recordings and marked each transcription as correct or incorrect; any deviation in words, missing content, or misrecognized speech was flagged as incorrect.
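
One plausible way to aggregate such judgments per consensus type is a simple majority vote over the three annotators; the card does not specify the actual aggregation rule, and the data layout below is hypothetical:

from collections import defaultdict

# Hypothetical layout: one (consensus_type, three_annotator_votes) tuple per reviewed sample.
judgments = [
    ("ABR", [True, True, True]),
    ("AB", [True, False, True]),  # two of three annotators marked it correct
    # ... 400 reviewed samples in total
]

correct, total = defaultdict(int), defaultdict(int)
for consensus, votes in judgments:
    total[consensus] += 1
    correct[consensus] += sum(votes) >= 2  # majority of the three annotators
for consensus in total:
    print(consensus, f"{correct[consensus] / total[consensus]:.0%}")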

Who are the annotators?

The annotators are part of the annotation team of the Barcelona Supercomputing Center and they were led by Carme Armentano-Oller. The annotation team consists of Paula Arnas, Marc Casadesús, Núria Poch, Carles-Andreu Rodríguez, and Carla Sanjuan.

Personal and Sensitive Information

The dataset consists of public YouTube videos with a CC license. You agree not to attempt to determine the identity of speakers in this dataset.

Considerations for Using the Data

Social Impact of Dataset

The Distilled YODAS Spanish Corpus is a source of spontaneous speech data that will be valuable in the development of speech technologies for Spanish.

Discussion of Biases

The language is limited to that of the YouTube videos used to create the corpus and may not be representative of all domains.

Other Known Limitations

While the Distilled YODAS Spanish dataset provides nearly 8,000 hours of validated speech, a few limitations should be noted:

  • Automatic transcription origin: The original YODAS transcriptions are automatically generated, which means that some residual errors may persist even after consensus validation.
  • Domain bias: Since the source data comes from YouTube, the speech may not be fully representative of other domains such as telephone conversations, formal meetings, or broadcast news.
  • Consensus filtering: The dataset only includes segments where at least two transcriptions matched. This improves reliability but may also discard useful speech segments where models disagreed.
  • Language variety: The dataset focuses on Spanish, but it may not equally represent all dialectal varieties across the Spanish-speaking world.
  • Speaker diarization: No speaker diarization or speaker count verification was performed. Some audio segments may feature multiple speakers.
  • Background conditions: Noise levels, music presence, and overlapping speech were not systematically assessed or annotated.
  • Code-switching: No explicit detection or annotation of code-switching was applied. Some segments may include speech in other languages (e.g., English, Catalan, etc.), without estimation of frequency or distribution.

Additional Information

Dataset Curators

The corpus was curated by Carlos Daniel Hernández Mena in 2025 in the Language Technologies Laboratory of the Barcelona Supercomputing Center under the supervision of Cristina España-Bonet.

Contact

For further information, please email [email protected].

Licensing Information

CC-BY-3.0 (Same as source).

Citation Information

@misc{BSC-disyodases-2025,
  title        = {The Distilled YODAS Spanish Corpus},
  author       = {Hern{\'a}ndez Mena, Carlos Daniel and Armentano-Oller, Carme and Espa{\~n}a-Bonet, Cristina},
  year         = {2025},
  howpublished = {https://huggingface.co/datasets/BSC-LT/distilled-yodas-spanish},
  publisher    = {Barcelona Supercomputing Center}
}

Funding

This work is funded by the Ministerio para la Transformación Digital y de la Función Pública and by the EU through NextGenerationEU, within the framework of the project Desarrollo de Modelos ALIA.

The training of the verification models was possible thanks to the computing time provided by the Barcelona Supercomputing Center through MareNostrum 5.

We acknowledge the EuroHPC Joint Undertaking (project ID: EHPC-DEV-2025D04-131) for awarding us access to MareNostrum 5, hosted by BSC, Spain.

Special thanks to Irene Baucells and Joan Llop, who conducted experiments with two Speech-LLMs that will be reported in the paper presenting this dataset.
