---
license: cc-by-sa-4.0
language:
- fi
task_categories:
- text-classification
- question-answering
- text-generation
dataset_info:
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence:
    - name: text
      dtype: string
    - name: answer_start
      dtype: int32
  splits:
  - name: train
    num_bytes: 123795424
    num_examples: 128186
  - name: validation
    num_bytes: 12424029
    num_examples: 11789
  download_size: 19275230
  dataset_size: 136219453
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---

### Dataset Summary

This is a Finnish SQuAD question answering dataset used in [FIN-bench-v2: A Unified and Robust Benchmark Suite for Evaluating Finnish Large Language Models](https://huggingface.co/papers/2512.13330). It is a DeepL-based machine translation of the English SQuAD2.0 dataset, which combines the 100,000 questions of SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.

Project page: https://huggingface.co/TurkuNLP

Code: https://github.com/LumiOpen/lm-evaluation-harness

### Considerations for Using the Data

Due to the DeepL terms and conditions, this dataset **must not be used for any machine translation work**, namely machine translation system development or evaluation of any kind. More generally, we ask that you do not pair the original English data with these translations except when working on research unrelated to machine translation, so as not to infringe on the terms and conditions.
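
### Loading the Data

The fields listed in the metadata above (`id`, `title`, `context`, `question`, and `answers` with `text` and `answer_start`) follow the standard SQuAD2.0 layout. The sketch below shows one way to load and inspect the data with the 🤗 `datasets` library; the repository ID is a placeholder, and, following the SQuAD2.0 convention, unanswerable questions are assumed to carry an empty `answers.text` list.

```python
from datasets import load_dataset

# Placeholder repository ID -- replace with the actual Hugging Face dataset name.
ds = load_dataset("TurkuNLP/squad_v2-fi")

# Splits defined in the dataset config: train and validation.
example = ds["validation"][0]
print(example["question"])
print(example["context"][:200])

# SQuAD2.0 convention (assumed here): an empty answers list marks an
# unanswerable question, where a system should abstain from answering.
if len(example["answers"]["text"]) == 0:
    print("No answer is supported by the paragraph.")
else:
    print("Answer:", example["answers"]["text"][0])
```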