---
annotations_creators:
- human-annotated
language_creators: []
language:
- cmn
- deu
- eng
- fra
- rus
license: unknown
multilinguality: translated
source_datasets:
- mteb/bucc-bitext-mining
task_categories:
- translation
task_ids: []
pretty_name: MTEB Benchmark
configs:
- config_name: default
  data_files:
  - path: test/*.jsonl.gz
    split: test
- config_name: fr-en
  data_files:
  - path: test/fr-en.jsonl.gz
    split: test
- config_name: ru-en
  data_files:
  - path: test/ru-en.jsonl.gz
    split: test
- config_name: de-en
  data_files:
  - path: test/de-en.jsonl.gz
    split: test
- config_name: zh-en
  data_files:
  - path: test/zh-en.jsonl.gz
    split: test
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->

<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
  <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">BUCC.v2</h1>
  <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
  <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>

BUCC bitext mining dataset

|               |                                             |
|---------------|---------------------------------------------|
| Task category | t2t                              |
| Domains       | Written                               |
| Reference     | https://comparable.limsi.fr/bucc2018/bucc2018-task.html |


## How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

task = mteb.get_tasks(["BUCC.v2"])
evaluator = mteb.MTEB(task)

model = mteb.get_model(YOUR_MODEL)  # replace YOUR_MODEL with a model name on the Hugging Face Hub
evaluator.run(model)
```

<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
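
If you want to inspect the raw bitext pairs without running an evaluation, the per-language-pair configs declared in the YAML header (`fr-en`, `ru-en`, `de-en`, `zh-en`) can be loaded with the `datasets` library. A minimal sketch; `REPO_ID` is a placeholder for this dataset's path on the Hugging Face Hub, and the field names are assumed from the statistics section below:

```python
from datasets import load_dataset

# REPO_ID is a placeholder for this dataset's Hub path, e.g. "org/dataset-name".
# Config names ("fr-en", "ru-en", "de-en", "zh-en") come from the YAML header above.
pairs = load_dataset("REPO_ID", "de-en", split="test")

# Each record is expected to hold one aligned sentence pair
# (fields assumed to be "sentence1" and "sentence2", matching the statistics section).
print(pairs[0])
```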

## Citation

If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).

```bibtex

@inproceedings{zweigenbaum-etal-2017-overview,
  abstract = {This paper presents the BUCC 2017 shared task on parallel sentence extraction from comparable corpora. It recalls the design of the datasets, presents their final construction and statistics and the methods used to evaluate system results. 13 runs were submitted to the shared task by 4 teams, covering three of the four proposed language pairs: French-English (7 runs), German-English (3 runs), and Chinese-English (3 runs). The best F-scores as measured against the gold standard were 0.84 (German-English), 0.80 (French-English), and 0.43 (Chinese-English). Because of the design of the dataset, in which not all gold parallel sentence pairs are known, these are only minimum values. We examined manually a small sample of the false negative sentence pairs for the most precise French-English runs and estimated the number of parallel sentence pairs not yet in the provided gold standard. Adding them to the gold standard leads to revised estimates for the French-English F-scores of at most +1.5pt. This suggests that the BUCC 2017 datasets provide a reasonable approximate evaluation of the parallel sentence spotting task.},
  address = {Vancouver, Canada},
  author = {Zweigenbaum, Pierre and Sharoff, Serge and Rapp, Reinhard},
  booktitle = {Proceedings of the 10th Workshop on Building and Using Comparable Corpora},
  doi = {10.18653/v1/W17-2512},
  editor = {Sharoff, Serge and Zweigenbaum, Pierre and Rapp, Reinhard},
  month = aug,
  pages = {60--67},
  publisher = {Association for Computational Linguistics},
  title = {Overview of the Second {BUCC} Shared Task: Spotting Parallel Sentences in Comparable Corpora},
  url = {https://aclanthology.org/W17-2512},
  year = {2017},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}
```

# Dataset Statistics
<details>
  <summary> Dataset Statistics</summary>

The descriptive statistics of the task are given in the JSON below. They can also be obtained using:

```python
import mteb

task = mteb.get_task("BUCC.v2")

desc_stats = task.metadata.descriptive_stats
```
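
Continuing from the snippet above, and assuming `desc_stats` is a plain nested dict that mirrors the JSON shown below, the per-language-pair figures can be read out directly:

```python
# Assuming desc_stats mirrors the JSON structure shown below.
de_en = desc_stats["test"]["hf_subset_descriptive_stats"]["de-en"]
print(de_en["num_samples"])  # 9580 for the German-English subset
```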

```json
{
    "test": {
        "num_samples": 35000,
        "number_of_characters": 6640032,
        "unique_pairs": 34978,
        "min_sentence1_length": 16,
        "average_sentence1_length": 99.10931428571429,
        "max_sentence1_length": 204,
        "unique_sentence1": 34978,
        "min_sentence2_length": 42,
        "average_sentence2_length": 90.60588571428572,
        "max_sentence2_length": 159,
        "unique_sentence2": 25306,
        "hf_subset_descriptive_stats": {
            "de-en": {
                "num_samples": 9580,
                "number_of_characters": 1919197,
                "unique_pairs": 9573,
                "min_sentence1_length": 50,
                "average_sentence1_length": 109.07974947807934,
                "max_sentence1_length": 204,
                "unique_sentence1": 9573,
                "min_sentence2_length": 46,
                "average_sentence2_length": 91.25396659707724,
                "max_sentence2_length": 155,
                "unique_sentence2": 9570
            },
            "fr-en": {
                "num_samples": 9086,
                "number_of_characters": 1677545,
                "unique_pairs": 9081,
                "min_sentence1_length": 43,
                "average_sentence1_length": 99.31785163988553,
                "max_sentence1_length": 174,
                "unique_sentence1": 9081,
                "min_sentence2_length": 42,
                "average_sentence2_length": 85.3117983711204,
                "max_sentence2_length": 159,
                "unique_sentence2": 9076
            },
            "ru-en": {
                "num_samples": 14435,
                "number_of_characters": 2808206,
                "unique_pairs": 14425,
                "min_sentence1_length": 40,
                "average_sentence1_length": 101.6593003117423,
                "max_sentence1_length": 186,
                "unique_sentence1": 14425,
                "min_sentence2_length": 45,
                "average_sentence2_length": 92.88216141323173,
                "max_sentence2_length": 159,
                "unique_sentence2": 14424
            },
            "zh-en": {
                "num_samples": 1899,
                "number_of_characters": 235084,
                "unique_pairs": 1899,
                "min_sentence1_length": 16,
                "average_sentence1_length": 28.429699842022117,
                "max_sentence1_length": 40,
                "unique_sentence1": 1899,
                "min_sentence2_length": 48,
                "average_sentence2_length": 95.3638757240653,
                "max_sentence2_length": 159,
                "unique_sentence2": 1899
            }
        }
    }
}
```

</details>

---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*