Update README.md
README.md CHANGED
````diff
@@ -670,12 +670,12 @@ The datasets are built from the Wikipedia dump
 contains the content of one full Wikipedia article with cleaning to strip
 markdown and unwanted sections (references, etc.).
 
-The articles are parsed using the ``mwparserfromhell`` tool.
+The articles are parsed using the ``mwparserfromhell`` tool, and we use ``multiprocess`` for parallelization.
 
-To load this dataset you need to install
+To load this dataset you need to install these first:
 
 ```
-pip install mwparserfromhell
+pip install mwparserfromhell==0.6.4 multiprocess==0.70.13
 ```
 
 Then, you can load any subset of Wikipedia per language and per date this way:
````
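For context, here is a minimal sketch of how the two pinned dependencies fit together: ``mwparserfromhell`` strips wiki markup from raw article text, and ``multiprocess`` (a ``multiprocessing`` fork) fans the parsing out over worker processes. This is only an illustration of the idea, not the dataset script's actual implementation; the sample articles and pool size are made up.

```python
import mwparserfromhell          # wikitext parser named in the README
from multiprocess import Pool    # drop-in replacement for multiprocessing.Pool

def clean_article(wikitext: str) -> str:
    """Parse raw wikitext and return plain text with markup stripped."""
    return mwparserfromhell.parse(wikitext).strip_code()

if __name__ == "__main__":
    # Hypothetical raw articles standing in for pages from a Wikipedia dump.
    raw_articles = [
        "'''Python''' is a [[programming language]].",
        "[[Guido van Rossum]] created Python.",
    ]
    with Pool(processes=4) as pool:
        cleaned = pool.map(clean_article, raw_articles)
    print(cleaned)
```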
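The load call the README refers to is not shown in this hunk. The sketch below assumes the standard ``datasets.load_dataset`` entry point with ``language`` and ``date`` config parameters as described in the changed text; the specific values ("en", "20220301") are illustrative only.

```python
from datasets import load_dataset

# Hypothetical language code and dump date; any combination available in the
# Wikimedia dumps should work, per the README.
wiki = load_dataset("wikipedia", language="en", date="20220301", split="train")
print(wiki[0])
```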