Full Dataset

#3
by JamshidJDMY - opened

May I kindly ask when the complete version will be ready?

Hi there @JamshidJDMY - absolutely, and happy to ping you when it's complete. It should be up in the next few days (Ai2 is currently on holiday break, but we're working to get this finished). For now, I can point you to our data mix for 32B, which is complete: https://huggingface.co/datasets/allenai/dolma3_mix-5.5T-1125. Thanks for your patience!
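
If it helps, here's a minimal sketch for inspecting and partially pulling down that mix with `huggingface_hub` (the `allow_patterns` subfolder below is a hypothetical placeholder, so check the repo's actual layout first):

```python
# Minimal sketch: inspect and partially download the 32B mix with huggingface_hub.
# The folder pattern below is a placeholder; check list_repo_files output for the
# repo's actual layout before downloading.
from huggingface_hub import list_repo_files, snapshot_download

repo_id = "allenai/dolma3_mix-5.5T-1125"

# See which sources / shards the mix contains before committing to a full download.
files = list_repo_files(repo_id, repo_type="dataset")
print(f"{len(files)} files in {repo_id}")

# Grab only a subset to sample locally (the full mix is multi-TB).
local_dir = snapshot_download(
    repo_id,
    repo_type="dataset",
    allow_patterns=["data/*"],  # hypothetical subfolder; replace with a real one
)
print("Downloaded to", local_dir)
```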

Hey @JamshidJDMY , just wanted to give an update. This mix is now complete except for olmocr_science_pdfs. Due to redactions that we had to make on some of these olmOCR PDF texts, we are still working out how best to represent them in this data mix. For this reason, the exact full set of olmOCR PDFs that we trained on will not be available to the public, since the redactions happened after training and we cannot release the originals. I should be able to provide the set of non-redacted PDFs in this mix (probably next week, after Ai2 returns from break). In the meantime, I'd recommend using the 32B mix PDFs as a supplement so that the mix is more complete across all sources. Thank you!

Hi @baileyk , thanks for the update! Just to confirm—would it be okay to use the 32B mix PDFs as a supplement alongside the other 7B sources to make the dataset more complete?

JamshidJDMY changed discussion status to closed
JamshidJDMY changed discussion status to open

Hi, may I kindly ask whether the dataset for the 7B model is complete now? I just saw some updates 2 days ago on both the 7B and 32B repos.

@JamshidJDMY actually, I think it would be even easier to just use the 32B mix directly (all sources), unless you are specifically trying to reproduce the 7B model. For all other use cases (e.g. training from scratch), I would recommend using the 32B mix, as it's actually the same as our 7B mix except for the PDFs!

Hi @Jianxiao0203 , the messages above in this discussion might help guide you. If you specifically want to reproduce our 7B model, then I can ping you in the next day or two when our olmOCR PDFs are complete for this mix -- all other sources for the 7B mix are complete except that source (see comment above for context). For all other use cases, you can simply use our 32B mix. I will be adding some clear notes about this to the README of this repo so that this is clear!

Thank you! I have actually downloaded the 7B dataset, but I found that there are only 1.6B docs, which is roughly about 2T tokens (I thought it should be around 5.7T). I also found that the shard index sequences under most common-crawl subfolders are non-contiguous. Is that expected?

@Jianxiao0203 thanks for the info! It is expected that the shards are non-contiguous for common crawl due to the way we filter / upsample. I'm currently double-checking the document counts though - it should be closer to 3B documents after upsampling, so I'm checking how the upsampled documents were reconstructed. Will keep you posted, but the 32B repo should be good to use.
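
For anyone running a similar sanity check on a local copy, a rough sketch like the one below can report shard-index gaps per common-crawl subfolder and count documents. The local path and shard naming pattern here are assumptions, not the repo's actual layout, so adapt them to however your download is organized:

```python
# Rough sketch: report shard-index gaps and count documents in a local copy.
# Assumes gzipped JSONL shards named like "...-0007.json.gz" under per-source
# subfolders; the path and naming pattern are assumptions, not the repo's layout.
import gzip
import re
from pathlib import Path

data_root = Path("dolma3_mix_7B/common-crawl")  # hypothetical local path
pattern = re.compile(r"-(\d+)\.json\.gz$")

def count_docs(shard_path: Path) -> int:
    """Each non-empty JSONL line is one document."""
    with gzip.open(shard_path, "rt", encoding="utf-8") as fh:
        return sum(1 for line in fh if line.strip())

for subdir in sorted(p for p in data_root.iterdir() if p.is_dir()):
    shards = sorted(subdir.glob("*.json.gz"))
    indices = sorted(
        int(m.group(1)) for f in shards if (m := pattern.search(f.name))
    )
    if not indices:
        continue
    # Gaps in the index sequence are expected after filtering / upsampling.
    missing = set(range(indices[0], indices[-1] + 1)) - set(indices)
    total_docs = sum(count_docs(s) for s in shards)
    print(f"{subdir.name}: {len(indices)} shards, {len(missing)} index gaps, {total_docs} docs")
```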
