cc_net
Tools to download and clean Common Crawl as introduced in our paper CCNet.
If you found these resources useful, please consider citing:
@article{wenzek2019ccnet,
title={Ccnet: Extracting high quality monolingual datasets from web crawl data},
author={Wenzek, Guillaume and Lachaux, Marie-Anne and Conneau, Alexis and Chaudhary, Vishrav and Guzman, Francisco and Joulin, Armand and Grave, Edouard},
journal={arXiv preprint arXiv:1911.00359},
year={2019}
}
Installation
We only tried this on Linux, but installation should be possible on macOS too.
- Create or symlink a data folder to where you want to download the corpus.
- Run make install. This will download some resources and install the required packages.
- If you have a C++17 compiler you can also run pip install .[getpy], which provides a more memory-efficient hash set.
- Install the following tools manually if make install failed:
  - lmplz and build_binary from KenLM
  - spm_train and spm_encode from SentencePiece
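For reference, a typical installation could look like the following sketch; the storage path is only an illustration, point the symlink wherever you have enough disk space:
mkdir -p /path/to/large/disk/cc_data    # hypothetical storage location
ln -s /path/to/large/disk/cc_data data  # the pipeline expects a local "data" folder
make install                            # downloads resources and installs required packages
pip install ".[getpy]"                  # optional, requires a C++17 compiler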
Training Language Models
The Makefile is used to train SentencePiece models and language models on Wikipedia data.
- make help shows help
- make lang=de lm trains a SentencePiece model and a LM on German Wikipedia
- make all_lm trains the same models as in the paper
- make lang=de dl_lm downloads the LM trained for the paper
- make dl_all_lm downloads all of them
Pipeline overview
The full mining pipeline is divided into 3 steps:
- hashes downloads one Common Crawl snapshot and computes hashes for each paragraph
- mine removes duplicates, detects the language, runs the LM and splits into language/perplexity buckets
- regroup regroups the files created by mine into chunks of 4 GB

Each step needs the previous one to be finished before it can start.
You can launch the full pipeline using python -m cc_net.
- python -m cc_net --help shows help
- python -m cc_net --dump 2019-13 treats a specific snapshot
- python -m cc_net -l my -l gu restricts the pipeline to specific languages
- python -m cc_net --lm_dir my_lms/ uses custom LMs
- python -m cc_net --lang_threshold 0.3 sets a specific field of mine.Config
- python -m cc_net --config test runs on a tiny subset of a snapshot
- python -m cc_net --config config/my_config.json uses the configuration from the given config file
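These options can be combined. For example, the following invocation (the snapshot, languages and directory are illustrative values only) would process the 2019-09 snapshot, restricted to Burmese and Gujarati, using language models from a custom directory:
python -m cc_net --dump 2019-09 -l my -l gu --lm_dir my_lms/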
Reproducing our work
Given the CPU time required to run the full pipeline on such a large corpus, we share a mapping from URL to the information we computed. You can reconstruct the corpus used in the paper by running:
python -m cc_net --conf reproduce --dump 2019-09
Extract XLM-R data
The XLM-RoBERTa model from the paper Unsupervised Cross-lingual Representation Learning at Scale was trained on data extracted by an internal version of cc_net.
Because the format of that data is slightly different, please use the following commands instead:
python cc_net/tools/dl_cc_100.py --help
python cc_net/tools/dl_cc_100.py --outdir data_cc100 --process 8
If you use this version of the data, please also consider citing:
@article{conneau2019unsupervised,
title={Unsupervised Cross-lingual Representation Learning at Scale},
author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1911.02116},
year={2019}
}
Adapting to your infrastructure
Given the computation cost of running the full pipeline, we distributed the computation
on a Slurm cluster using submitit.
submitit will default to spawning processes on your local machine if no Slurm cluster is found.
You should tweak --task_parallelism to something adapted to your machine.
Defaults are 512 for mining and 20 for reproducing.
To run the tasks in-process, use --execution debug.
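For instance, to debug the pipeline locally you can combine the tiny test configuration with in-process execution, or lower the task parallelism on a smaller machine (the value below is only an illustration; flags can be combined with a named config in the same way --dump is combined with --conf reproduce above):
python -m cc_net --config test --execution debug
python -m cc_net --task_parallelism 16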
Output format
Generated files are compressed JSON files. There is one JSON object per line.
List of fields:
- url: webpage URL (part of CC)
- date_download: date of download (part of CC)
- digest: sha1 digest of the webpage (part of CC)
- length: number of chars
- nlines: number of lines
- source_domain: web domain of the webpage
- title: page title (part of CC)
- raw_content: webpage content after deduplication
- original_nlines: number of lines before deduplication
- original_length: number of chars before deduplication
- language: language detected by fastText LID
- language_score: confidence score of the language identification
- perplexity: perplexity of a LM trained on Wikipedia
Sample JSON object:
{
"url": "http://www.pikespeakhospice.org/members/1420",
"date_download": "2019-02-15T18:40:25Z",
"digest": "sha1:VQW3KXUOALO543IJGTK2JLVEAN2XXKHI",
"length": 752,
"nlines": 5,
"source_domain": "www.pikespeakhospice.org",
"title": "LeeRoy Aragon",
"raw_content": "Date Honored: March 2017\nHe was a man of integrity, a hard worker, and a dedicated family man. He loved spending time with family camping, fishing, hunting, boating and just hanging out.\nHis Catholic faith was extremely important to him as he gave of his time and talents to the community. He had many friends through church and the Knights of Columbus. He was a meticulous handyman, and enjoyed building and fixing things and restoring antique furniture to perfection. He was a fan and supported his Colorado Rockies and Denver Broncos. Throughout the years he had devoted four-legged friends (his dogs and a horse named Sunny Boy).\nWe have many cherished memories of him that we will treasure until we are with him again.\n~ Family of LeeRoy F. Aragon",
"original_nlines": 7,
"original_length": 754,
"language": "en",
"language_score": 0.99,
"perplexity": 255.11,
}
You can peek at those files using the UNIX tools zcat and jq, e.g.:
zcat data/mined/2019-09/en_head_0000.json.gz | head -1 | jq .
jq can do some complicated filtering.
jsonql.py provides a Python API with multiprocessing support for more complicated operations like LM scoring of documents.
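If you prefer Python over jq, the standard library is enough to stream these files. Below is a minimal sketch (it does not use jsonql.py) that reads the shard from the zcat example above and prints documents with a low Wikipedia perplexity; the path and the threshold are placeholders, not recommendations.

import gzip
import json

# Shard path taken from the zcat example above; replace with one of your own files.
path = "data/mined/2019-09/en_head_0000.json.gz"

with gzip.open(path, "rt", encoding="utf-8") as lines:
    for line in lines:
        doc = json.loads(line)  # one JSON object per line, with the fields listed above
        if doc["perplexity"] < 300.0:  # placeholder threshold
            print(doc["url"], doc["language"], doc["perplexity"])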
License
By contributing to cc_net
, you agree that your contributions will be licensed
under the LICENSE file in the root directory of this source tree.