
CommonCrawl Extractor with great versatility

Unlock the full potential of CommonCrawl data with CmonCrawl, the most versatile extractor that offers unparalleled modularity and ease of use.

Why Choose CmonCrawl?

CmonCrawl stands out from the crowd with its unique features:

  • High Modularity: Easily create custom extractors tailored to your specific needs.
  • Comprehensive Access: Supports all CommonCrawl access methods, including AWS Athena and the CommonCrawl Index API for querying, and S3 and the CommonCrawl API for downloading.
  • Flexible Utility: Accessible via a Command Line Interface (CLI) or as a Software Development Kit (SDK), catering to your preferred workflow.
  • Type Safety: Built with type safety in mind, ensuring that your code is robust and reliable.

Getting Started

Installation

Install from PyPI

$ pip install cmoncrawl

Install from source

$ git clone https://github.com/hynky1999/CmonCrawl
$ cd CmonCrawl
$ pip install -r requirements.txt
$ pip install .

Usage Guide

Step 1: Extractor preparation

Begin by preparing your custom extractor. Obtain sample HTML files from the CommonCrawl dataset using the command:

$ cmon download --match_type=domain --limit=100 html_output example.com html

This will download the first 100 HTML files from example.com and save them in html_output.

Step 2: Extractor creation

Create a new Python file for your extractor, such as my_extractor.py, and place it in the extractors directory. Implement your extraction logic as shown below:

from bs4 import BeautifulSoup
from cmoncrawl.common.types import PipeMetadata
from cmoncrawl.processor.pipeline.extractor import BaseExtractor

class MyExtractor(BaseExtractor):
    def __init__(self):
        # You can force a specific encoding if you know it
        super().__init__(encoding=None)

    def extract_soup(self, soup: BeautifulSoup, metadata: PipeMetadata):
        # Here you can extract the data you want from the soup
        # and return a dict with the data you want to save
        body = soup.select_one("body")
        if body is None:
            return None
        return {
            "body": body.get_text()
        }

    # You can also override the following methods to drop the files you don't want to extract.
    # Return True to keep the file, False to drop it.
    def filter_raw(self, response: str, metadata: PipeMetadata) -> bool:
        return True

    def filter_soup(self, soup: BeautifulSoup, metadata: PipeMetadata) -> bool:
        return True

# Make sure to instantiate your extractor into an `extractor` variable.
# The name must match so that the framework can find it.
extractor = MyExtractor()

Step 3: Config creation

Set up a configuration file, config.json, to specify the behavior of your extractor(s):

{
    "extractors_path": "./extractors",
    "routes": [
        {
            # Define which URLs the extractor matches, using regexes
            "regexes": [".*"],
            "extractors": [{
                "name": "my_extractor",
                # You can use "since" and "to" to choose the extractor
                # based on the date of the crawl.
                # You can omit either of them.
                "since": "2009-01-01",
                "to": "2025-01-01"
            }]
        },
        # More routes here
    ]
}

Step 4: Run the extractor

Test your extractor with the following command:

$ cmon extract config.json extracted_output html_output/*.html html

Step 5: Full crawl and extraction

After testing, start the full crawl and extraction process:

1. Retrieve a list of records to extract.

$ cmon download --match_type=domain --limit=100 dr_output example.com record

This will download the first 100 records from example.com and save them in dr_output. By default, it saves 100,000 records per file; you can change this with the --max_crawls_per_file option.

2. Process the records using your custom extractor.

$ cmon extract --n_proc=4 config.json extracted_output dr_output/*.jsonl record

Note that you can use the --n_proc option to specify the number of processes used for the extraction. Multiprocessing is done at the file level, so with just one input file it will not help.
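That file-level parallelism can be pictured as follows: each record file goes to one worker process, so a single input file never keeps more than one worker busy. This is only a sketch of the scheme, not CmonCrawl's implementation; `process_file` is a stand-in for the real extraction work:

```python
from concurrent.futures import ProcessPoolExecutor


def process_file(path: str) -> int:
    # Stand-in for extracting all records from one .jsonl file;
    # pretends each file yields 100 records
    return 100


def extract_all(files: list[str], n_proc: int = 4) -> int:
    # One file per worker process: with len(files) == 1,
    # n_proc > 1 gains nothing
    with ProcessPoolExecutor(max_workers=n_proc) as pool:
        return sum(pool.map(process_file, files))


if __name__ == "__main__":
    print(extract_all(["dr_output/0.jsonl", "dr_output/1.jsonl"]))
    # 200
```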

Handling CommonCrawl Errors

Encountering a high number of error responses usually indicates excessive request rates. To mitigate this, consider the following strategies in order:

  1. Switch to S3 Access: Instead of using the API Gateway, opt for S3 access, which allows for higher request rates.

  2. Regulate Request Rate: The total requests per second are determined by the formula n_proc * max_requests_per_process. To reduce the request rate:

    • Decrease the number of processes (n_proc).
    • Reduce the maximum requests per process (max_requests_per_process).

    Aim to maintain the total request rate below 40 per second.

  3. Adjust Retry Settings: If errors persist:

    • Increase max_retry to ensure eventual data retrieval.
    • Set a higher sleep_base to prevent API overuse and to respect rate limits.
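The rate formula above, in concrete numbers (the parameter values here are examples, not defaults):

```python
# Total request rate = n_proc * max_requests_per_process
n_proc = 4
max_requests_per_process = 10

total_rps = n_proc * max_requests_per_process
print(total_rps)        # 40 -> right at the recommended ceiling
assert total_rps <= 40  # keep the total at or below ~40 requests/second
```

If you see errors at this rate, lower either factor first before touching the retry settings.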

Advanced Usage

CmonCrawl was designed with flexibility in mind, allowing you to tailor the framework to your needs. For distributed extraction and more advanced scenarios, refer to our documentation and the CZE-NEC project.

Examples and Support

For practical examples and further assistance, visit our examples directory.

Contribute

Join our community of contributors on GitHub. Your contributions are welcome!

License

CmonCrawl is open-source software licensed under the MIT license.
