Discarding duplicate URLs based on rules.

duplicate-url-discarder contains a Scrapy fingerprinter that uses customizable URL processors to canonicalize URLs before fingerprinting.

Quick Start

Installation

pip install duplicate-url-discarder

Alternatively, you can also install the predefined rules from duplicate-url-discarder-rules along with the package via:

pip install duplicate-url-discarder[rules]

If such rules are installed, they are automatically used when the DUD_LOAD_RULE_PATHS setting is left empty (see Configuration below).

Requires Python 3.8+.

Using

If you use Scrapy >= 2.10, you can enable the fingerprinter through the provided Scrapy add-on:

ADDONS = {
    "duplicate_url_discarder.Addon": 600,
}

If you are using other Scrapy add-ons that modify the request fingerprinter, such as the scrapy-zyte-api add-on, configure this add-on with a higher priority value so that the fallback fingerprinter is set to the correct value.
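
For example, a sketch of a settings module combining both add-ons (this assumes scrapy-zyte-api exposes its add-on as scrapy_zyte_api.Addon; adjust the paths and priority values to your setup):

ADDONS = {
    "scrapy_zyte_api.Addon": 500,
    # The higher priority value makes this add-on apply later, so it
    # picks up the fingerprinter set by scrapy-zyte-api as its fallback.
    "duplicate_url_discarder.Addon": 600,
}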

With older Scrapy versions you need to enable the fingerprinter directly:

REQUEST_FINGERPRINTER_CLASS = "duplicate_url_discarder.Fingerprinter"

If you are already using a non-default request fingerprinter, be it one you implemented or one from a Scrapy plugin such as scrapy-zyte-api, set it as the fallback:

DUD_FALLBACK_REQUEST_FINGERPRINTER_CLASS = "scrapy_zyte_api.ScrapyZyteAPIRequestFingerprinter"

duplicate_url_discarder.Fingerprinter makes canonical forms of the request URLs and gets the fingerprints for those using the configured fallback fingerprinter (the default Scrapy one, unless another one is configured in the DUD_FALLBACK_REQUEST_FINGERPRINTER_CLASS setting). Requests with the "dud" meta value set to False are processed directly, without making a canonical form.
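
For example, a minimal sketch of opting a single request out of canonicalization through the "dud" meta key (the spider name and URLs are illustrative):

import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"

    def start_requests(self):
        # Canonicalized before fingerprinting (the default behavior).
        yield scrapy.Request("https://toscrape.com/?PHPSESSIONID=abc123")
        # Fingerprinted as-is: "dud": False skips the canonical form.
        yield scrapy.Request(
            "https://toscrape.com/?page=2",
            meta={"dud": False},
        )

    def parse(self, response):
        pass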

URL Processors

duplicate-url-discarder utilizes URL processors to make canonical versions of URLs. The processors are configured with URL rules. Each URL rule specifies a URL pattern that the processor applies to, and the specific processor arguments to use.

The following URL processors are currently available:

  • queryRemoval: removes the query string parameters (i.e. key=value pairs) whose keys are specified in the arguments. If a given key appears multiple times with different values in the URL, all of them are removed.

  • queryRemovalExcept: like queryRemoval, but the keys specified in the arguments are kept while all others are removed. See the sketch after this list for an illustration of both behaviors.
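
The following is an illustrative sketch of both behaviors using only the Python standard library; query_removal is a hypothetical helper, not the package's internal implementation:

from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def query_removal(url, keys, keep=False):
    # Drop (or, with keep=True, keep only) the query parameters
    # whose keys are listed in ``keys``.
    parts = urlsplit(url)
    pairs = parse_qsl(parts.query, keep_blank_values=True)
    if keep:
        pairs = [(k, v) for k, v in pairs if k in keys]
    else:
        pairs = [(k, v) for k, v in pairs if k not in keys]
    return urlunsplit(parts._replace(query=urlencode(pairs)))

# queryRemoval-like behavior: drop the listed keys (all occurrences).
print(query_removal("https://foo.example/?id=1&utm_source=a&utm_source=b", ["utm_source"]))
# https://foo.example/?id=1

# queryRemovalExcept-like behavior: keep only the listed keys.
print(query_removal("https://foo.example/?id=1&ref=x&lang=en", ["id"], keep=True))
# https://foo.example/?id=1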

URL Rules

A URL rule is a dictionary specifying the url-matcher URL pattern(s), the URL processor name, the URL processor args, and the order value used to sort the rules. Rules are loaded from JSON files that contain arrays of serialized rules:

[
  {
    "args": [
      "foo",
      "bar",
    ],
    "order": 100,
    "processor": "queryRemoval",
    "urlPattern": {
      "include": [
        "foo.example"
      ]
    }
  },
  {
    "args": [
      "PHPSESSIONID"
    ],
    "order": 100,
    "processor": "queryRemoval",
    "urlPattern": {
      "include": []
    }
  }
]

All non-universal rules (ones with a non-empty include pattern) that match a request URL are applied according to their order field. The universal rules are applied only if no non-universal rule matches the URL. For example, with the rules above, https://foo.example/?foo=1&baz=2 is handled only by the first rule (yielding https://foo.example/?baz=2), while on other domains only the universal PHPSESSIONID removal applies.

Configuration

duplicate-url-discarder uses the following Scrapy settings:

DUD_LOAD_RULE_PATHS: a list of file paths (str or pathlib.Path) pointing to JSON files with the URL rules to apply:

DUD_LOAD_RULE_PATHS = [
    "/home/user/project/custom_rules1.json",
]
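
Since the setting accepts pathlib.Path objects as well as strings, paths can also be built programmatically, for example (custom_rules2.json is an illustrative file name):

from pathlib import Path

DUD_LOAD_RULE_PATHS = [
    "/home/user/project/custom_rules1.json",
    Path("/home/user/project") / "custom_rules2.json",
]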

The default value of this setting is empty. However, if the duplicate-url-discarder-rules package is installed and DUD_LOAD_RULE_PATHS is left empty, the rules in that package are used automatically.

