
A component that tries to avoid downloading duplicate content

Project description


MaybeDont is a library that helps avoid downloading pages with duplicate content during crawling. It learns which URL components are important and which are not during crawling, and tries to predict whether a page will be a duplicate based on its URL.

The idea is that if you have a crawler that simply follows all links, it might download a lot of duplicate pages: for example, a forum might have pages like /view.php?topicId=10 and /view.php?topicId=10&start=0 - the only difference is the added start=0, and the content of these pages is likely to be the same. If we knew that adding start=0 does not change the content, we would avoid downloading /view.php?topicId=10&start=0 once we had already fetched /view.php?topicId=10, and thus save time and bandwidth.

Duplicate detector

maybedont.DupePredictor collects statistics about page URLs and contents, and is able to predict whether a new URL will bring any new content.

First, initialize a DupePredictor:

from maybedont import DupePredictor
dp = DupePredictor(
    texts_sample=[page_1, page_2, page_3],
    jaccard_threshold=0.9)  # default value

texts_sample is a list of page contents. It can be omitted, but it is recommended to provide it: it is used to learn which parts of a page are common to many of the site’s pages, and those parts are excluded from duplicate comparison. This helps with pages where the content is small relative to the site chrome (footer, header, etc.): without removing the chrome, all such pages would be considered duplicates, as only a tiny fraction of the content changes.

Next, we can update the DupePredictor model with downloaded pages:

dp.update_model(url_4, text_4)
dp.update_model(url_5, text_5)

After a while, DupePredictor will learn which arguments in URLs are important, and which can be safely ignored. DupePredictor.get_dupe_prob returns the probability that a URL is a duplicate of some content that has already been seen:

dp.get_dupe_prob(url_6)

Runtime overhead should not be too large: on a crawl with < 100k pages, the expected time to update the model is 1-5 ms, and below 1 ms to get the probability. All visited URLs and hashes of content are stored in memory, along with some indexing structures.
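
Putting the pieces together, a crawl loop using DupePredictor might look like the following sketch (illustrative only: the pages, URLs and the 0.98 cut-off are assumptions, the cut-off borrowed from the Scrapy middleware default described below):

from maybedont import DupePredictor

# Hypothetical site: a tiny in-memory stand-in for real downloads.
PAGES = {
    'http://example.com/view.php?topicId=10': '<html>... topic 10 ...</html>',
    'http://example.com/view.php?topicId=10&start=0': '<html>... topic 10 ...</html>',
    'http://example.com/view.php?topicId=11': '<html>... topic 11 ...</html>',
}

def fetch(url):
    # Stand-in for a real download.
    return PAGES[url]

# Learn common page chrome from a small sample, then crawl.
dp = DupePredictor(texts_sample=list(PAGES.values()), jaccard_threshold=0.9)

DUPE_THRESHOLD = 0.98  # assumed cut-off; same as the middleware default below

for url in PAGES:
    if dp.get_dupe_prob(url) > DUPE_THRESHOLD:
        continue  # very likely duplicate content; skip the download
    dp.update_model(url, fetch(url))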

Install

pip install MaybeDont

Spider middleware

If you have a Scrapy spider, or are looking for inspiration for a spider middleware, check out maybedont.scrapy_middleware.AvoidDupContentMiddleware. First, it collects a queue of documents in order to learn which page elements are common on the site and exclude them from content comparison. After that it builds its DupePredictor, updates it with crawled pages (only textual pages are taken into account), and starts dropping requests for duplicate content once it gets confident enough. Not all requests for duplicates are dropped: with a small probability (currently 5%) requests are made anyway. This makes duplicate detection more robust against changes in the site’s URL or content structure as the crawl progresses.

To enable the middleware, the following settings are required:

AVOID_DUP_CONTENT_ENABLED = True
DOWNLOADER_MIDDLEWARES['maybedont.scrapy_middleware.AvoidDupContentMiddleware'] = 200

The middleware is only applied to requests with avoid_dup_content in request.meta.
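
For example, a spider might mark its requests like this (a minimal sketch: the spider, URLs and selectors are hypothetical; only the avoid_dup_content meta key comes from the middleware):

import scrapy

class ForumSpider(scrapy.Spider):
    name = 'forum'
    start_urls = ['http://example.com/forum/']

    def parse(self, response):
        for href in response.css('a::attr(href)').getall():
            # Only requests carrying this meta key are considered
            # by AvoidDupContentMiddleware.
            yield response.follow(href, callback=self.parse,
                                  meta={'avoid_dup_content': True})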

Optional settings:

  • AVOID_DUP_CONTENT_THRESHOLD = 0.98 - the minimum duplicate probability at which requests are dropped.

  • AVOID_DUP_CONTENT_EXPLORATION = 0.05 - the probability of still making a request that would otherwise be dropped.

  • AVOID_DUP_CONTENT_INITIAL_QUEUE_LIMIT = 300 - the number of pages that must be downloaded before the DupePredictor is initialized.
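
How these settings could interact when deciding whether to drop a request can be sketched as follows (illustrative only, not the middleware’s actual code; the setting values are the defaults listed above):

import random

AVOID_DUP_CONTENT_THRESHOLD = 0.98
AVOID_DUP_CONTENT_EXPLORATION = 0.05

def should_drop(dupe_prob):
    # Drop only when the predictor is confident the content is a duplicate,
    # but let a small fraction of such requests through for exploration so
    # the model can notice if the site's structure changes.
    if dupe_prob < AVOID_DUP_CONTENT_THRESHOLD:
        return False
    return random.random() > AVOID_DUP_CONTENT_EXPLORATION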

How it works

Duplicate detection is based on MinHashLSH from the datasketch library. Text 4-shingles of words are used for hashing; shingles do not span line breaks in the extracted text.
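
As a rough illustration of that approach using datasketch directly (a sketch of the general technique, not MaybeDont’s internals; the helper functions and example texts are made up):

from datasketch import MinHash, MinHashLSH

def shingles(text, n=4):
    # Word 4-shingles, computed per line so that shingles do not
    # span line breaks.
    for line in text.splitlines():
        words = line.split()
        for i in range(len(words) - n + 1):
            yield ' '.join(words[i:i + n])

def minhash(text, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for shingle in shingles(text):
        m.update(shingle.encode('utf-8'))
    return m

# Index one page, then query a near-identical one: pages whose estimated
# Jaccard similarity exceeds the threshold are reported as duplicates.
text_1 = 'topic ten first post with some forum text\nreply one reply two and more'
text_2 = text_1 + '\nposted by admin'  # the new line is under 4 words, so it adds no shingles

lsh = MinHashLSH(threshold=0.9, num_perm=128)
lsh.insert('page_1', minhash(text_1))
print(lsh.query(minhash(text_2)))  # ['page_1']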

Several hypotheses about duplicates are tested:

  1. All URLs with a given URL path are the same (have the same content), regardless of query parameters;

  2. All URLs which only differ in a given URL query parameter are the same (e.g. session tokens can be detected this way);

  3. All URLs which have a given path and only differ in a given URL query parameter are the same;

  4. All URLs which have a given path and query string and only differ in a single given query parameter are the same;

  5. URLs are the same if they have same path and only differ in that some of them have a given param=value query argument added;

  6. URLs are the same if they have a given path and only differ in a given param=value query argument.

A Bernoulli distribution is fit for each hypothesis.
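
As an illustration of what fitting such a distribution could look like (a conceptual sketch, not MaybeDont’s actual estimator): for each hypothesis, count how often URL pairs that the hypothesis declares equivalent really had duplicate content, and turn the counts into a smoothed probability.

def hypothesis_dupe_prob(n_dupes, n_total, prior_dupes=1, prior_total=2):
    # Smoothed estimate of the Bernoulli parameter: "how often does this
    # hypothesis correctly predict a duplicate?" The add-one style prior
    # keeps the estimate reasonable when there are few observations.
    return (n_dupes + prior_dupes) / (n_total + prior_total)

# E.g. the hypothesis "adding start=0 does not change content" held for
# 19 of the 20 URL pairs observed so far:
print(hypothesis_dupe_prob(19, 20))  # ~0.91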

License

License is MIT
