
An alternative to the built-in ItemLoader of Scrapy which focuses on maintainability of fallback parsers.

Project description


Overview

This improves on Scrapy's built-in ItemLoader by adding features that focus on the maintainability of the spider over time.

It lets developers keep track of how often each parser rule is used during a crawl, making it possible to safely remove obsolete css/xpath fallback rules.

Motivation

Scrapy's ItemLoader supports multiple css/xpath rules per field by default, giving developers a convenient way to keep up with site changes.

However, some sites change layouts more often than others, and some run A/B tests for weeks or months, during which developers need to accommodate those variations.

These fallback css/xpath rules quickly become obsolete and fill the project with potentially dead code, posing a threat to the spiders' long-term maintenance.

Original idea proposal: https://github.com/scrapy/scrapy/issues/3795

Usage

from scrapy_loader_upkeep import ItemLoader

class SiteItemLoader(ItemLoader):
    pass

Using it inside a spider callback would look like:

def parse(self, response):
    loader = SiteItemLoader(response=response, stats=self.crawler.stats)

Nothing changes in the usage of this ItemLoader except for injecting the stats dependency, which is necessary to keep track of how often the parser rules are used.

Stats tracking only works for the following ItemLoader methods (a fuller callback sketch follows the list):

  • add_css()

  • replace_css()

  • add_xpath()

  • replace_xpath()
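
For illustration, here is a fuller callback sketch. The QuotesItem class and the CSS selectors are hypothetical placeholders; only the stats keyword is specific to this package:

def parse(self, response):
    # QuotesItem and the selectors are hypothetical; stats= is the
    # only addition over stock ItemLoader usage.
    loader = SiteItemLoader(item=QuotesItem(), response=response,
                            stats=self.crawler.stats)
    loader.add_css('author', 'small.author::text')
    loader.add_css('quote', 'span.quote-old::text')  # older layout
    loader.add_css('quote', 'span.text::text')       # current layout
    yield loader.load_item()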

Basic Spider Example

This is taken from the examples/ directory.

$ scrapy crawl quotestoscrape_simple_has_missing

This should show up in the crawl stats:

2019-06-16 14:32:32 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{ ...
  'parser/QuotesItemLoader/author/css/1': 10,
  'parser/QuotesItemLoader/quote/css/1/missing': 10,
  'parser/QuotesItemLoader/quote/css/2': 10
  ...
}

In this example, we can see that the 1st css rule for the quote field was never matched during the scrape (all 10 calls landed under its missing counter), while the 2nd rule matched every time, making the 1st rule a candidate for removal.
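
Since these counters live in the standard Scrapy stats collector, they can also be inspected programmatically, e.g. in a spider's closed() method. This is a sketch built on Scrapy's stock stats API, not a feature of this package:

def closed(self, reason):
    # Log only the parser-usage counters collected by the loader.
    for key, value in self.crawler.stats.get_stats().items():
        if key.startswith('parser/'):
            self.logger.info('%s: %d', key, value)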

New Feature

As the example above shows, we're limited to the positional context of when add_css(), add_xpath(), etc. were called during execution.

There will be cases where developers maintain a large spider with many different parsers to handle varying layouts on a site. In such cases it helps to have better context about what a parser does or is for.

A new optional name parameter is supported to provide more context around a given parser. It supports the two main ways of declaring fallback parsers:

  1. multiple calls

loader.add_css('NAME', 'h1::text', name='Name from h1')
loader.add_css('NAME', 'meta[value="title"]::attr(content)', name="Name from meta tag")

would result in something like:

{ ...
  'parser/QuotesItemLoader/NAME/css/1/Name from h1': 8,
  'parser/QuotesItemLoader/NAME/css/1/Name from h1/missing': 2,
  'parser/QuotesItemLoader/NAME/css/2/Name from meta tag': 7,
  'parser/QuotesItemLoader/NAME/css/2/Name from meta tag/missing': 3,
  ...
}
  2. grouped parsers in a single call

loader.add_css(
    'NAME',
    [
        'h1::text',
        'meta[value="title"]::attr(content)',
    ],
    name='NAMEs at the main content')
loader.add_css(
    'NAME',
    [
        'footer .name::text',
        'div.page-end span.name::text',
    ],
    name='NAMEs at the bottom of the page')

would result in something like:

{ ...
  'parser/QuotesItemLoader/NAME/css/1/NAMEs at the main content': 8,
  'parser/QuotesItemLoader/NAME/css/1/NAMEs at the main content/missing': 2,
  'parser/QuotesItemLoader/NAME/css/2/NAMEs at the main content': 7,
  'parser/QuotesItemLoader/NAME/css/2/NAMEs at the main content/missing': 3,
  'parser/QuotesItemLoader/NAME/css/3/NAMEs at the bottom of the page': 8,
  'parser/QuotesItemLoader/NAME/css/3/NAMEs at the bottom of the page/missing': 2,
  'parser/QuotesItemLoader/NAME/css/4/NAMEs at the bottom of the page': 7,
  'parser/QuotesItemLoader/NAME/css/4/NAMEs at the bottom of the page/missing': 3,
  ...
}

The latter is useful for grouping fallback parsers that are closely related in terms of layout or arrangement on the page. Note that each selector in a grouped call still gets its own position in the stats key, which is why the example above yields positions 1 through 4.

Requirements

Python 3.6+
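
The package is available on PyPI and can be installed with pip:

$ pip install scrapy-loader-upkeep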

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

scrapy-loader-upkeep-0.1.0.tar.gz (5.2 kB)

Uploaded Source

Built Distribution

scrapy_loader_upkeep-0.1.0-py3-none-any.whl (6.4 kB)

Uploaded Python 3

File details

Details for the file scrapy-loader-upkeep-0.1.0.tar.gz.

File metadata

  • Download URL: scrapy-loader-upkeep-0.1.0.tar.gz
  • Upload date:
  • Size: 5.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.0.1 requests-toolbelt/0.9.1 tqdm/4.32.2 CPython/3.6.4

File hashes

Hashes for scrapy-loader-upkeep-0.1.0.tar.gz

Algorithm    Hash digest
SHA256       6c1ce5e220b6c698e402ad4cb459a2fd8a9289a7ec49cd58f60b8b103a61ef68
MD5          2ff758f7f4740719515248394eaae2ec
BLAKE2b-256  9cc52711fa0d11022b2e48ac2015d820cf3afb272343acd73845397b62f3aac1

See the pip documentation for more details on using hashes.
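
As a quick illustration, a downloaded archive can be checked against the SHA256 digest above using Python's standard hashlib; the local file path is an assumption:

import hashlib

# Assumes the sdist was downloaded to the current directory.
with open('scrapy-loader-upkeep-0.1.0.tar.gz', 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()

expected = '6c1ce5e220b6c698e402ad4cb459a2fd8a9289a7ec49cd58f60b8b103a61ef68'
assert digest == expected, 'hash mismatch: do not install this file'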


File details

Details for the file scrapy_loader_upkeep-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: scrapy_loader_upkeep-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 6.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.0.1 requests-toolbelt/0.9.1 tqdm/4.32.2 CPython/3.6.4

File hashes

Hashes for scrapy_loader_upkeep-0.1.0-py3-none-any.whl

Algorithm    Hash digest
SHA256       e4307d949841ab6c01d122c7bff1f6e564fed54728c783fec9673d528090f609
MD5          6b8217f64684474c57d1dd4b87f066c5
BLAKE2b-256  9acbc7d5146acabeb40de1cb334541c083c4c1eee80966d4e7a53f7bd5ff5ab0

See the pip documentation for more details on using hashes.

