Scrapinghub AutoExtract API integration for Scrapy
Project description
This library integrates Scrapinghub's AI-enabled Automatic Data Extraction (AutoExtract) into a Scrapy spider using a downloader middleware. The middleware adds the AutoExtract result to response.meta['autoextract'] for consumption by the spider.
Installation
pip install scrapy-autoextract
scrapy-autoextract requires Python 3.5+.
Configuration
Add the AutoExtract downloader middleware in the settings file:
    DOWNLOADER_MIDDLEWARES = {
        'scrapy_autoextract.AutoExtractMiddleware': 543,
    }
Note that this should be the last downloader middleware to be executed.
Usage
The middleware is opt-in: it must be explicitly enabled per request, via the {'autoextract': {'enabled': True}} request meta. All the options below can be set either in the project settings file, or per spider in the custom_settings dict.
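For example, a spider might opt in for a single request like this (a minimal sketch; the spider name and URL are placeholders):

    import scrapy

    class ArticleSpider(scrapy.Spider):
        name = 'articles'       # placeholder spider name
        page_type = 'article'   # page type picked up by the middleware

        def start_requests(self):
            # Enable AutoExtract for this request only
            yield scrapy.Request(
                'http://example.com/some-article',
                meta={'autoextract': {'enabled': True}},
            )

        def parse(self, response):
            yield response.meta['autoextract']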
Available settings:
- AUTOEXTRACT_USER [mandatory]: your AutoExtract API key.
- AUTOEXTRACT_URL [optional]: the AutoExtract service URL. Defaults to autoextract.scrapinghub.com.
- AUTOEXTRACT_TIMEOUT [optional]: sets the response timeout from AutoExtract. Defaults to 660 seconds. Can also be overridden per request by setting "download_timeout" in request.meta.
- AUTOEXTRACT_PAGE_TYPE [mandatory]: defines the kind of document to be extracted. Currently available options are "product" and "article". Can also be defined via the spider's page_type attribute, or via the {'autoextract': {'pageType': '...'}} request meta. This is required for the AutoExtract classifier to know what kind of page needs to be extracted.
- extra [optional]: allows sending extra payload data with your AutoExtract request. Must be specified as {'autoextract': {'extra': {}}} request meta and must be a dict.
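Putting the settings together, a project's settings.py could look like the sketch below (the API key is a placeholder; the commented-out optional values are the documented defaults):

    # settings.py (sketch)
    DOWNLOADER_MIDDLEWARES = {
        'scrapy_autoextract.AutoExtractMiddleware': 543,
    }
    AUTOEXTRACT_USER = 'your-api-key'   # placeholder
    AUTOEXTRACT_PAGE_TYPE = 'article'   # or 'product'
    # Optional, shown with their defaults:
    # AUTOEXTRACT_URL = 'autoextract.scrapinghub.com'
    # AUTOEXTRACT_TIMEOUT = 660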
Within the spider, consuming the AutoExtract result is as easy as:
    def parse(self, response):
        yield response.meta['autoextract']
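Individual fields can be picked out as well. The sketch below assumes a 'product' page type and that the result is a dict keyed by page type, as in the AutoExtract API response; the 'product', 'name' and 'price' keys are illustrative and depend on the AutoExtract schema:

    def parse(self, response):
        data = response.meta['autoextract']
        # Illustrative field access; key names follow the AutoExtract
        # schema for the requested page type.
        product = data.get('product', {})
        yield {
            'name': product.get('name'),
            'price': product.get('price'),
            'source_url': response.url,
        }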
Limitations
When using the AutoExtract middleware, there are some limitations:

- The incoming spider request is rendered by AutoExtract, not just downloaded by Scrapy, which can change the result: the IP is different, the headers are different, etc.
- Only GET requests are supported.
- Custom headers and cookies are not supported (i.e. the Scrapy features to set them don't work).
- Proxies are not supported (they would work incorrectly, sitting between Scrapy and AutoExtract instead of between AutoExtract and the website).
- The AutoThrottle extension can work incorrectly for AutoExtract requests, because AutoExtract response times can be much longer than the time required to download a page, so it's best to set AUTOTHROTTLE_ENABLED=False in the settings.
- Redirects are handled by AutoExtract, not by Scrapy, so Scrapy redirect middlewares might have no effect.
- Retries should be disabled, because AutoExtract handles them internally (set RETRY_ENABLED=False in the settings). There is one exception: if too many requests are sent in a short amount of time, AutoExtract returns HTTP code 429, and for that case it's best to set RETRY_HTTP_CODES=[429]; see the settings sketch after this list.
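The throttling and retry recommendations above translate into a settings sketch like this (note that RETRY_HTTP_CODES only matters while retries are enabled):

    # settings.py additions for AutoExtract (sketch)
    AUTOTHROTTLE_ENABLED = False  # AutoExtract timing skews throttling

    # Either disable retries entirely, since AutoExtract retries internally:
    # RETRY_ENABLED = False
    # ...or keep them enabled only for rate-limit responses (HTTP 429):
    RETRY_ENABLED = True
    RETRY_HTTP_CODES = [429]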
Download files
Source Distribution: scrapy-autoextract-0.2.tar.gz
Built Distribution: scrapy_autoextract-0.2-py2.py3-none-any.whl
File details
Details for the file scrapy-autoextract-0.2.tar.gz.
File metadata
- Download URL: scrapy-autoextract-0.2.tar.gz
- Upload date:
- Size: 7.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/2.0.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.6.0 requests-toolbelt/0.9.1 tqdm/4.23.4 CPython/3.6.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | dfc7863a9f18ac0523684c4ccf7504089c9b66787e88d7371f6a7a07762a580e
MD5 | a0ecd4473d3649f8f14f8a1a6e336563
BLAKE2b-256 | 87375a8a3a00ff1b6e6e70e9c49815a09587ee2fd7ff012a187e9ce8fd50fb0f
File details
Details for the file scrapy_autoextract-0.2-py2.py3-none-any.whl.
File metadata
- Download URL: scrapy_autoextract-0.2-py2.py3-none-any.whl
- Upload date:
- Size: 8.6 kB
- Tags: Python 2, Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/2.0.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.6.0 requests-toolbelt/0.9.1 tqdm/4.23.4 CPython/3.6.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | 1c55d448e4bc6cdb008661d60ab7345c3982c97859d0d8aad6f1aebaf37da252
MD5 | b1176fa0612af452bff99c2dda03fadd
BLAKE2b-256 | 1b74982850957d4e5aeaf8cb9f75568cfa913d61eab263b23a0f86bde8f0474d