a library for scraping things
Project description
scrapelib is a library for making requests to less-than-reliable websites. As of version 0.7 it is implemented as a wrapper around requests.
scrapelib originated as part of the Open States project, which scrapes the websites of all 50 state legislatures, and was therefore designed with features desirable when dealing with sites that have intermittent errors or require rate-limiting.
Advantages of using scrapelib over alternatives like httplib2 or simply using requests as-is (a short sketch combining these features follows the list):
All of the power of the superb requests library.
HTTP, HTTPS, and FTP requests via an identical API
Support for simple caching with pluggable cache backends
Request throttling
Configurable retries for non-permanent site failures
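A minimal sketch of how these features might be combined. The retry_attempts, retry_wait_seconds, cache_storage, and FileCache names follow the scrapelib documentation; the target URL and cache directory are placeholders chosen for illustration:

import scrapelib

# throttle to 10 requests per minute and retry transient failures
s = scrapelib.Scraper(requests_per_minute=10,
                      retry_attempts=2,
                      retry_wait_seconds=10)

# optionally cache responses to disk ('cache' is an arbitrary directory name)
s.cache_storage = scrapelib.FileCache('cache')
s.cache_write_only = False

# the same API is used for http://, https://, and ftp:// URLs
response = s.get('http://example.com')
print(response.status_code)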
Written by James Turk <james.p.turk@gmail.com>; thanks to Michael Stephens for the initial urllib2/httplib2 version.
See https://github.com/jamesturk/scrapelib/graphs/contributors for contributors.
Requirements
Python 2.7, 3.3, or 3.4
requests >= 2.0 (earlier versions may work but aren’t tested)
Example Usage
Documentation: http://scrapelib.readthedocs.org/en/latest/
import scrapelib
s = scrapelib.Scraper(requests_per_minute=10)

# Grab Google front page
s.get('http://google.com')

# Will be throttled to 10 HTTP requests per minute
while True:
    s.get('http://example.com')
Download files
Source Distribution
File details
Details for the file scrapelib-1.0.2.tar.gz.
File metadata
- Download URL: scrapelib-1.0.2.tar.gz
- Upload date:
- Size: 13.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
File hashes
Algorithm | Hash digest
---|---
SHA256 | 668182c8451d95d871a7350fd9290353b652f6056a223bff0fa1f69f0b12860e
MD5 | 3b7cf0216e3b043c3db2d4210f7050f2
BLAKE2b-256 | eb7a97cda577336caff5bb1716ce45ff93d94b09826e6e30d0ea5148b4335634
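For reference, a downloaded archive can be checked against the SHA256 digest above using only the standard library. This is a generic verification sketch, not part of scrapelib, and it assumes the local filename matches the listing:

import hashlib

expected = '668182c8451d95d871a7350fd9290353b652f6056a223bff0fa1f69f0b12860e'

# hash the downloaded archive in chunks and compare with the published digest
sha256 = hashlib.sha256()
with open('scrapelib-1.0.2.tar.gz', 'rb') as f:
    for chunk in iter(lambda: f.read(8192), b''):
        sha256.update(chunk)

assert sha256.hexdigest() == expected, 'checksum mismatch'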