A library for crawling websites

Project description

http-crawler is a library for crawling websites. It uses requests to speak HTTP.

Installation

Install with pip:

$ pip install http-crawler

Usage

The http_crawler module provides one generator function, crawl.

crawl is called with a URL and yields instances of requests’ Response class.

crawl will request the page at the given URL, and will extract all URLs from the response. It will then make a request for each of those URLs, and will repeat the process until it has requested every URL linked to from pages on the original URL’s domain. It will not extract or process URLs from any page with a different domain to the original URL.
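The strategy described above is essentially a breadth-first traversal of the site's link graph. The following is a simplified sketch of how such a crawler could be written with requests and the standard library; it is an illustration of the described behaviour, not http-crawler's actual implementation (crawl_sketch, LinkExtractor, and same_domain are hypothetical names):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

import requests


class LinkExtractor(HTMLParser):
    """Collect href attributes from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value:
                    self.links.append(value)


def same_domain(url, base):
    """True if url and base share a network location (domain)."""
    return urlparse(url).netloc == urlparse(base).netloc


def crawl_sketch(start_url):
    seen = {start_url}
    queue = [start_url]
    while queue:
        url = queue.pop(0)
        rsp = requests.get(url)
        yield rsp
        # Request external URLs once, but only extract further links
        # from pages on the original URL's domain.
        if not same_domain(url, start_url):
            continue
        extractor = LinkExtractor()
        extractor.feed(rsp.text)
        for link in extractor.links:
            absolute = urljoin(url, link)  # resolve relative links
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
```

The seen set is what guarantees termination: each URL is requested at most once, so the loop ends once every reachable link has been visited.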

For instance, this is how you would use crawl to find and log any broken links on a site:

>>> from http_crawler import crawl
>>> for rsp in crawl('http://www.example.com'):
...     if rsp.status_code != 200:
...         print('Got {} at {}'.format(rsp.status_code, rsp.url))

crawl has a number of options:

  • follow_external_links (default True) If set, crawl will make a request for every URL it encounters, including ones with a different domain to the original URL. If not set, crawl will ignore all URLs that have a different domain to the original URL. In either case, crawl will not extract further URLs from a page with a different domain to the original URL.

  • ignore_fragments (default True) If set, crawl will ignore the fragment part of any URL. This means that if crawl encounters http://domain/path#anchor, it will make a request for http://domain/path. Moreover, it means that if crawl encounters http://domain/path#anchor1 and http://domain/path#anchor2, it will only make one request.

  • verify (default True) This option controls the behaviour of SSL certificate verification. See the requests documentation for more details.
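The de-duplication that ignore_fragments performs can be reproduced with the standard library's urldefrag, which is one way such fragment stripping could be implemented (a sketch of the idea, not necessarily http-crawler's internals):

```python
from urllib.parse import urldefrag

# Stripping the fragment makes URLs that differ only by anchor
# compare equal, so a crawler needs just one request for both.
url1, _ = urldefrag('http://domain/path#anchor1')
url2, _ = urldefrag('http://domain/path#anchor2')
print(url1 == url2)  # both reduce to 'http://domain/path'
```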

Motivation

Why another crawling library? There are certainly lots of Python tools for crawling websites, but all the ones I could find were either too complex or too simple, or had too many dependencies.

http-crawler is designed to be a library and not a framework, so it should be straightforward to use in applications or other libraries.

Contributing

There are a handful of enhancements on the issue tracker that would be suitable for somebody looking to contribute to Open Source for the first time.

For instructions about making Pull Requests, see GitHub’s guide.

All contributions should include tests with 100% code coverage, and should comply with PEP 8. The project uses tox for running tests and checking code quality metrics.

To run the tests:

$ tox

Project details

Source distribution: http-crawler-0.2.1.tar.gz (3.7 kB)

Hashes for http-crawler-0.2.1.tar.gz:

Algorithm   Hash digest
SHA256      50f83d3ef82bb2ba5562aaad78a8ed812adaceec4b8633a5d949058be73c53c2
MD5         fbc176eddb431c4dbf218d7bc9658f14
BLAKE2b-256 fcf4da694dacc99fe444002ff587fddb240aa1055149fc420352ae9d53b66cfb
