
http-crawler

A library for crawling websites

http-crawler is a library for crawling websites. It uses requests to speak HTTP.

Installation

Install with pip:

$ pip install http-crawler

Usage

The http_crawler module provides one generator function, crawl.

crawl is called with a URL, and yields instances of the requests library’s Response class.

crawl requests the page at the given URL and extracts all URLs from the response. It then makes a request for each of those URLs, repeating the process until it has requested every URL linked to from pages on the original URL’s domain. It does not extract or process URLs from any page on a different domain from the original URL.

For instance, this is how you would use crawl to find and log any broken links on a site:

>>> from http_crawler import crawl
>>> for rsp in crawl('http://www.example.com'):
...     if rsp.status_code != 200:
...         print('Got {} at {}'.format(rsp.status_code, rsp.url))

Motivation

Why another crawling library? There are certainly lots of Python tools for crawling websites, but every one I could find was either too complex, too simple, or burdened with too many dependencies.

http-crawler is designed to be a library and not a framework, so it should be straightforward to use in applications or other libraries.

Contributing

There are a handful of enhancements on the issue tracker that would be suitable for somebody looking to contribute to Open Source for the first time.

For instructions about making Pull Requests, see GitHub’s guide.

All contributions should include tests with 100% code coverage, and should comply with PEP 8. The project uses tox for running tests and checking code quality metrics.

To run the tests:

$ tox
