A high-level Web Crawling and Web Scraping framework
Overview
Scrapy is a fast, high-level web crawling and web scraping framework used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
For more information, including a list of features, check the Scrapy homepage at: http://scrapy.org
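As a quick illustration of what a crawler looks like, here is a minimal spider sketch. The target site (quotes.toscrape.com), the spider name and the CSS selectors are illustrative assumptions, not something shipped with Scrapy:

    import scrapy


    class QuotesSpider(scrapy.Spider):
        # Hypothetical example spider: the name, start URL and selectors
        # below are illustrative assumptions for this sketch.
        name = "quotes"
        start_urls = ["http://quotes.toscrape.com"]

        def parse(self, response):
            # Yield one item per quote block found on the page
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").extract_first(),
                    "author": quote.css("small.author::text").extract_first(),
                }

The parse callback receives each downloaded response and can yield extracted items, or further requests for Scrapy to follow.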
Requirements
Python 2.7 or Python 3.3+
Works on Linux, Windows, Mac OS X and BSD
Install
The quick way:
pip install scrapy
For more details see the install section in the documentation: http://doc.scrapy.org/en/latest/intro/install.html
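Assuming the installation succeeded, the scrapy command-line tool becomes available. A spider kept in a standalone file, such as the sketch in the Overview above (assumed here to be saved as quotes_spider.py), can be checked and run like this:

scrapy version
scrapy runspider quotes_spider.py -o quotes.json

The -o option tells Scrapy to serialize the yielded items to a feed file, with the output format inferred from the file extension.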
Releases
You can download the latest stable and development releases from: http://scrapy.org/download/
Documentation
Documentation is available online at http://doc.scrapy.org/ and in the docs directory.
Community (blog, Twitter, mailing list, IRC)
Contributing
See http://doc.scrapy.org/en/master/contributing.html
Code of Conduct
Please note that this project is released with a Contributor Code of Conduct (see https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md).
By participating in this project, you agree to abide by its terms. Please report unacceptable behavior to opensource@scrapinghub.com.
Companies using Scrapy
Commercial Support