Web Scraping Framework


## IOWeb Framework

Python framework to build web crawlers.

What we have at the moment:

  • a system designed to run a large number of network threads (100 or 500, for example) on one CPU core (see the gevent/urllib3 sketch after this list)

  • a built-in feature to collect items into chunks and then process each chunk at once, e.g. a MongoDB bulk write (see the bulk-write sketch below)

  • asynchronous operations are powered by gevent

  • network requests are handled with urllib3

  • urllib3 is monkey-patched to extract certificate details (a rough illustration with the ssl module follows the list)

  • urllib3 is monkey-patched to skip DNS resolution when the domain's IP address has been provided (see the IP-connect sketch below)

  • a built-in stat module to count events, with built-in logging to InfluxDB (see the counters sketch below)

  • retrying on errors (see the retry sketch below)

  • no tests

  • no documentation
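
The concurrency model in the list above is not shown with ioweb's own (undocumented) API. The following is only a minimal sketch of the same idea using gevent and urllib3 directly: greenlets make blocking sockets cooperative, so hundreds of requests can be in flight on one CPU core. The URLs and pool size are placeholders.

```python
from gevent import monkey
monkey.patch_all()  # patch sockets before urllib3 is imported

from gevent.pool import Pool
import urllib3

http = urllib3.PoolManager(maxsize=100)

def fetch(url):
    # A plain blocking request; gevent switches to another greenlet
    # whenever this one is waiting on the network.
    resp = http.request("GET", url, timeout=10.0)
    return url, resp.status

pool = Pool(100)  # up to 100 concurrent "network threads" (greenlets)
urls = ["https://example.com/page/%d" % num for num in range(200)]
for url, status in pool.imap_unordered(fetch, urls):
    print(url, status)
```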
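
Likewise, a minimal sketch of the chunking idea, assuming pymongo and a local MongoDB instance; the collection name and chunk size are made up for the example, not taken from ioweb:

```python
from pymongo import MongoClient, InsertOne

coll = MongoClient("mongodb://localhost:27017")["scraping"]["pages"]

CHUNK_SIZE = 1000
buffer = []

def add_item(doc):
    # Accumulate write operations and flush only when a full chunk is ready.
    buffer.append(InsertOne(doc))
    if len(buffer) >= CHUNK_SIZE:
        flush()

def flush():
    if buffer:
        coll.bulk_write(buffer, ordered=False)  # one round-trip per chunk
        buffer.clear()

for page_id in range(5000):
    add_item({"_id": page_id, "html": "<html>...</html>"})
flush()  # do not forget the final, partially filled chunk
```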
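
The monkey patch that extracts certificate details is not reproduced here. Purely as an illustration of the kind of data such a patch can surface, this is how the standard ssl module exposes a peer certificate (the host name is a placeholder):

```python
import socket
import ssl

host = "example.com"  # placeholder target
ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()  # parsed certificate as a dict

print(cert["subject"], cert["notAfter"])
```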
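
A sketch of what skipping DNS resolution means in practice, again with plain urllib3 rather than ioweb's patch: connect to a known IP address directly and send the original domain in the Host header. The IP and domain below are placeholders; HTTPS needs extra care (SNI and certificate verification), which is presumably why ioweb patches urllib3 instead.

```python
import urllib3

ip = "192.0.2.10"        # assume this was resolved earlier or supplied by the user
domain = "example.com"   # placeholder domain

http = urllib3.PoolManager()
resp = http.request(
    "GET",
    "http://%s/" % ip,            # no DNS lookup: we talk to the IP directly
    headers={"Host": domain},     # the server still sees the expected domain
)
print(resp.status)
```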
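
The stat module itself is not documented either; the sketch below is only a guess at the general shape of event counting: in-memory counters that a background task would periodically push to InfluxDB (the InfluxDB export is not shown).

```python
from collections import Counter

events = Counter()

def count(name, amount=1):
    # Count a named event; something else would ship these counters
    # to InfluxDB on a timer (not shown here).
    events[name] += amount

count("request-ok")
count("bytes-downloaded", 2048)
print(dict(events))  # {'request-ok': 1, 'bytes-downloaded': 2048}
```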
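
How ioweb retries is likewise not documented. As a point of comparison only, urllib3 has its own built-in retry-with-backoff mechanism:

```python
import urllib3
from urllib3.util.retry import Retry

retry = Retry(
    total=3,                                # up to 3 retries per request
    backoff_factor=0.5,                     # exponential backoff between attempts
    status_forcelist=[500, 502, 503, 504],  # also retry on these HTTP codes
)
http = urllib3.PoolManager(retries=retry)
resp = http.request("GET", "https://example.com/")
print(resp.status)
```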

I am using ioweb for bulk web scraping, e.g. crawling 500M pages in a few days.

## Places to talk

