Web Scraping Framework

## IOWeb Framework

Python framework to build web crawlers.

What we have at the moment:

  • system designed to run a large number of network threads (like 100 or 500) on one CPU core; see the first sketch after this list

  • built-in feature to collect items into chunks and then process each chunk at once (like a MongoDB bulk write); see the chunking sketch below

  • asynchronous I/O is powered by gevent

  • network requests are handled with urllib3

  • urllib3 is monkey-patched to extract certificate details

  • urllib3 is monkey-patched to skip DNS resolution when the domain's IP address is provided

  • built-in stat module to count events, with built-in logging to InfluxDB; see the stats sketch below

  • retrying on errors

  • no tests

  • no documentation
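
None of the code below is ioweb's own API (the project ships no documentation yet); these are minimal sketches of the ideas the list describes. First, the concurrency model: many "network threads" as gevent greenlets sharing one urllib3 connection pool, with simple exponential-backoff retries. URLs, pool sizes and timeouts are placeholder values.

```python
# Not the ioweb API: a minimal gevent + urllib3 sketch of running many
# concurrent "network threads" (greenlets) on one CPU core, with retries.
from gevent import monkey; monkey.patch_all()  # patch sockets before other imports

import gevent
from gevent.pool import Pool
import urllib3

http = urllib3.PoolManager(maxsize=100)  # shared connection pool

def fetch(url, retries=3):
    for attempt in range(retries):
        try:
            resp = http.request("GET", url, timeout=10.0)
            return url, resp.status, len(resp.data)
        except urllib3.exceptions.HTTPError:
            gevent.sleep(2 ** attempt)  # back off, then retry
    return url, None, 0

urls = ["https://example.com/page/%d" % n for n in range(1000)]  # placeholder URLs
pool = Pool(500)  # roughly "500 network threads" on a single core
for url, status, size in pool.imap_unordered(fetch, urls):
    print(url, status, size)
```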
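
The chunking feature can be pictured as a plain buffer-and-flush loop. Again, this is an illustration rather than ioweb's interface; `crawl_items()` is a hypothetical generator of scraped documents, and the sketch assumes pymongo and a local MongoDB server.

```python
# Not the ioweb API: buffer scraped items and flush each chunk with one
# MongoDB bulk insert instead of one write per item.
from pymongo import MongoClient

CHUNK_SIZE = 1000
pages = MongoClient("mongodb://localhost:27017")["scraper"]["pages"]

def flush(chunk):
    if chunk:
        pages.insert_many(chunk)  # one round trip for the whole chunk

chunk = []
for item in crawl_items():  # hypothetical generator of scraped documents
    chunk.append(item)
    if len(chunk) >= CHUNK_SIZE:
        flush(chunk)
        chunk = []
flush(chunk)  # flush whatever is left at the end
```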
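
The stats bullet can be approximated the same way: count events in memory and periodically push the totals out. This sketch assumes the `influxdb` client package and an InfluxDB 1.x server with a `crawler` database; it is not ioweb's built-in stat module.

```python
# Not the ioweb API: count events in a process-wide Counter and flush the
# totals to InfluxDB every few seconds from a background greenlet
# (assumes gevent monkey patching is already applied, as in the first sketch).
from collections import Counter

import gevent
from influxdb import InfluxDBClient

stats = Counter()
influx = InfluxDBClient(host="localhost", port=8086, database="crawler")

def dump_stats(interval=10):
    while True:
        gevent.sleep(interval)
        if stats:
            influx.write_points([{"measurement": "events", "fields": dict(stats)}])
            stats.clear()

gevent.spawn(dump_stats)

# somewhere in the crawler code:
stats["request-ok"] += 1
stats["network-error"] += 1
```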

I am using ioweb for bulk web scraping, e.g. crawling 500 million pages in a few days.

## Places to talk
