feedparser but faster and worse

Project description

speedparser

Speedparser is a black-box “style” reimplementation of the Universal Feed Parser. It uses some feedparser code for date and authors, but mostly re-implements its data normalization algorithms based on feedparser output. It uses lxml for feed parsing and for optional HTML cleaning. Its compatibility with feedparser is very good for a strict subset of fields, but poor for fields outside that subset. See tests/speedparsertests.py for more information on which fields are more or less compatible and which are not.

On an Intel(R) Core(TM) i5 750, running only on one core, feedparser managed 2.5 feeds/sec on the test feed set (roughly 4200 “feeds” in tests/feeds.tar.bz2), while speedparser manages around 65 feeds/sec with HTML cleaning on and 200 feeds/sec with cleaning off.

installing

pip install speedparser

usage

Usage is similar to feedparser:

>>> import speedparser
>>> result = speedparser.parse(feed)
>>> result = speedparser.parse(feed, clean_html=False)

differences

There are a few interface differences and many result differences between speedparser and feedparser. The biggest similarities are that both return a FeedParserDict object (with keys accessible as attributes), both set the bozo key when an error is encountered, and various aspects of the feed and entries keys are likely to be identical or very similar.

speedparser uses different (and in some cases fewer, or no; buyer beware) data-cleaning algorithms than feedparser. When cleaning is enabled, lxml’s html.clean library is used to clean HTML, giving similar but not identical protection against various attributes and elements. If you supply your own Cleaner object via the clean_html kwarg, speedparser will use it to clean the various attributes of the feed and entries.

speedparser does not attempt to fix character encoding by default, because this processing can take a long time for large feeds. If the encoding value of the feed is wrong, or if you want this extra level of error tolerance, you can either use the chardet module to detect the encoding from the document yourself, or pass encoding=True to speedparser.parse, which will fall back to encoding detection if it encounters encoding errors.
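As a stdlib-only illustration of that kind of pre-normalization (the function name and candidate-encoding list here are hypothetical, not part of speedparser’s API; the chardet module mentioned above is more robust), you could decode the raw bytes yourself before parsing:

```python
def decode_feed(raw_bytes, candidates=("utf-8", "iso-8859-1")):
    """Try a few likely encodings in order, falling back to replacement
    characters so the result is always text a parser can accept."""
    for enc in candidates:
        try:
            return raw_bytes.decode(enc)
        except UnicodeDecodeError:
            continue
    # No candidate decoded cleanly; keep going with replacement chars.
    return raw_bytes.decode("utf-8", errors="replace")
```

The decoded text can then be handed to speedparser.parse as usual.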

If your application is using feedparser to consume many feeds at once and CPU is becoming a bottleneck, you might want to try out speedparser as an alternative (using feedparser as a backup). If you are writing an application that does not ingest many feeds, or where CPU is not a problem, you should use feedparser as it is flexible with bad or malformed data and has a much better test suite.
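A minimal sketch of that feedparser-as-backup pattern (the wrapper function is hypothetical; it only assumes each parser returns a mapping with a bozo key, as described above):

```python
def parse_with_fallback(feed, parsers):
    """Run each parser callable in turn and return the first result whose
    bozo flag is unset; if every parser flags the feed, return the last
    result so the caller can still inspect the error."""
    result = None
    for parse in parsers:
        result = parse(feed)
        if not result.get("bozo"):
            return result
    return result

# Intended use (assuming both libraries are installed):
#   import speedparser, feedparser
#   result = parse_with_fallback(raw_feed,
#                                [speedparser.parse, feedparser.parse])
```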

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

speedparser-0.2.0.tar.gz (17.9 kB)

Uploaded Source

File details

Details for the file speedparser-0.2.0.tar.gz.

File metadata

  • Download URL: speedparser-0.2.0.tar.gz
  • Upload date:
  • Size: 17.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for speedparser-0.2.0.tar.gz:

  • SHA256: 1074e230145b4d3fd44386c8f7c20ebc51c444b4e2f8efe811b107cfbb880b4c
  • MD5: 8de5f1b0920307880ce402c079c0b435
  • BLAKE2b-256: 9c3d74754b87cce30c790dc359f7e7c86eb20aea7317a22b133f52c1d6a080e5
