Project description

HTML to Text

Extract text from HTML

  • Free software: MIT license

How is html_text different from .xpath('//text()') from lxml or .get_text() from Beautiful Soup?

  • Text extracted with html_text does not contain inline styles, JavaScript, comments, and other text that is not normally visible to users;

  • html_text normalizes whitespace, but in a smarter way than .xpath('normalize-space()'): it adds spaces around inline elements (which are often used as block elements in HTML markup) and tries to avoid adding extra spaces around punctuation (see the comparison after this list);

  • html-text can add newlines (e.g. after headers or paragraphs), so that the output text looks more like how it is rendered in browsers.
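
For illustration, here is a rough comparison with lxml's normalize-space() showing the whitespace handling described above (the output shown is what these heuristics aim for and may differ slightly across versions):

>>> import html_text, lxml.html
>>> html = '<div>Hello<b>world</b>!</div>'
>>> lxml.html.fromstring(html).xpath('normalize-space()')
'Helloworld!'
>>> html_text.extract_text(html)
'Hello world!'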

Install

Install with pip:

pip install html-text

The package depends on lxml, so you might need to install additional packages: http://lxml.de/installation.html

Usage

Extract text from HTML:

>>> import html_text
>>> html_text.extract_text('<h1>Hello</h1> world!')
'Hello\n\nworld!'

>>> html_text.extract_text('<h1>Hello</h1> world!', guess_layout=False)
'Hello world!'

The passed HTML is first cleaned of invisible, non-text content such as styles, and then the text is extracted.

You can also pass an already parsed lxml.html.HtmlElement:

>>> import html_text
>>> tree = html_text.parse_html('<h1>Hello</h1> world!')
>>> html_text.extract_text(tree)
'Hello\n\nworld!'

If you want to handle cleaning manually, use the lower-level html_text.etree_to_text:

>>> import html_text
>>> tree = html_text.parse_html('<h1>Hello<style>.foo{}</style>!</h1>')
>>> cleaned_tree = html_text.cleaner.clean_html(tree)
>>> html_text.etree_to_text(cleaned_tree)
'Hello!'

parsel.Selector objects are also supported; you can define a parsel.Selector to extract text only from specific elements:

>>> import html_text
>>> sel = html_text.cleaned_selector('<h1>Hello</h1> world!')
>>> subsel = sel.xpath('//h1')
>>> html_text.selector_to_text(subsel)
'Hello'

NB: parsel.Selector objects are not cleaned automatically; you need to call html_text.cleaned_selector first.

Main functions and objects:

  • html_text.extract_text accepts HTML and returns the extracted text.

  • html_text.etree_to_text accepts a parsed lxml Element and returns the extracted text; it is a lower-level function, and cleaning is not handled here.

  • html_text.cleaner is an lxml.html.clean.Cleaner instance which can be used with html_text.etree_to_text; its options are tuned for speed and text extraction quality.

  • html_text.cleaned_selector accepts HTML as text or as an lxml.html.HtmlElement, and returns a cleaned parsel.Selector.

  • html_text.selector_to_text accepts a parsel.Selector and returns the extracted text.

If guess_layout is True (default), a newline is added before and after newline_tags, and two newlines are added before and after double_newline_tags. This heuristic makes the extracted text more similar to how it is rendered in the browser. Default newline and double newline tags can be found in html_text.NEWLINE_TAGS and html_text.DOUBLE_NEWLINE_TAGS.
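
For example, assuming <p> is among the default double newline tags (matching how browsers separate paragraphs), consecutive paragraphs come out separated by a blank line:

>>> import html_text
>>> html_text.extract_text('<p>First.</p><p>Second.</p>')
'First.\n\nSecond.'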

It is possible to customize how newlines are added, using newline_tags and double_newline_tags arguments (which are html_text.NEWLINE_TAGS and html_text.DOUBLE_NEWLINE_TAGS by default). For example, don’t add a newline after <div> tags:

>>> newline_tags = html_text.NEWLINE_TAGS - {'div'}
>>> html_text.extract_text('<div>Hello</div> world!',
...                        newline_tags=newline_tags)
'Hello world!'
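
Similarly, double_newline_tags controls which tags are surrounded by a blank line. An analogous sketch, dropping <h1> from the default set (and assuming <h1> is not also in the default NEWLINE_TAGS), so that no newline is added after it:

>>> double_newline_tags = html_text.DOUBLE_NEWLINE_TAGS - {'h1'}
>>> html_text.extract_text('<h1>Hello</h1> world!',
...                        double_newline_tags=double_newline_tags)
'Hello world!'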

Apart from just getting text from the page (e.g. for display or search), one intended usage of this library is machine learning (feature extraction). If you want to use the text of an HTML page as a feature (e.g. for classification), this library gives you plain text that you can later feed into a standard text classification pipeline. If you feel that you need the HTML structure as well, check out the webstruct library.
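
For example, a minimal sketch of such a pipeline (this assumes scikit-learn, which is not a dependency of html-text; the pages and variable names are made up for illustration):

>>> import html_text
>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> pages = ['<h1>Sale!</h1> Buy two, get one free.',
...          '<h1>Weather</h1> Rain expected tomorrow.']
>>> texts = [html_text.extract_text(page) for page in pages]
>>> features = TfidfVectorizer().fit_transform(texts)  # feature matrix for a classifier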


History

0.6.0 (2024-04-04)

  • Moved the Git repository to https://github.com/zytedata/html-text.

  • Added official support for Python 3.9-3.12.

  • Removed support for Python 2.7 and 3.5-3.7.

  • Switched the lxml dependency to lxml[html_clean] to support lxml >= 5.2.0.

  • Switched from Travis CI to GitHub Actions.

  • CI improvements.

0.5.2 (2020-07-22)

0.5.1 (2019-05-27)

Fixed whitespace handling when guess_punct_space is False: html-text was producing unnecessary spaces after newlines.

0.5.0 (2018-11-19)

The parsel dependency is removed in this release, though parsel is still supported.

  • the parsel package is no longer required to install and use html-text;

  • the html_text.etree_to_text function allows extracting text from lxml Elements;

  • html_text.cleaner is an lxml.html.clean.Cleaner instance with options tuned for text extraction speed and quality;

  • test and documentation improvements;

  • Python 3.7 support.

0.4.1 (2018-09-25)

Fixed a regression in the 0.4.0 release: text was empty when html_text.extract_text was called with a node that has text but no children.

0.4.0 (2018-09-25)

This is a backwards-incompatible release: by default, html_text functions now add newlines after elements, if appropriate, to make the extracted text look more like how it is rendered in a browser.

To turn this off, pass the guess_layout=False option to html_text functions.

  • Added the guess_layout option to make extracted text look more like how it is rendered in a browser.

  • Added tests of layout extraction for real webpages.

0.3.0 (2017-10-12)

  • Exposed functions that operate on selectors; use .//text() to extract text from a selector.

0.2.1 (2017-05-29)

  • Packaging fix (include CHANGES.rst)

0.2.0 (2017-05-29)

  • Fix unwanted joins of words with inline tags: spaces are added for inline tags too, but a heuristic is used to preserve punctuation without extra spaces.

  • Accept parsed HTML trees.

0.1.1 (2017-01-16)

  • Travis-CI and codecov.io integrations added

0.1.0 (2016-09-27)

  • First release on PyPI.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

html_text-0.6.0.tar.gz (53.3 kB)

Uploaded: Source

Built Distribution

html_text-0.6.0-py2.py3-none-any.whl (7.7 kB)

Uploaded: Python 2, Python 3

File details

Details for the file html_text-0.6.0.tar.gz.

File metadata

  • Download URL: html_text-0.6.0.tar.gz
  • Upload date:
  • Size: 53.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.12.2

File hashes

Hashes for html_text-0.6.0.tar.gz
  • SHA256: ce0623354809fbb2ac59d45981633483f2c020847e8a17938460a4d375636f97
  • MD5: e1baa80815a452a0dd7678f6f66bcbb3
  • BLAKE2b-256: 60e6542804b9cc3fc220beacfd07c1abb32e0a83754494e2cb175690222d52c9

File details

Details for the file html_text-0.6.0-py2.py3-none-any.whl.

File metadata

  • Download URL: html_text-0.6.0-py2.py3-none-any.whl
  • Upload date:
  • Size: 7.7 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.12.2

File hashes

Hashes for html_text-0.6.0-py2.py3-none-any.whl
  • SHA256: 1be28e213baf8903bf4c682358a6839d39c2a5b2ed2a9fde0a5709951ae705c9
  • MD5: 6586359680e08f264e3d1c6b39aceade
  • BLAKE2b-256: c2fb23fbeb5b7a9480e6e1ab113c1d0ac392328255b691810e50d7e3b3b5f0a0

