
English word segmentation.

Project description

Python Word Segmentation
========================

.. image:: https://api.travis-ci.org/grantjenks/wordsegment.svg
    :target: http://www.grantjenks.com/blog/portfolio-post/english-word-segmentation-python/

`WordSegment`_ is an Apache2-licensed module for English word
segmentation, written in pure Python and based on a trillion-word corpus.

Based on code from the chapter "`Natural Language Corpus Data`_" by Peter
Norvig from the book "`Beautiful Data`_" (Segaran and Hammerbacher, 2009).

Data files are derived from the `Google Web Trillion Word Corpus`_, as
described by Thorsten Brants and Alex Franz, and `distributed`_ by the
Linguistic Data Consortium. This module contains only a subset of that
data. The unigram data includes only the most common 333,000 words. Similarly,
bigram data includes only the most common 250,000 phrases. Every word and
phrase is lowercased with punctuation removed.

.. _`WordSegment`: http://www.grantjenks.com/docs/wordsegment/
.. _`Natural Language Corpus Data`: http://norvig.com/ngrams/
.. _`Beautiful Data`: http://oreilly.com/catalog/9780596157111/
.. _`Google Web Trillion Word Corpus`: http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html
.. _`distributed`: https://catalog.ldc.upenn.edu/LDC2006T13

Features
--------

- Pure-Python
- Fully documented
- 100% Test Coverage
- Includes unigram and bigram data
- Command line interface for batch processing
- Easy to hack (e.g. different scoring, new data, different language)
- Developed on Python 2.7
- Tested on CPython 2.6, 2.7, 3.2, 3.3, 3.4 and PyPy 2.5+, PyPy3 2.4+

Quickstart
----------

Installing WordSegment is simple with
`pip <http://www.pip-installer.org/>`_::

    $ pip install wordsegment

You can access documentation in the interpreter with Python's built-in help
function::

    >>> import wordsegment
    >>> help(wordsegment)

Tutorial
--------

In your own Python programs, you'll mostly want to use `segment` to divide a
phrase into a list of its parts::

    >>> from wordsegment import segment
    >>> segment('thisisatest')
    ['this', 'is', 'a', 'test']

WordSegment also provides a command-line interface for batch processing. This
interface accepts two arguments: in-file and out-file. Lines from in-file are
iteratively segmented, joined by a space, and written to out-file. Input and
output default to stdin and stdout respectively. ::

    $ echo thisisatest | python -m wordsegment
    this is a test
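
File names may also be passed explicitly. A hypothetical invocation, assuming
the two positional arguments described above (the file names here are
placeholders)::

    $ python -m wordsegment infile.txt outfile.txt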

The maximum segmented word length is 24 characters. Neither the unigram nor
bigram data contain words exceeding that length. The corpus also excludes
punctuation, and all letters are lowercased. Before segmenting text,
`clean` is called to transform the input to a canonical form::

    >>> from wordsegment import clean
    >>> clean('She said, "Python rocks!"')
    'shesaidpythonrocks'
    >>> segment('She said, "Python rocks!"')
    ['she', 'said', 'python', 'rocks']
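
If you need the same canonicalization elsewhere, a rough re-implementation
follows; this is a sketch that assumes `clean` simply lowercases its input
and strips everything outside ASCII letters and digits::

    >>> import re
    >>> def clean_sketch(text):
    ...     "Approximate `clean`: lowercase, then drop non-alphanumerics."
    ...     return re.sub('[^a-z0-9]+', '', text.lower())
    >>> clean_sketch('She said, "Python rocks!"')
    'shesaidpythonrocks'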

Sometimes it's interesting to explore the unigram and bigram counts
themselves. These are stored in Python dictionaries mapping word to count. ::

    >>> import wordsegment as ws
    >>> ws.load()
    >>> ws.UNIGRAMS['the']
    23135851162.0
    >>> ws.UNIGRAMS['gray']
    21424658.0
    >>> ws.UNIGRAMS['grey']
    18276942.0

Above we see that the spelling `gray` is more common than the spelling `grey`.
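
Counts become probabilities when divided by the corpus size. The "Natural
Language Corpus Data" chapter reports roughly 1,024,908,267,229 tokens in the
underlying corpus; assuming that total, a minimal Norvig-style unigram
segmenter shows how such counts can drive a custom scorer (this is a sketch
of the technique, not the module's own implementation)::

    >>> import math
    >>> TOTAL = 1024908267229.0  # token count reported in the Norvig chapter
    >>> def probability(word):
    ...     "Unigram probability with a length penalty for unknown words."
    ...     if word in ws.UNIGRAMS:
    ...         return ws.UNIGRAMS[word] / TOTAL
    ...     return 10.0 / (TOTAL * 10 ** len(word))
    >>> memo = {}
    >>> def best_split(text):
    ...     "Return (log10 probability, words) for the best split of text."
    ...     if not text:
    ...         return 0.0, []
    ...     if text not in memo:
    ...         memo[text] = max(
    ...             (math.log10(probability(text[:i])) + best_split(text[i:])[0],
    ...              [text[:i]] + best_split(text[i:])[1])
    ...             for i in range(1, min(len(text), 24) + 1))
    ...     return memo[text]
    >>> best_split('thisisatest')[1]
    ['this', 'is', 'a', 'test']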

Bigrams are joined by a space::

    >>> import heapq
    >>> from pprint import pprint
    >>> from operator import itemgetter
    >>> pprint(heapq.nlargest(10, ws.BIGRAMS.items(), itemgetter(1)))
    [('of the', 2766332391.0),
     ('in the', 1628795324.0),
     ('to the', 1139248999.0),
     ('on the', 800328815.0),
     ('for the', 692874802.0),
     ('and the', 629726893.0),
     ('to be', 505148997.0),
     ('is a', 476718990.0),
     ('with the', 461331348.0),
     ('from the', 428303219.0)]

Some bigrams begin with `<s>`. This token marks the start of a sentence::

    >>> ws.BIGRAMS['<s> where']
    15419048.0
    >>> ws.BIGRAMS['<s> what']
    11779290.0
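
The same `nlargest` pattern ranks the most common sentence openers (output
omitted here)::

    >>> starts = [item for item in ws.BIGRAMS.items()
    ...           if item[0].startswith('<s> ')]
    >>> pprint(heapq.nlargest(5, starts, itemgetter(1)))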

The unigram and bigram data are stored in the `wordsegment_data` directory in
the `unigrams.txt` and `bigrams.txt` files respectively.
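
The file format is simple enough to parse directly. A minimal sketch,
assuming each line holds a token (a word, or a space-joined bigram) and its
count separated by a tab, and that `wordsegment_data` sits next to the
installed module::

    >>> import os
    >>> data_dir = os.path.join(
    ...     os.path.dirname(ws.__file__), 'wordsegment_data')
    >>> def parse_counts(filename):
    ...     "Yield (token, count) pairs from a data file."
    ...     with open(os.path.join(data_dir, filename)) as lines:
    ...         for line in lines:
    ...             token, _, count = line.strip().partition('\t')
    ...             yield token, float(count)
    >>> unigrams = dict(parse_counts('unigrams.txt'))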

Reference and Indices
---------------------

* `WordSegment Documentation`_
* `WordSegment at PyPI`_
* `WordSegment at GitHub`_
* `WordSegment Issue Tracker`_

.. _`WordSegment Documentation`: http://www.grantjenks.com/docs/wordsegment/
.. _`WordSegment at PyPI`: https://pypi-hypernode.com/pypi/wordsegment
.. _`WordSegment at GitHub`: https://github.com/grantjenks/wordsegment
.. _`WordSegment Issue Tracker`: https://github.com/grantjenks/wordsegment/issues

WordSegment License
-------------------

Copyright 2015 Grant Jenks

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Platform: any
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Natural Language :: English
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.2
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5



Download files

Download the file for your platform.

Source Distribution

wordsegment-0.7.1.tar.gz (4.4 MB)

Built Distribution

wordsegment-0.7.1-py2.py3-none-any.whl (4.4 MB)

File details

Details for the file wordsegment-0.7.1.tar.gz.

File metadata

  • Download URL: wordsegment-0.7.1.tar.gz
  • Size: 4.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for wordsegment-0.7.1.tar.gz:

  • SHA256: 59a0e61d4b1be53ca214ca600271cc5711e53139e4d2e81e34925515e22a4b5f
  • MD5: 3f28d7cfcba9ea6fb1f1d8af157a2300
  • BLAKE2b-256: bd49f80851a10cf2d83b9792b366feba1b5dcc162ae209234b4f2c5ff6af65ae

File details

Details for the file wordsegment-0.7.1-py2.py3-none-any.whl.

File hashes

Hashes for wordsegment-0.7.1-py2.py3-none-any.whl:

  • SHA256: 289f54579c92e738be8417526b2393ad937199bd45dc190d64b23a6f2ad0ab2b
  • MD5: d41ec1670e75e0626a22543d9095ccd0
  • BLAKE2b-256: b029ad37f58300e751dc2f5889b3101454f3a30042b9f5ec7657b5cf23eb8366
