English word segmentation.

Project description

WordSegment is an Apache2-licensed module for English word segmentation, written in pure Python and based on a trillion-word corpus.

Based on code from the chapter “Natural Language Corpus Data” by Peter Norvig from the book “Beautiful Data” (Segaran and Hammerbacher, 2009).

Data files are derived from the Google Web Trillion Word Corpus, as described by Thorsten Brants and Alex Franz, and distributed by the Linguistic Data Consortium. This module contains only a subset of that data. The unigram data includes only the most common 333,000 words. Similarly, bigram data includes only the most common 250,000 phrases. Every word and phrase is lowercased with punctuation removed.
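To make the data shape concrete, here is a small illustrative sketch (not the module's internals) of how unigram counts translate into relative frequencies. The three counts shown are the top unigram figures Norvig reports for the corpus, and the token total is the figure his chapter uses:

```python
# Illustrative sketch of the data shape, not the module's internals.
# Counts are the top unigram figures from the corpus per Norvig's chapter.
unigram_counts = {
    'the': 23135851162,
    'of': 13151942776,
    'and': 12997637966,
}
TOTAL = 1024908267229.0  # approximate token count of the full corpus

def probability(word):
    """Relative frequency of a lowercase unigram; 0.0 if unseen."""
    return unigram_counts.get(word, 0) / TOTAL
```

Because every entry is lowercased with punctuation removed, lookups should use the same normalization.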

Features

  • Pure-Python

  • Fully documented

  • 100% test coverage

  • Includes unigram and bigram data

  • Command line interface for batch processing

  • Easy to hack (e.g. different scoring, new data, different language)

  • Developed on Python 2.7

  • Tested on CPython 2.6, 2.7, 3.2, 3.3, 3.4 and PyPy 2.2
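Because segmentation is driven by a scoring function, hacking often amounts to swapping that function. The toy segmenter below is a standalone sketch in the spirit of the module (not its actual code): it scores every candidate split and keeps the best product.

```python
# Toy segmenter -- a standalone sketch, not the module's code.
# Swap `score` for any function you like to change segmentation behavior.
from functools import lru_cache

COUNTS = {'this': 5, 'is': 5, 'a': 5, 'test': 5, 'at': 2, 'his': 2}
TOTAL = sum(COUNTS.values())

def score(word):
    """Relative frequency, with a penalty for unknown words that grows
    with length (so long gibberish never beats real words)."""
    if word in COUNTS:
        return COUNTS[word] / TOTAL
    return 10.0 / (TOTAL * 10 ** len(word))

@lru_cache(maxsize=None)
def toy_segment(text):
    """Return the highest-scoring (score, words) split of `text`."""
    if not text:
        return (1.0, ())
    best = None
    for i in range(1, len(text) + 1):
        prefix, suffix = text[:i], text[i:]
        tail_score, tail_words = toy_segment(suffix)
        candidate = (score(prefix) * tail_score, (prefix,) + tail_words)
        if best is None or candidate[0] > best[0]:
            best = candidate
    return best
```

With this toy data, toy_segment('thisisatest')[1] gives ('this', 'is', 'a', 'test'). Replacing score with a bigram-aware or language-specific function changes the behavior without touching the search.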

User Guide

Installing WordSegment is simple with pip:

> pip install wordsegment

You can access documentation in the interpreter with Python’s built-in help function:

>>> import wordsegment
>>> help(wordsegment)

In your own Python programs, you’ll mostly want to use segment to divide a phrase into a list of its parts:

>>> from wordsegment import segment
>>> segment('thisisatest')
['this', 'is', 'a', 'test']

WordSegment also provides a command-line interface for batch processing. This interface accepts two arguments: in-file and out-file. Lines from in-file are segmented one at a time, joined by a space, and written to out-file. Input and output default to stdin and stdout, respectively:

> echo thisisatest | python -m wordsegment
this is a test

API Documentation

  • segment(text)

    Return a list of words that is the best segmentation of text.

  • score(word, prev=None)

    Score a word in the context of the previous word, prev.

  • divide(text, limit=24)

    Yield (prefix, suffix) pairs from text with len(prefix) not exceeding limit.

  • unigram_counts

    Mapping of (unigram, count) pairs. Loaded from the file ‘unigrams.txt’.

  • bigram_counts

    Mapping of (bigram, count) pairs. Loaded from the file ‘bigrams.txt’.
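The contract of divide is small enough to restate as a standalone sketch, based only on the description above (not the module's source):

```python
# Standalone reimplementation of the documented contract of divide(),
# not the module's actual source.
def divide(text, limit=24):
    """Yield (prefix, suffix) pairs from text with len(prefix) <= limit."""
    for i in range(1, min(len(text), limit) + 1):
        yield text[:i], text[i:]
```

For example, list(divide('word', limit=2)) yields [('w', 'ord'), ('wo', 'rd')]. The limit caps prefix length so the segmenter never considers implausibly long candidate words.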

WordSegment License

Copyright (c) 2014 Grant Jenks

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Project details


Download files

Download the file for your platform.

Source Distribution

wordsegment-0.3.tar.gz (4.4 MB)

Uploaded Source

File details

Details for the file wordsegment-0.3.tar.gz.

File metadata

  • Download URL: wordsegment-0.3.tar.gz
  • Upload date:
  • Size: 4.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for wordsegment-0.3.tar.gz
Algorithm Hash digest
SHA256 b3cc5157089ebc87ee2da62a013119811673079c76832fd5f1f013b3b4633c44
MD5 195f2de8f142355072f2405b2590957c
BLAKE2b-256 6f1c59e8bac37dae0e65f9ea5ad682687fa7adf1ebd92f54ecfc8629a5208d12
