
NLCodec

A set of (low-level) natural language encoder-decoders (codecs) that are useful in the preprocessing stage of an NLP pipeline. These codecs encode sequences at one of the following levels (illustrated below):

  1. Character
  2. Word
  3. BPE-based subword
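
For intuition, here is roughly what each level does to an input token (the model file names and the BPE segmentation below are purely illustrative; actual pieces depend on the learned merges):

# char level: every character becomes a piece
echo "unbelievable" | nlcodec encode -m char.model   # -> u n b e l i e v a b l e
# word level: the token is kept whole (if it is in the vocabulary)
echo "unbelievable" | nlcodec encode -m word.model   # -> unbelievable
# bpe level: the token is split into learned subword pieces
echo "unbelievable" | nlcodec encode -m bpe.model    # -> un believ able  (for example)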

It provides a Python API (so you can embed it into your application) and a CLI (so you can use it as a standalone tool).

There are many BPE implementations available already; this one differs in the following ways:

  1. Pure Python implementation that is easy to modify for trying new ideas (other implementations require C++ expertise to modify the core).
  2. The BPE model is a simple text file that can be inspected with less or cut (see the sketch after this list). It includes information on which pieces were merged together and with what frequencies.
  3. Reasonably faster than other pure Python implementations; the speed in Python comes at the cost of extra memory due to indexing.
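
Because the model is plain text, ordinary Unix tools are enough to inspect it (a quick sketch; the exact column layout depends on what your trained model contains, though the mention of cut suggests tab-separated fields):

less bpe.model               # page through the vocabulary
head -20 bpe.model           # peek at the first entries
cut -f1-3 bpe.model | head   # assuming tab-separated columns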

Installation

Please run only one of these:

# Option 1: clone the repo for development mode (preferred)
git clone https://github.com/isi-nlp/nlcodec
cd nlcodec
pip install --editable .

# Option 2: install from github directly
pip install git+https://github.com/isi-nlp/nlcodec.git

# Option 3: install from pypi
pip install nlcodec

The pip installer registers a CLI tool named nlcodec in your PATH, which serves as the command line interface. You can also invoke it via python -m nlcodec or python path/to/nlcodec/__main__.py if you wish.

Usage

$ python -m nlcodec -h
usage: __main__.py [-h] [-i INP] [-o OUT] -m MODEL [-idx] [-vs VOCAB_SIZE]
                   [-l {char,word,bpe}] [-mf MIN_FREQ]
                   {learn,encode,decode,estimate}

positional arguments:
  {learn,encode,decode,estimate}
                        "task" or sub-command.
                            "learn" - learns vocabulary. use --level and vocab_size for type and size 
                            "encode" - encodes a dataset 
                            "decode" - decodes an already encoded dataset
                            "estimate" - estimates quality attributes of an encoding

optional arguments:
  -h, --help            show this help message and exit
  -i INP, --inp INP     Input file path (default: <_io.TextIOWrapper
                        name='<stdin>' mode='r' encoding='UTF-8'>)
  -o OUT, --out OUT     Output file path. Not valid for "learn" or "estimate"
                        task (default: <_io.TextIOWrapper name='<stdout>'
                        mode='w' encoding='UTF-8'>)
  -m MODEL, --model MODEL
                        Path to model aka vocabulary file (default: None)
  -idx, --indices       Indices instead of strings. Valid for task=encode and
                        task=decode (default: None)

args for task=learn:
  -vs VOCAB_SIZE, --vocab_size VOCAB_SIZE
                        Vocabulary size. Valid only for task=learn. This is
                        required for "bpe", but optional for "word" and "char"
                        models, specifying it will trim the vocabulary at
                        given top most frequent types. (default: -1)
  -l {char,word,bpe}, --level {char,word,bpe}
                        Encoding Level; Valid only for task=learn (default:
                        None)
  -mf MIN_FREQ, --min_freq MIN_FREQ
                        Minimum frequency of types for considering inclusion
                        in vocabulary. Types fewer than this frequency will be
                        ignored. For --level=word, freq is type freq and
                        default is 2.for --level=char or --level=bpe,
                        characters fewer than this value will be excluded.
                        default=20 (default: None)

Example:

# learn
head -2000 somefile.tok | nlcodec learn -l bpe -m bpe.model --vocab_size 2000

# encode  with text pieces
head  somefile.tok  | nlcodec encode -m bpe.model

# encode with indexes
head  somefile.tok  | nlcodec encode -m bpe.model -idx

# decode -- undo encoding
head  somefile.tok  | nlcodec decode -m bpe.model
head  somefile.tok  | nlcodec decode -m bpe.model -idx

# estimate quality 
head  somefile.tok  | nlcodec estimate -m bpe.model
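
Putting these together, a typical end-to-end flow looks like this (file names are hypothetical):

# learn a BPE vocabulary from tokenized training text
nlcodec learn -i train.tok -l bpe -vs 8000 -m bpe.model

# encode the training data as integer ids; train your downstream model on it
nlcodec encode -i train.tok -o train.ids -m bpe.model -idx

# decode the downstream model's output ids back into text
nlcodec decode -i output.ids -o output.tok -m bpe.model -idx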

Python API

Using a vocabulary

from nlcodec import load_scheme
path = 'path/to/vocab.model'
vocab = load_scheme(path)

line = 'this is a sample sentence'
# encode a line of text into list of ids
vocab.encode(line)

# parallel encode a bunch of lines using multiple cpus
vocab.encode_parallel(seqs=[line], n_cpus=2)

# encode a line of text into pieces 
vocab.encode_str(line)

# decode
vocab.decode(vocab.encode(line))
vocab.decode_str(vocab.encode_str(line))

Creating a vocabulary

from nlcodec import learn_vocab
inp = ['line 1', 'line 2']
level = 'bpe' # other options = char, word
model = 'path/to/vocab.model'
learn_vocab(inp, level, model, vocab_size=8000, min_freq=1, char_coverage=0.9995)
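
A minimal end-to-end sketch tying learn_vocab and load_scheme together (the toy corpus, file name, and small vocab_size are hypothetical; with such a tiny corpus the learner may stop before reaching the requested size):

from nlcodec import learn_vocab, load_scheme

inp = ['hello world', 'hello nlcodec']  # hypothetical toy corpus
level = 'bpe'
model = 'toy-bpe.model'
learn_vocab(inp, level, model, vocab_size=60, min_freq=1, char_coverage=1.0)

vocab = load_scheme(model)
print(vocab.encode_str('hello world'))  # subword pieces
print(vocab.encode('hello world'))      # integer ids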

BPE subword sub-optimal splits for regularization

from nlcodec import load_scheme, BPEScheme
path = 'path/to/bpe-vocab.model'
bpe: BPEScheme = load_scheme(path)
some_type = bpe.table[1000] # select some bpe piece type

# get stochastic split
some_type.get_stochastic_split(split_ratio=0.5, name=False)
# get all possible permutations 
some_type.get_permutations(name=False)
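
These sub-optimal splits can inject segmentation noise at training time, in the spirit of subword regularization / BPE-dropout. A hedged sketch (it assumes name=True makes get_stochastic_split return piece strings rather than ids; check the return type in your version):

from nlcodec import load_scheme, BPEScheme

bpe: BPEScheme = load_scheme('path/to/bpe-vocab.model')
line = 'this is a sample sentence'

noisy_pieces = []
for idx in bpe.encode(line):     # ids of the default segmentation
    piece_type = bpe.table[idx]  # look up the type, as above
    # re-split each piece stochastically; name=True assumed to yield strings
    noisy_pieces.extend(piece_type.get_stochastic_split(split_ratio=0.3, name=True))
print(noisy_pieces)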

Scaling for Big data(sets)

For larger datasets, you may take advantage of PySpark to compute term frequencies in a separate step. The precomputed term frequencies can then be passed to nlcodec learn by setting the -tfs flag.

To compute term frequencies

  • Install PySpark using pip install pyspark
  • Compute term frequencies
$ python -m nlcodec.term_freq -h
usage: term_freq.py [-h] [-i INP [INP ...]] [-wf WORD_FREQS] [-cf CHAR_FREQS]
                    [-dd] [-ndd]

optional arguments:
  -h, --help            show this help message and exit
  -i INP [INP ...], --inp INP [INP ...]
                        Input file paths (default: None)
  -wf WORD_FREQS, --word_freqs WORD_FREQS
                        Output file path for word frequencies (default: None)
  -cf CHAR_FREQS, --char_freqs CHAR_FREQS
                        Output file path for character frequencies (default:
                        None)
  -dd, --dedup          Deduplicate the sentences: use only unique sequences
                        (default: True)
  -ndd, --no-dedup      Do not deduplicate. (default: False)

Example

# use these environment vars
export SPARK_DRIVER_MEM="4g"
export SPARK_MASTER="local[*]"   # use all CPU cores of the local node
python -m nlcodec.term_freq -dd -wf words.tsv -cf chars.tsv \
    -i ~/work/datasets/wmt/data/*-*/*.en.tok 

words.tsv and chars.tsv now contain the word and character frequencies, respectively.

# word vocab of 32K
python -m nlcodec learn -i words.tsv -tfs -l word -vs 32000 -m word.model

# Character vocab of 99.95% coverage
python -m nlcodec learn -i chars.tsv -tfs -l char  -mf 1 -cv 0.9995 -m char.model

# BPE vocab of 8K 
python -m nlcodec learn -i words.tsv -tfs -l bpe -vs 8000 -m bpe.model

# BPE vocab grown until the minimum merge frequency drops to 100; set -vs to some large number, e.g. 64000
python -m nlcodec learn -i words.tsv -tfs -l bpe -vs 64000 -m bpe.model -cv 0.99995 -mce 100
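
The resulting models are ordinary nlcodec models, so encoding works exactly as before:

head somefile.tok | nlcodec encode -m bpe.model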

Authors
