
Craft simple regex-based small language lexers and parsers. Build parsers from grammars and accept Pygments lexers as input. Derived from NLTK.

Project description

https://github.com/nexB/pygmars

pygmars is a simple lexing and parsing library designed to craft lightweight lexers and parsers using regular expressions.

pygmars allows you to craft simple lexers that recognize words based on regular expressions and identify sequences of words using lightweight grammars, producing a parse tree.

The lexing task transforms a sequence of words or strings (e.g., text already split into words) into a sequence of Token objects, assigning a label to each word and tracking each word's position and line number.
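The lexing step can be illustrated with the standard library alone. The Token dataclass and MATCHERS table below are hypothetical stand-ins for this sketch, not pygmars's actual API:

```python
import re
from dataclasses import dataclass

# Illustrative sketch of the lexing idea: label each word with the
# label of the first matching regex. (Not pygmars's actual API.)

@dataclass
class Token:
    value: str
    label: str
    start_line: int
    pos: int

MATCHERS = [
    (re.compile(r"^(for|if|while)$"), "KEYWORD"),
    (re.compile(r"^\d+$"), "INT"),
    (re.compile(r"^[A-Za-z_]\w*$"), "NAME"),
]

def lex(words, start_line=1):
    """Turn a list of words into labeled Tokens, tracking positions."""
    tokens = []
    for pos, word in enumerate(words):
        label = next(
            (lab for pat, lab in MATCHERS if pat.match(word)), "UNKNOWN"
        )
        tokens.append(Token(word, label, start_line, pos))
    return tokens

tokens = lex(["for", "i", "in", "42"])
# tokens[0] -> Token(value='for', label='KEYWORD', start_line=1, pos=0)
```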

In particular, the lexing output is designed to be compatible with the output of Pygments lexers. This makes it possible to build simple grammars on top of existing Pygments lexers to perform lightweight parsing of the many (130+) programming languages supported by Pygments.

The parsing task transforms a sequence of Tokens into a parse Tree, where each node in the tree is recognized and assigned a label. Parsing applies regular-expression-based grammar rules to recognize Token sequences.

These rules are evaluated sequentially and not recursively: this keeps things simple and works very well in practice. This approach and rule syntax have been battle-tested in NLTK, from which pygmars is derived.
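Sequential evaluation can be sketched as a single ordered pass over rules, each one rewriting matched label sub-sequences into the rule's left-hand-side label. This is an illustration of the principle only; pygmars applies rules to Token objects, not to strings:

```python
import re

# Sketch of sequential, non-recursive rule evaluation: each rule is
# applied once, in order, over the token labels joined as a string.

def apply_rules(labels, rules):
    text = " ".join(labels)
    for lhs, pattern in rules:  # evaluated in order, once each
        text = re.sub(pattern, lhs, text)
    return text.split()

rules = [
    ("VALUE", r"STRING|INT|FLOAT"),        # first collapse literals
    ("ASSIGNMENT", r"NAME EQUAL VALUE"),   # then recognize assignments
]
print(apply_rules(["NAME", "EQUAL", "INT"], rules))
# -> ['ASSIGNMENT']
```

Because later rules see the output of earlier ones, rule order matters, but no rule ever re-triggers itself recursively.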

What about the name?

“pygmars” is a portmanteau of Pyg-ments and Gram-mars.

Origins

This library is based on heavily modified, simplified and remixed original code from the NLTK regex POS tagger (renamed lexer) and regex chunker (renamed parser). The original usage of NLTK was designed by @savinos to parse copyright statements in ScanCode Toolkit.

Users

pygmars is used by ScanCode Toolkit for copyright detection and for lightweight programming language parsing.

Why pygmars?

Why create this seemingly redundant library? Why not use NLTK directly?

  • NLTK has a specific focus on NLP, and lexing/tagging and parsing with regexes is only a tiny part of its overall feature set. These are part of a rich set of taggers and parsers that implement a common API. We do not need these richer APIs, and they make evolving the API and refactoring the code difficult.

  • In particular, NLTK POS tagging and chunking have been the engine used in ScanCode Toolkit copyright and author detection, and there are some improvements, simplifications and optimizations that would be difficult to implement in NLTK directly and unlikely to be accepted upstream. For instance, simplifying the code subset used for copyright detection enabled a big boost in performance. Improvements to track Token lines and positions may not have been possible within the NLTK API.

  • Newer versions of NLTK have several extra required dependencies that we do not need. This in turn makes every tool heavier and more complex when it only uses this limited NLTK subset. By stripping unused NLTK code, we get a small, focused library with no dependencies.

  • ScanCode Toolkit also needs lightweight parsing of several programming languages to extract metadata (such as dependencies) from package manifests. Some parsers have been built by hand (such as gemfileparser), some use the Python ast module (for Python setup.py), and others use existing Pygments lexers as a base. A goal of this library is to enable building lightweight parsers that reuse a Pygments lexer's output as the input to a grammar. This is fairly different from NLP in terms of goals.

Theory of operations

A pygmars.lex.Lexer creates a sequence of pygmars.Token objects such as:

Token(value="for", label="KEYWORD", start_line=12, pos=4)

where the label is a symbol name assigned to this token.

A Token is a terminal symbol, and the grammar is composed of rules where the left-hand side is a label (a non-terminal symbol) and the right-hand side is a regular-expression-like pattern over labels.

See https://en.wikipedia.org/wiki/Terminal_and_nonterminal_symbols

A pygmars.parse.Parser is built from a pygmars.parse.Grammar, and calling its parse function transforms a sequence of Tokens into a pygmars.tree.Tree parse tree.
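The transformation can be sketched as a single chunking pass. The tuple-based token and tree representations below are simplifications for illustration, not pygmars's actual classes:

```python
# Sketch: group a flat token sequence into a one-level parse tree when a
# rule matches. A tree node is a (label, children) pair; a token is a
# (value, label) pair. (Illustrative only; not pygmars's actual classes.)

def chunk(tokens, lhs, rhs_labels):
    """Collapse each run of tokens whose labels equal rhs_labels into a subtree."""
    out, i, n = [], 0, len(rhs_labels)
    while i < len(tokens):
        window = [label for _, label in tokens[i : i + n]]
        if window == rhs_labels:
            out.append((lhs, tokens[i : i + n]))  # new non-terminal node
            i += n
        else:
            out.append(tokens[i])  # token left at the top level
            i += 1
    return out

tokens = [("x", "VARNAME"), ("=", "EQUAL"), ("42", "INT"), (";", "SEMI")]
tree = chunk(tokens, "ASSIGNMENT", ["VARNAME", "EQUAL", "INT"])
# -> [('ASSIGNMENT', [('x', 'VARNAME'), ('=', 'EQUAL'), ('42', 'INT')]),
#     (';', 'SEMI')]
```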

The grammar is composed of Rules loaded from a text with one rule per line, such as:

ASSIGNMENT: {<VARNAME> <EQUAL> <STRING|INT|FLOAT>} # variable assignment

Here, the left-hand side “ASSIGNMENT” label is produced when the right-hand side sequence of Token labels “<VARNAME> <EQUAL> <STRING|INT|FLOAT>” is matched. The trailing “# variable assignment” comment is kept as a description for this rule.
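One way to read such a rule line is to split it into its label, pattern, and description, then compile the right-hand side into a regex over a string of angle-bracketed labels. The RULE_RE pattern and the label-string encoding below are assumptions for this sketch; the real Grammar loader in pygmars may differ:

```python
import re

# Parse one grammar rule line of the form "LABEL: {<A> <B|C>} # desc"
# into its parts, then turn the right-hand side into a regex that can
# match a "<LBL><LBL>..." string of token labels. (Illustrative only.)

RULE_RE = re.compile(
    r"^(?P<lhs>\w+):\s*\{(?P<rhs>[^}]*)\}\s*(?:#\s*(?P<desc>.*))?$"
)

def parse_rule(line):
    m = RULE_RE.match(line.strip())
    lhs, rhs, desc = m.group("lhs"), m.group("rhs"), m.group("desc")
    # "<VARNAME> <EQUAL> <STRING|INT|FLOAT>" becomes
    # "<(?:VARNAME)><(?:EQUAL)><(?:STRING|INT|FLOAT)>" so that "|"
    # alternation stays scoped to a single label slot.
    pattern = re.sub(r"\s+", "", rhs).replace("<", "<(?:").replace(">", ")>")
    return lhs, pattern, desc

lhs, pattern, desc = parse_rule(
    "ASSIGNMENT: {<VARNAME> <EQUAL> <STRING|INT|FLOAT>} # variable assignment"
)
print(lhs, bool(re.search(pattern, "<VARNAME><EQUAL><INT>")), desc)
# -> ASSIGNMENT True variable assignment
```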

License

  • SPDX-License-Identifier: Apache-2.0

Based on a substantially modified subset of the Natural Language Toolkit (NLTK) http://nltk.org/

Copyright (c) nexB Inc. and others. Copyright (C) NLTK Project
