A wrapper around the stdlib `tokenize` which roundtrips.
Project description
tokenize-rt
The stdlib `tokenize` module does not properly roundtrip. This wrapper around the stdlib provides two additional tokens, `ESCAPED_NL` and `UNIMPORTANT_WS`, and a `Token` data type. Use `src_to_tokens` and `tokens_to_src` to roundtrip.
This library is useful if you're writing a refactoring tool based on Python tokenization.
Installation
```bash
pip install tokenize-rt
```
Usage
datastructures
tokenize_rt.Offset(line=None, utf8_byte_offset=None)
A token offset, useful as a key when cross-referencing the AST and the tokenized source.
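For instance, `ast` reports `col_offset` as a UTF-8 byte offset, which is what `Offset` stores as well. A minimal sketch of matching tokens to AST nodes (variable names here are illustrative):

```python
import ast

from tokenize_rt import Offset, src_to_tokens

src = 'x = y\n'
# ast reports col_offset in UTF-8 bytes, matching Offset's semantics
targets = {
    Offset(line=node.lineno, utf8_byte_offset=node.col_offset)
    for node in ast.walk(ast.parse(src))
    if isinstance(node, ast.Name)
}
names = [tok for tok in src_to_tokens(src) if tok.offset in targets]
assert [tok.src for tok in names] == ['x', 'y']
```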
tokenize_rt.Token(name, src, line=None, utf8_byte_offset=None)
Construct a token.

- `name`: one of the token names listed in `token.tok_name`, or `ESCAPED_NL`, or `UNIMPORTANT_WS`
- `src`: the token's source as text
- `line`: the line number this token appears on. This will be `None` for `ESCAPED_NL` and `UNIMPORTANT_WS` tokens.
- `utf8_byte_offset`: the UTF-8 byte offset this token appears at in the line. This will be `None` for `ESCAPED_NL` and `UNIMPORTANT_WS` tokens.
tokenize_rt.Token.offset
Retrieves an `Offset` for this token.
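A small sketch of constructing tokens by hand; note the positional fields default to `None`, as for synthesized whitespace:

```python
from tokenize_rt import Offset, Token, UNIMPORTANT_WS

token = Token('NAME', 'foo', line=1, utf8_byte_offset=4)
assert token.offset == Offset(line=1, utf8_byte_offset=4)

# positional fields default to None, e.g. for synthesized whitespace
ws = Token(UNIMPORTANT_WS, '    ')
assert ws.offset == Offset(line=None, utf8_byte_offset=None)
```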
converting to and from `Token` representations
tokenize_rt.src_to_tokens(text) -> List[Token]
tokenize_rt.tokens_to_src(Sequence[Token]) -> text
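A minimal roundtrip sketch using both functions:

```python
from tokenize_rt import src_to_tokens, tokens_to_src

src = 'x  =  5  # comment\n'
tokens = src_to_tokens(src)
# unlike the stdlib tokenizer, the token sources join back into the exact input
assert tokens_to_src(tokens) == src
```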
additional tokens added by tokenize-rt
tokenize_rt.ESCAPED_NL
tokenize_rt.UNIMPORTANT_WS
helpers
tokenize_rt.NON_CODING_TOKENS
A `frozenset` containing tokens which may appear between others while not affecting control flow or code:

- `COMMENT`
- `ESCAPED_NL`
- `NL`
- `UNIMPORTANT_WS`
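For example, a sketch of filtering a token stream down to the tokens that matter:

```python
from tokenize_rt import NON_CODING_TOKENS, src_to_tokens

tokens = src_to_tokens('x = 1  # set x\n')
# keep only the tokens that affect control flow or code
coding = [tok for tok in tokens if tok.name not in NON_CODING_TOKENS]
assert not any(tok.name == 'COMMENT' for tok in coding)
```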
tokenize_rt.parse_string_literal(text) -> Tuple[str, str]
Parse a string literal into its prefix and string content:

```python
>>> parse_string_literal('f"foo"')
('f', '"foo"')
```
tokenize_rt.reversed_enumerate(Sequence[Token]) -> Iterator[Tuple[int, Token]]
Yields `(index, token)` pairs, starting from the end. Useful for rewriting source.
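Iterating in reverse means an edit at index `i` does not shift the indices of tokens not yet visited; a minimal sketch:

```python
from tokenize_rt import reversed_enumerate, src_to_tokens, tokens_to_src

tokens = src_to_tokens('x = (1,)\n')
for i, token in reversed_enumerate(tokens):
    if token.src == ',':
        del tokens[i]  # safe: earlier indices are unaffected
assert tokens_to_src(tokens) == 'x = (1)\n'
```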
Differences from tokenize
- `tokenize-rt` adds `ESCAPED_NL` for a backslash-escaped newline "token"
- `tokenize-rt` adds `UNIMPORTANT_WS` for whitespace (discarded in `tokenize`)
- `tokenize-rt` normalizes string prefixes, even if they are not parsed -- for instance, this means you'll see `Token('STRING', "f'foo'", ...)` even in Python 2.
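A small sketch demonstrating the first two differences:

```python
from tokenize_rt import ESCAPED_NL, UNIMPORTANT_WS, src_to_tokens

tokens = src_to_tokens('x = \\\n    5\n')
names = {token.name for token in tokens}
# the backslash-escaped newline and the surrounding whitespace are preserved
assert ESCAPED_NL in names and UNIMPORTANT_WS in names
```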
Sample usage
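As an illustration of how the pieces fit together, here is a hedged sketch of a small rewriter (the helper name `remove_u_prefixes` is invented for this example) that strips `u` string prefixes from literals:

```python
from tokenize_rt import (
    parse_string_literal,
    reversed_enumerate,
    src_to_tokens,
    tokens_to_src,
)


def remove_u_prefixes(src):
    """Rewrite u'...' literals to plain '...' (illustrative only)."""
    tokens = src_to_tokens(src)
    for i, token in reversed_enumerate(tokens):
        if token.name == 'STRING':
            prefix, rest = parse_string_literal(token.src)
            if 'u' in prefix.lower():
                new_prefix = prefix.replace('u', '').replace('U', '')
                tokens[i] = token._replace(src=new_prefix + rest)
    return tokens_to_src(tokens)


assert remove_u_prefixes("x = u'foo'\n") == "x = 'foo'\n"
```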
Project details
File details
Details for the file `tokenize_rt-3.1.0.tar.gz`.
File metadata
- Download URL: tokenize_rt-3.1.0.tar.gz
- Upload date:
- Size: 4.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.0.1 requests-toolbelt/0.9.1 tqdm/4.32.2 CPython/3.6.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0ac30f3386b212beb2a0e6dfaa6cb619f711587e9d05436907438ceb51319d58 |
| MD5 | fdc1e0a371ca51c311a25648f40036eb |
| BLAKE2b-256 | da15689e5623915c3625d02e8ba3e763cb1928c96cf4d49c09e280cfc474331e |
File details
Details for the file `tokenize_rt-3.1.0-py2.py3-none-any.whl`.
File metadata
- Download URL: tokenize_rt-3.1.0-py2.py3-none-any.whl
- Upload date:
- Size: 5.2 kB
- Tags: Python 2, Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.0.1 requests-toolbelt/0.9.1 tqdm/4.32.2 CPython/3.6.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 862442a55cd21b24c62adbdf0ae7fa178f1fd289e532fe9f5ab902d227317b42 |
| MD5 | 4f553cf80fae388d93978bef1732f129 |
| BLAKE2b-256 | 8788edfba9ab2a34bc9ab557a7241e82aafec767d850852cce7a7f9425d8b47d |