Helper Functions#
In addition to the primary tokenize() entry point, the tokenize module has several additional helper functions.
generate_tokens(readline)#
Similar to tokenize(), except the readline method should return strings instead of bytes. This is useful when working interactively, as you do not need to use bytes literals or encode str objects into bytes. It otherwise works the same as tokenize(): it accepts a readline method and returns an iterator of tokens.
An important difference is that since generate_tokens() accepts strings, it assumes the input has already been decoded. Therefore, it will ignore # -*- coding: ... -*- comments (see the section on exceptions). Consequently, you should only use this function when you already have the input as a decoded str (e.g., when working interactively). If you are reading from a file or receiving the Python source as bytes from some other source, you should use tokenize() instead, as it will correctly detect the encoding from coding headers.
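For instance, here is a minimal sketch of that detection (note that tokenize() normalizes encoding names, so latin-1 is reported as iso-8859-1):
>>> import tokenize
>>> import io
>>> code = b"# -*- coding: latin-1 -*-\npass\n"
>>> # The string of the ENCODING token is the detected encoding
>>> next(tokenize.tokenize(io.BytesIO(code).readline)).string
'iso-8859-1'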
Another important difference is that generate_tokens() will not emit the ENCODING token.
This guide uses tokenize() in all its examples. This is because even though generate_tokens() may appear to be more convenient (after all, the examples here are all self-contained pieces of code in strings), the typical use case of tokenize involves reading code from bytes (i.e., from a file). Furthermore, it is also often convenient to have the ENCODING token as a guaranteed first token, even if it is not actually used, as it can make processing tokens a little simpler in some cases (see the examples). Finally, note that untokenize() returns a bytes object, so if you are working with it, it may be simpler to just use tokenize() and work with bytes everywhere.
Here is an example comparing tokenize() to generate_tokens() for the code a + b.
>>> import tokenize
>>> import io
>>> code = "a + b\n"
>>> # With tokenize, we must encode the string as bytes
>>> for t in tokenize.tokenize(io.BytesIO(code.encode('utf-8')).readline):
... print(t)
TokenInfo(type=63 (ENCODING), string='utf-8', start=(0, 0), end=(0, 0), line='')
TokenInfo(type=1 (NAME), string='a', start=(1, 0), end=(1, 1), line='a + b\n')
TokenInfo(type=54 (OP), string='+', start=(1, 2), end=(1, 3), line='a + b\n')
TokenInfo(type=1 (NAME), string='b', start=(1, 4), end=(1, 5), line='a + b\n')
TokenInfo(type=4 (NEWLINE), string='\n', start=(1, 5), end=(1, 6), line='a + b\n')
TokenInfo(type=0 (ENDMARKER), string='', start=(2, 0), end=(2, 0), line='')
>>> # With generate_tokens(), we can pass the str object in directly.
>>> # The output is the same except that the ENCODING token is omitted.
>>> for t in tokenize.generate_tokens(io.StringIO(code).readline):
... print(t)
TokenInfo(type=1 (NAME), string='a', start=(1, 0), end=(1, 1), line='a + b\n')
TokenInfo(type=54 (OP), string='+', start=(1, 2), end=(1, 3), line='a + b\n')
TokenInfo(type=1 (NAME), string='b', start=(1, 4), end=(1, 5), line='a + b\n')
TokenInfo(type=4 (NEWLINE), string='\n', start=(1, 5), end=(1, 6), line='a + b\n')
TokenInfo(type=0 (ENDMARKER), string='', start=(2, 0), end=(2, 0), line='')
untokenize(iterable)#
Converts an iterable of tokens into a bytes string. The string is encoded using the encoding of the ENCODING token. If there is no ENCODING token present, the string is returned decoded (a str instead of bytes). The iterable can contain TokenInfo objects or (TOKEN_TYPE, TOKEN_STRING) tuples.
This function always round-trips in one direction, namely, tokenize(io.BytesIO(untokenize(tokens)).readline) will always return the same tokens.
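As a quick sanity check of this property (a minimal sketch, reusing the imports from the examples above):
>>> toks = list(tokenize.tokenize(io.BytesIO(b'x = 1\n').readline))
>>> toks == list(tokenize.tokenize(io.BytesIO(tokenize.untokenize(toks)).readline))
True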
If full TokenInfo tuples are given with correct start and end information (an iterable of 5-tuples), this function also round-trips in the other direction, for the most part (it assumes space characters between tokens). However, be aware that the start and end tuples must be nondecreasing. If the start of one token is before the end of the previous token, it raises ValueError. Therefore, if you want to modify tokens and use untokenize() to convert back to a string using full 5-tuples, you must keep track of and maintain the line and column information in start and end.
>>> import tokenize
>>> import io
>>> string = b'sum([[1, 2]][0])'
>>> tokenize.untokenize(tokenize.tokenize(io.BytesIO(string).readline))
b'sum([[1, 2]][0])'
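If the positions are out of order, untokenize() raises ValueError as described above. Here is a sketch that catches the exception rather than relying on the exact error message:
>>> toks = list(tokenize.generate_tokens(io.StringIO('a + b\n').readline))
>>> swapped = [toks[1], toks[0]] + toks[2:]  # 'a' now starts before '+' ends
>>> try:
...     tokenize.untokenize(swapped)
... except ValueError:
...     print('start precedes previous end')
start precedes previous end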
If only the token types and token strings are given (an iterable of 2-tuples), untokenize() does not round-trip, and in fact, for any nontrivial input, the resulting bytes string will be very different from the original input. This is because untokenize() adds spaces after certain tokens to ensure the resulting string is syntactically valid (or rather, to ensure that it tokenizes back in the same way).
>>> tokenize.untokenize([(i, j) for (i, j, _, _, _) in tokenize.tokenize(io.BytesIO(string).readline)])
b'sum ([[1 ,2 ]][0 ])'
2-tuples and 5-tuples can be mixed (for instance, you can add new tokens to a list of TokenInfo objects using only 2-tuples), but in this case, untokenize() will ignore the column information for the 5-tuples.
Consider this simple example, which replaces all STRING tokens with a list of STRING tokens of individual characters (making use of implicit string concatenation). Once untokenize() encounters the newly added 2-tuple tokens, it ignores the column information and uses its own spacing.
>>> import ast
>>> def split_string(s):
... """
... Split string tokens into constituent characters
... """
... new_tokens = []
... for toknum, tokstr, start, end, line in tokenize.tokenize(io.BytesIO(s.encode('utf-8')).readline):
... if toknum == tokenize.STRING:
... for char in ast.literal_eval(tokstr):
... new_tokens.append((toknum, repr(char)))
... else:
... new_tokens.append((toknum, tokstr, start, end, line))
... return tokenize.untokenize(new_tokens).decode('utf-8')
>>> split_string("print('hello ') and print('world')")
"print('h' 'e' 'l' 'l' 'o' ' ')and print ('w' 'o' 'r' 'l' 'd')"
If you want to use the tokenize module to extend the Python language by injecting or modifying tokens in a token stream, then use exec or eval to convert the resulting source into executable code, and do not care what the code itself looks like, you can simply pass this function tuples of (TOKEN_TYPE, TOKEN_STRING) and it will work fine. However, if your end goal is to translate code in a human-readable way, you must keep track of line and column information near the tokens you modify. The tokenize module does not provide any tools to help with this.
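For illustration only, here is a hypothetical helper (it is not part of the tokenize module) that shows the kind of bookkeeping involved: after replacing a token on a single line with one of a different length, it shifts the columns of the tokens that follow on that line. It assumes the affected tokens do not span multiple lines and it does not update the line field of each TokenInfo, so treat it as a rough sketch rather than a complete solution.
def shift_columns(tokens, row, col, delta):
    # Shift start/end columns by delta for every token on `row` that starts
    # at or after `col`. Assumes single-line tokens; the `line` field of the
    # shifted TokenInfo objects is left unchanged (and so becomes stale).
    shifted = []
    for tok in tokens:
        (srow, scol), (erow, ecol) = tok.start, tok.end
        if srow == row and scol >= col:
            tok = tok._replace(start=(srow, scol + delta),
                               end=(erow, ecol + delta))
        shifted.append(tok)
    return shifted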
detect_encoding(readline)#
The official docs for this function are helpful. This is the function used by tokenize() to generate the ENCODING token. It can be used separately to determine the encoding of some Python code. The calling syntax is the same as for tokenize().
Returns a tuple of the encoding and a list of any lines (in bytes) that it has read from the readline function (it will read at most two lines from the file). Invalid encodings will cause it to raise a SyntaxError.
>>> tokenize.detect_encoding(io.BytesIO(b'# -*- coding: ascii -*-').readline)
('ascii', [b'# -*- coding: ascii -*-'])
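An unknown encoding raises SyntaxError; here is a sketch that catches the exception rather than depending on the exact error message:
>>> try:
...     tokenize.detect_encoding(io.BytesIO(b'# -*- coding: not-a-codec -*-').readline)
... except SyntaxError:
...     print('unknown encoding')
unknown encoding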
This function should be used to detect the encoding of a Python source file before opening it in text mode. For example:
with open('file.py', 'br') as f:
encoding, _ = tokenize.detect_encoding(f.readline)
with open('file.py', encoding=encoding) as f:
...
Otherwise, the text read from the file may not be parsable as Python. For example, ast.parse may fail if the text is read with the wrong encoding; in particular, if a file starts with a Unicode BOM character, ast.parse will fail unless the file is opened with the proper encoding.
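For reference, when the source starts with a UTF-8 BOM, detect_encoding() reports the utf-8-sig encoding and strips the BOM from the line it returns (a minimal sketch):
>>> tokenize.detect_encoding(io.BytesIO(b'\xef\xbb\xbfx = 1\n').readline)
('utf-8-sig', [b'x = 1\n'])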
tokenize.open(filename)#
This is an alternative to the built-in open() function that automatically opens a Python file in text mode with the correct encoding, as detected by detect_encoding().
This function is not particularly useful in conjunction with the tokenize() function (remember that tokenize() requires a file opened in binary mode, whereas this function opens it in text mode). Rather, it is a function that uses the functionality of the tokenize module, in particular detect_encoding(), to perform a higher-level task that would be difficult to do otherwise: opening a Python source file in text mode using the correct encoding.
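In effect, it collapses the two-step pattern shown above into a single call, something like the following (a sketch, using the same file.py as the earlier example):
with tokenize.open('file.py') as f:
    source = f.read()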
Command Line Usage#
The tokenize module can be called from the command line using python -m tokenize filename.py. This prints three columns, representing the start and end line and column positions, the token type, and the token string. If the -e flag is used, the token type for operators is the exact type. Otherwise the OP type is used.
$ python -m tokenize example.py
0,0-0,0: ENCODING 'utf-8'
1,0-1,43: COMMENT '# This is a an example file to be tokenized'
1,43-1,44: NL '\n'
2,0-2,1: NL '\n'
3,0-3,3: NAME 'def'
3,4-3,7: NAME 'two'
3,7-3,8: OP '('
3,8-3,9: OP ')'
3,9-3,10: OP ':'
3,10-3,11: NEWLINE '\n'
4,0-4,4: INDENT ' '
4,4-4,10: NAME 'return'
4,11-4,12: NUMBER '1'
4,13-4,14: OP '+'
4,15-4,16: NUMBER '1'
4,16-4,17: NEWLINE '\n'
5,0-5,0: DEDENT ''
5,0-5,0: ENDMARKER ''
$ python -m tokenize -e example.py
0,0-0,0: ENCODING 'utf-8'
1,0-1,43: COMMENT '# This is a an example file to be tokenized'
1,43-1,44: NL '\n'
2,0-2,1: NL '\n'
3,0-3,3: NAME 'def'
3,4-3,7: NAME 'two'
3,7-3,8: LPAR '('
3,8-3,9: RPAR ')'
3,9-3,10: COLON ':'
3,10-3,11: NEWLINE '\n'
4,0-4,4: INDENT ' '
4,4-4,10: NAME 'return'
4,11-4,12: NUMBER '1'
4,13-4,14: PLUS '+'
4,15-4,16: NUMBER '1'
4,16-4,17: NEWLINE '\n'
5,0-5,0: DEDENT ''
5,0-5,0: ENDMARKER ''