18.5 tokenize -- Tokenizer for Python source

The tokenize module provides a lexical scanner for Python source code, implemented in Python. The scanner in this module returns comments as tokens as well, making it useful for implementing "pretty-printers," including colorizers for on-screen displays.

The primary entry point is a generator:

generate_tokens(readline)
The generate_tokens() generator requires one argument, readline, which must be a callable object that provides the same interface as the readline() method of built-in file objects (see section 2.3.9). Each call to the function should return one line of input as a string.

The generator produces 5-tuples with these members: the token type; the token string; a 2-tuple (srow, scol) of ints specifying the row and column where the token begins in the source; a 2-tuple (erow, ecol) of ints specifying the row and column where the token ends in the source; and the line on which the token was found. The line passed is the logical line; continuation lines are included. New in version 2.2.
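For example, a suitable readline callable can be obtained by wrapping the source text in a StringIO object. The following sketch (the source string and variable names are illustrative, not part of the module) prints each token's type, string, and position:

    import tokenize
    from StringIO import StringIO

    source = "x = 1 + 2  # a comment\n"
    readline = StringIO(source).readline
    for tok_type, tok_string, start, end, line in tokenize.generate_tokens(readline):
        # tok_name maps a numeric token type to its symbolic name.
        print tokenize.tok_name[tok_type], repr(tok_string), start, end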

An older entry point is retained for backward compatibility:

tokenize(readline[, tokeneater])
The tokenize() function accepts two parameters: one representing the input stream, and one providing an output mechanism for tokenize().

The first parameter, readline, must be a callable object that provides the same interface as the readline() method of built-in file objects (see section 2.3.9). Each call to the function should return one line of input as a string.

The second parameter, tokeneater, must also be a callable object. It is called once for each token, with five arguments, corresponding to the tuples generated by generate_tokens().
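For example (the callback name and source string below are illustrative), a simple tokeneater that reports each token might look like this:

    import tokenize
    from StringIO import StringIO

    def print_token(tok_type, tok_string, start, end, line):
        # Called once per token, with the same five values that
        # generate_tokens() would yield as a tuple.
        print tokenize.tok_name[tok_type], repr(tok_string), start, end

    tokenize.tokenize(StringIO("total = price * quantity\n").readline, print_token)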

All constants from the token module are also exported from tokenize, as are two additional token type values that might be passed to the tokeneater function by tokenize():

COMMENT
Token value used to indicate a comment.
NL
Token value used to indicate a non-terminating newline. The NEWLINE token indicates the end of a logical line of Python code; NL tokens are generated when a logical line of code is continued over multiple physical lines.
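The following sketch (with an illustrative source string) shows how these two extra token types appear when a commented, parenthesized expression spans two physical lines:

    import tokenize
    from StringIO import StringIO

    source = "x = (1 +   # a comment\n     2)\n"
    for tok in tokenize.generate_tokens(StringIO(source).readline):
        tok_type, tok_string = tok[0], tok[1]
        if tok_type == tokenize.COMMENT:
            print "comment:", repr(tok_string)
        elif tok_type == tokenize.NL:
            print "continuation newline at", tok[2]
        elif tok_type == tokenize.NEWLINE:
            print "end of logical line at", tok[2]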