With the release of ANTLR 2.7.5, you can now generate your Lexers, Parsers and TreeParsers in Python. This feature extends the benefits of ANTLR's predicated-LL(k) parsing technology to the Python language and platform.
To be able to build and use the Python language Lexers, Parsers and TreeParsers, you will need to have the ANTLR Python runtime library installed in your Python path. The Python runtime model is based on the existing runtime model for Java and is thus immediately familiar. The Python runtime and the Java runtime are very similar, although there are a number of subtle (and not so subtle) differences. Some of these result from differences in the respective runtime environments.
ANTLR Python support was contributed (and is to be maintained) by Wolfgang Haefelinger and Marq Kole.
The ANTLR Python runtime source and build files are completely integrated in the ANTLR build process. The ANTLR runtime support module for Python is located in the lib/python subdirectory of the ANTLR distribution. Installation of the Python runtime support is enabled automatically if Python can be found on your system by the configure script.
With Python support enabled, the current distribution will look for the presence of a python executable of version 2.2 or higher. If it finds such a beast, it will generate and install the ANTLR Python runtime as part of the overall ANTLR build and installation process.
If the python distribution you are using is at an unusual location, perhaps because you are using a local installation instead of a system-wide one, you can provide the location of that python executable using the --with-python=<path> option for the configure script, for instance:
./configure --with-python=$HOME/bin/python2.3
Also, if the python executable is at a regular location but has a name that differs from "python", you can specify the correct name through the --with-python=<path> option, as shown above, or through the environment variable $PYTHON:
PYTHON=python2.3
export PYTHON
./configure
All the example grammars for the ANTLR Python runtime are built when ANTLR itself is built. They can be run in one go by running make test in the same directory where you ran the configure script in the ANTLR distribution. So after you've run configure you can do:
# Build ANTLR and all examples
make

# Run them
make test

# Install everything
make install
Note that make install will not add the ANTLR Python runtime (i.e. antlr.py) to your Python installation but rather install antlr.py in ${prefix}/lib. To be able to use antlr.py you would need to adjust Python's sys.path.
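For example, a minimal sketch of such an adjustment, assuming the default prefix of /usr/local (adjust the path to match the ${prefix} you configured with):

import sys
sys.path.append("/usr/local/lib")   # ${prefix}/lib, adjust as needed

import antlr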
However, a script is provided that lets you easily add antlr.py as a module to your Python installation. After installation just run
${prefix}/sbin/pyantlr.sh install
Note that usually you need to be superuser in order for this to succeed. Also note that you can run this command again at any later time, for example if you add a second Python installation. Just make sure that python is in your $PATH when running pyantlr.sh.
Note further that you can also install the ANTLR Python runtime immediately after having called ./configure:
scripts/pyantlr.sh install
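To check that the runtime ended up where Python can find it, a quick import test (just a suggestion, not part of the installer) is:

python -c "import antlr"

If this exits silently, the runtime is installed correctly.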
You can instruct ANTLR to generate your Lexers, Parsers and TreeParsers using the Python code generator by adding the following entry to the global options section at the beginning of your grammar file.
options {
    language="Python";
}
After that things are pretty much the same as in the default Java code generation mode. See the examples in examples/python for some illustrations.
One particular issue that is worth mentioning is the handling of comments in ANTLR Python. Java, C++, and C# all use the same lexical structures to define comments: // for single-line comments, and /* ... */ for block comments. Unfortunately, Python does not handle comments this way. It only knows about single-line comments, which start off with a # symbol.
Normally, all comments outside of actions are comments in the ANTLR input language. These comments, both block comments and single-line comments, are translated into Python single-line comments.
Secondly, all comments inside actions should be comments in the target language, Python in this case. Unfortunately, if the actions contain ANTLR action symbols, such as $getText, the code generator seems to choke on Python comments, as the # sign is also used in tree construction. The solution is to use Java/C++-style comments in all actions; these will be translated into Python comments by ANTLR as it checks these actions for the presence of predefined action symbols such as $getText.
So, as a general issue: all comments in an ANTLR grammar for the Python target should be in Java/C++ style, not in Python style.
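For instance, an action in a rule might look like this (an illustrative sketch, not a rule from the distribution):

id  :   ID
        {
            // this Java-style comment is translated into a
            // Python comment in the generated code
            print "matched an identifier"
        }
    ;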
import directives
You can instruct the ANTLR Python code generator to import additional Python packages in your generated Lexer/Parser/TreeParser by adding code to the header section, which must be the first section at the beginning of your ANTLR grammar file, apart from any other header sections.
header {
    import os, sys
}
__init__ method
You can instruct the ANTLR Python code generator to include additional Python code in your generated Lexer/Parser/TreeParser by adding code to the __init__ header section, which must be the first section at the beginning of your ANTLR grammar file, apart from any other header sections. The code in the header is appended to the end of the __init__ method.
header "__init__" { self.message = "This is the default message" }
If your grammar file contains both a Lexer and a Parser (or any other multiple of definitions), the code in the __init__ header will be reproduced in the __init__ methods of all of these definitions without change. If you really want to update only one of the definitions, for instance the __init__ method of the Lexer class you are creating, use
header "<LexerGrammar>.__init__" { self.message = "This is the default message" }
where <LexerGrammar> is the name of the Lexer grammar. The same construction also works with the Parsers and TreeParsers, of course.
In case both a generic __init__ header and a grammar-specific header are present, the grammar-specific one will override the generic one.
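For illustration, with a hypothetical Lexer grammar MyLexer carrying the header above, the attribute set in the __init__ header is available right after construction:

import sys
import MyLexer                      # hypothetical generated module

lexer = MyLexer.Lexer(sys.stdin)
print lexer.message                 # "This is the default message"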
__main__ header

You can instruct the ANTLR Python code generator to add additional Python code at the end of your generated Lexer/Parser/TreeParser, that is, after the class definition itself, by adding code to the __main__ header section, which must be the first section at the beginning of your ANTLR grammar file, apart from any other header sections.
header "__main__" { print "You cannot execute this file!" }
If your grammar file contains both a Lexer and a Parser (or any other multiple of definitions), the code in the __main__ header will be reproduced at the end of all of the generated class definitions. If you really want to add code after only one of the definitions, for instance after the Lexer class, use
header "<LexerGrammar>.__main__" { print "You cannot execute this file!" }
where <LexerGrammar> is the name of the Lexer grammar. The same construction also works with the Parsers and TreeParsers, of course.
In case both a generic __main__ header and a grammar-specific header are present, the grammar-specific one will override the generic one. If no __main__ headers are present and the grammar is for a Lexer, automated test code for that lexer is automatically added at the end of the generated module. This can be prevented by providing an empty __main__ header. In the latter case it is good practice to provide a comment explaining why an empty header is present.
header "<LexerGrammar>.__main__" { // Empty main header to prevent automatic test code from being added // to the generated lexer module. }
This automated test code can be executed by running Python on the generated lexer file (<LexerGrammar>.py, where <LexerGrammar> is the name of the Lexer grammar) and providing some test input on stdin:
python <LexerGrammar>.py < test.in
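The generated harness looks roughly like the following sketch (the actual generated code may differ in its details):

if __name__ == "__main__":
    import sys
    import antlr

    ### lex stdin and print every token until end of input
    try:
        for token in Lexer():
            print token
    except antlr.TokenStreamException, e:
        print "error: exception caught (%s)" % e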
The name of the generated class can be changed with the className option:

options {
    className="Scanner";
}
If you are using the className option in conjunction with the Python-specific header options, there will be no collisions. The className option changes the class name, while the __init__ and __main__ headers require the use of the grammar name, which will become the module name after code generation.
header "ParrotSketch.init" { self.state = JohnCleese.select("dead", "pushing up daisies", \ "no longer", "in Parrot Heaven") print "This parrot is", self.state } class ParrotSketch extends Lexer; options { className="Scanner"; }
As the handling of modules (packages in Java speak) in Python differs from that in Java, the current approach in ANTLR of naming both the file and the class it contains after the grammar is kind of awkward. Instead, a different approach has been chosen that better reflects the handling of modules in Python. The name of the generated Python file is still derived from the name of the grammar, but the name of the class is fixed to the particular kind of grammar: a lexer grammar is used to generate a class Lexer, a parser grammar is used to generate a class Parser, and a treeparser grammar is used to generate a class Walker.
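For example, with hypothetical grammars MyLexer and MyParser in a single grammar file, the resulting modules and classes combine as follows (a sketch):

import sys
import MyLexer, MyParser            # modules named after the grammars

lexer = MyLexer.Lexer(sys.stdin)    # class is always "Lexer"
parser = MyParser.Parser(lexer)     # class is always "Parser"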
In summary, the available header sections and grammar-level code blocks combine as follows:

header {
    // gets inserted in the Python source file before any generated
    // declarations
    ...
}

header "__init__" {
    // gets inserted in the __init__ method of each of the generated Python
    // classes
    ...
}

header "MyParser.__init__" {
    // gets inserted in the __init__ method of the generated Python class
    // for the MyParser grammar
    ...
}

header "__main__" {
    // gets inserted at the end of each of the generated Python files in an
    // indented section preceded by the conditional:
    // if __name__ == "__main__":
    ...
}

header "MyLexer.__main__" {
    // gets inserted at the end of the generated Python file for the MyLexer
    // grammar in an indented section preceded by the conditional:
    // if __name__ == "__main__":
    // and preventing the insertion of automatic test code in the same place.
    ...
}

options {
    language = "Python";
}

{
    // global code stuff that will be included in the 'MyParser.py' source
    // file just before the 'Parser' class below
    ...
}
class MyParser extends Parser;
options {
    exportVocab=My;
}
{
    // additional methods and members for the generated 'Parser' class
    ...
}
... generated RULES go here ...

{
    // global code stuff that will be included in the 'MyLexer' source file
    // just before the 'Lexer' class below
    ...
}
class MyLexer extends Lexer;
options {
    exportVocab=My;
}
{
    // additional methods and members for the generated 'Lexer' class
    ...
}
... generated RULES go here ...

{
    // global code stuff that will be included in the 'MyTreeParser' source
    // file just before the 'Walker' class below
    ...
}
class MyTreeParser extends TreeParser;
options {
    exportVocab=My;
}
{
    // additional methods and members for the generated 'Walker' class
    ...
}
... generated RULES go here ...

The version number in parentheses shows the tool version used to develop and test; it may work with older versions as well. Python 2.2 or better is required, as some recent Python features (like super(), for example) are being used.
More notes on using ANTLR Python
The API of the generated lexers, parsers, and treeparsers is supposed to be similar to the Java ones. However, calling a lexer is somewhat simplified:
### class "calcLexer extends Lexer" will generate python ### module "calcLexer" with class "Lexer". import calcLexer ### read from stdin .. L = calcLexer.Lexer() ### read from file "test.in" .. L = calcLexer.Lexer("test.in") ### open a file and read from it .. f = file("test.in", "r") L = calcLexer.Lexer(f) ### this works of course as well import sys L = calcLexer.Lexer(sys.stdin) ### use a shared input state L1 = calcLexer.Lexer(...) state = L1.inputState L2 = calcLexer.Lexer(state)The loop for the lexer to retrieve token by token can be written as:
lexer = calcLexer.Lexer()   ### create a lexer for calculator
for token in lexer:
    ## do something with token
    print token

or even:

for token in calcLexer.Lexer():   ### create a lexer for calculator
    ## do something with token
    print token

As an iterator is available for all TokenStreams, you can apply the same technique with a TokenStreamSelector.
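A sketch of that technique, assuming a second, hypothetical lexer module otherLexer to switch to:

import antlr
import calcLexer, otherLexer

selector = antlr.TokenStreamSelector()
selector.addInputStream(calcLexer.Lexer(), "calc")
selector.addInputStream(otherLexer.Lexer(), "other")
selector.select("calc")

for token in selector:      ### iterate over the selected stream
    print token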
However, writing this particular lexer loop is rarely necessary as it is generated by default in each generated lexer. Just run:
python calcLexer.py < calc.in

to test the generated lexer.
Symbolic token numbers, the table of literals, and bitset data functions are generated at file (module) scope instead of class scope. For example:
import calcLexer            # import the calc lexer module

calcLexer.EOF_TYPE          # prints 1
calcLexer.literals          # { ';': 11, 'end': 12, 'begin': 10 }

Comments in actions should be in Java/C++ format, i.e. // and /* ... */ are valid comments. However, make sure that you put a comment before or after a statement, but not within. For example, this will not work:
x = /* one */ 1

The reason is that Python only supports single-line comments. Such a Python comment skips everything till end-of-line. Therefore in the translation of the comment a newline will be introduced on reaching */. The code above would result in the following Python code in the generated file:
x = # one
1

which is probably not what you want.
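Placing the comment on a line of its own, before or after the statement, avoids the problem; for example:

/* one */
x = 1

translates cleanly into:

# one
x = 1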
- The Lexer actions $newline, $nl and $skip have been introduced as language-independent shortcuts for calling self.newline() ($newline, $nl) and setting _ttype = SKIP ($skip).
In Python, arguments to function and method calls do not have a declared type. Also, functions and methods do not have to declare a return type. If you want to pass a value to a rule in your grammar, you can do so by simply providing the name of a variable.
ident [symtable]
    :   ( 'a'..'z' | '0'..'9' )+
    ;

Similarly, if you want a rule to pass back a return value, you do not have to provide a type either. It is possible to provide a default value:

sign returns [isPos = False]
    :   '-' { /* default value is OK */ }
    |   '+' { isPos = True }
    ;
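From driver code, such rules are called like ordinary methods; a hypothetical sketch, with "parser" being a generated Parser instance containing the two rules above:

symtable = {}
parser.ident(symtable)      # pass an argument to a rule
isPos = parser.sign()       # receive a rule's return value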
The __init__ method of the generated Lexer, Parser, or TreeParser has the following signature:
def __init__(self, *args, **kwargs):
    ...

So if you need to pass special arguments to your generated class, you can use **kwargs to check for a particular keyword argument, irrespective of any non-keyword arguments that you did provide. If you have a TokenStreamSelector that you want to access locally, you can pass it to the Lexer in the following call:
MySpecialLexer.Lexer(sys.stdin, selector=TokenStreamSelector())

while in the __init__ header of this particular grammar you can specify the handling of the selector keyword argument in the following way:
header "MyParser.__init__" { self.selector = None if kwargs.has_key("selector"): self.selector = kwargs["selector"] assert(isinstance(self.selector, TokenStreamSelector)) }Because of limitations in the lexer of the ANTLR compiler generator itself, you cannot use single quoted strings of more than one character in your Python code.
So if you use a Python string like 'wink, wink, nudge, nudge' in one of your actions, ANTLR will give a parse error when you try to compile this grammar. Instead you should use double quotes: "wink, wink, nudge, nudge".

Unicode is supported, but it is easy to run into errors if your terminal (output device) is not able to handle Unicode characters.
Here are some rules when using Unicode input:
- You need to wrap your input stream in a stream reader that translates bytes into Unicode characters. This usually requires knowledge of your input's encoding. Assuming, for example, that your input is 'latin1', you would do this:
### replace stdin with a wrapper that spits out
### unicode chars.
sys.stdin = codecs.lookup('latin1')[-2](sys.stdin)

Here reading from stdin gets wrapped.

- When printing tokens etc. containing Unicode characters, it appears to be best to convert explicitly to a unicode string before printing. Consider:
for token in unicode_l.Lexer():
    print unicode(token)    ## explicit cast

The need for this explicit cast appears to be due to a bug in Python found during development (discussion still in progress).
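Putting both rules together, a minimal end-to-end sketch (assuming a hypothetical generated lexer module unicodeLexer and 'latin1' input):

import sys
import codecs
import unicodeLexer

### decode 'latin1' input bytes into unicode chars
sys.stdin = codecs.lookup('latin1')[-2](sys.stdin)

for token in unicodeLexer.Lexer():
    print unicode(token)    ## explicit cast, see above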