A lexer (often called a scanner) breaks up an input stream of characters into vocabulary symbols for a parser, which applies a grammatical structure to that symbol stream. Because ANTLR employs the same recognition mechanism for lexing, parsing, and tree parsing, ANTLR-generated lexers are much stronger than DFA-based lexers such as those generated by DLG (from PCCTS 1.33) and lex.
The increase in lexing power comes at the cost of some inconvenience in lexer specification and indeed requires a serious shift in how you think about lexical analysis. See the comparison of LL(k) and DFA-based lexical analysis at the end of this section.
ANTLR generates predicated-LL(k) lexers, which means that you can use semantic and syntactic predicates and k>1 lookahead, among the other advantages described throughout this section.
The overall structure of a lexer is:
class MyLexer extends Lexer;
options {
    some options
}
{
    lexer class members
}
lexical rules
Rules defined within a lexer grammar must have a name beginning with an uppercase letter. These rules implicitly match characters on the input stream instead of tokens on the token stream. Referenced grammar elements include token references (implicit lexer rule references), characters, and strings. Lexer rules are processed in the exact same manner as parser rules and, hence, may specify arguments and return values; further, lexer rules can also have local variables and use recursion. The following defines a rule called ID that is available as a token type in the parser.
ID : ( 'a'..'z' )+ ;
This rule would become part of the resulting lexer and would appear as a method called mID() that looks sort of like this:
public final void mID(...)
throws RecognitionException,
CharStreamException, TokenStreamException
{
...
_loop3:
do {
if (((LA(1) >= 'a' && LA(1) <= 'z'))) {
matchRange('a','z');
}
} while (...);
...
}
It is a good idea to become familiar with ANTLR's output--the generated lexers are human-readable and make a lot of concepts more transparent.
To have the characters matched by a rule ignored, set the token type to Token.SKIP. For example,
WS : ( ' '
     | '\t'
     | '\n' { newline(); }
     | '\r'
     )+
     { $setType(Token.SKIP); }
   ;

Skipped tokens force the lexer to reset and try for another token. Skipped tokens are never sent back to the parser.
As with most lexer generators like lex, you simply list a
set of lexical rules that match tokens. The tool then automatically
generates code to map the next input character(s) to a rule likely to
match. Because ANTLR generates recursive-descent lexers just like it
does for parsers and tree parsers, ANTLR automatically generates a
method for a fictitious rule called nextToken that predicts
which of your lexer rules will match upon seeing the character
lookahead. You can think of this method as just a big "switch" that
routes recognition flow to the appropriate rule (the code may be much
more complicated than a simple switch-statement, however).
Method nextToken is the only method of TokenStream
(in Java):
public interface TokenStream {
public Token nextToken() throws TokenStreamException;
}
A parser feeds off a lookahead buffer and the buffer pulls from any
TokenStream.
Consider the following two ANTLR lexer rules:
INT : ('0'..'9')+;
WS : ' ' | '\t' | '\r' | '\n';
You will see something like the following method in a lexer generated by
ANTLR:
public Token nextToken() throws TokenStreamException {
...
for (;;) {
Token _token = null;
int _ttype = Token.INVALID_TYPE;
resetText();
...
switch (LA(1)) {
case '0': case '1': case '2': case '3':
case '4': case '5': case '6': case '7':
case '8': case '9':
mINT(); break;
case '\t': case '\n': case '\r': case ' ':
mWS(); break;
default: // error
}
...
}
}
What happens when the same character predicts more than a single lexical rule? ANTLR generates a nondeterminism warning between the offending rules, indicating that you need to make sure your rules do not have common left-prefixes. ANTLR does not follow the common lexer rule of "first definition wins" (the alternatives within a rule, however, still follow this rule). Instead, sufficient power is given to handle the two most common cases of ambiguity, namely "keywords vs. identifiers" and "common prefixes"; for especially nasty cases you can use syntactic or semantic predicates.
What if you want to break up the definition of a complicated rule into multiple rules? Surely you don't want every rule to result in a complete Token object in this case. Some rules exist only to help other rules construct tokens. To distinguish these "helper" rules from rules that result in tokens, use the protected modifier. This overloading of the access-visibility Java term occurs because if the rule is not visible, it cannot be "seen" by the parser (yes, this nomenclature sucks). See also What is a "protected" lexer rule.
Another, more practical, way to look at this is to note that only non-protected rules get called by nextToken and, hence, only non-protected rules can generate tokens that get shoved down the TokenStream pipe to the parser.
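For example, in the following sketch only INT is visible to nextToken and, hence, to the parser; DIGIT exists solely to help define it:

// DIGIT is a helper; only INT is visible to nextToken() and the parser
protected
DIGIT : '0'..'9' ;

INT   : (DIGIT)+ ;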
All rules return a token object (conceptually) automatically, which contains the text matched for the rule and its token type at least. To specify a user-defined return value, define a return value and set it in an action:
protected INT returns [int v] : ('0'..'9')+ { v=Integer.parseInt($getText); } ;
Note that only protected rules can have a return type, since regular lexer rules are invoked by nextToken() and the parser cannot access the return value; allowing one would only lead to confusion.
Lexer rules allow your parser to match context-free structures on the input character stream as opposed to the much weaker regular structures (using a DFA--deterministic finite automaton). For example, consider that matching nested curly braces with a DFA must be done using a counter whereas nested curlies are trivially matched with a context-free grammar:
ACTION : '{' ( ACTION | ~'}' )* '}' ;
The recursion from rule ACTION to ACTION, of course, is the dead giveaway that this is not an ordinary lexer rule.
Because the same algorithms are used to analyze lexer and parser rules, lexer rules may use more than a single symbol of lookahead, can use semantic predicates, and can specify syntactic predicates to look arbitrarily ahead, thus, providing recognition capabilities beyond the LL(k) languages into the context-sensitive. Here is a simple example that requires k>1 lookahead:
ESCAPE_CHAR
    :   '\\' 't'    // two characters of lookahead needed,
    |   '\\' 'n'    // due to common left-prefix
    ;
To illustrate the use of syntactic predicates for lexer rules, consider the problem of distinguishing between floating point numbers and ranges in Pascal. Input 3..4 must be broken up into 3 tokens: INT, RANGE, followed by INT. Input 3.4, on the other hand, must be sent to the parser as a REAL. The trouble is that the series of digits before the first '.' can be arbitrarily long. The scanner must consume the first '.' to see if the next character is a '.', which would imply that it must back up and consider the first series of digits an integer. Using a non-backtracking lexer makes this task very difficult; without backtracking, your lexer has to be able to respond with more than a single token at one time. However, a syntactic predicate can be used to specify what arbitrary lookahead is necessary:
class Pascal extends Parser;

prog:   INT
        ( RANGE INT
          { System.out.println("INT .. INT"); }
        | EOF
          { System.out.println("plain old INT"); }
        )
    |   REAL { System.out.println("token REAL"); }
    ;

class LexPascal extends Lexer;

WS  :   (' '
        | '\t'
        | '\n'
        | '\r')+
        { $setType(Token.SKIP); }
    ;

protected
INT :   ('0'..'9')+
    ;

protected
REAL:   INT '.' INT
    ;

RANGE
    :   ".."
    ;

RANGE_OR_INT
    :   ( INT ".." ) => INT  { $setType(INT); }
    |   ( INT '.' )  => REAL { $setType(REAL); }
    |   INT                  { $setType(INT); }
    ;
ANTLR lexer rules are even able to handle FORTRAN assignments and other difficult lexical constructs. Consider the following DO loop:
DO 100 I = 1,10
If the comma were replaced with a period, the loop would become an assignment to a weird variable called "DO100I":
DO 100 I = 1.10
The following rules correctly differentiate the two cases:
DO_OR_VAR
    :   (DO_HEADER)=> "DO" { $setType(DO); }
    |   VARIABLE           { $setType(VARIABLE); }
    ;

protected
DO_HEADER
options { ignore=WS; }
    :   "DO" INT VARIABLE '=' EXPR ','
    ;

protected INT : ('0'..'9')+;

protected WS : ' ';

protected
VARIABLE
    :   'A'..'Z' ('A'..'Z' | ' ' | '0'..'9')*
        { /* strip space from end */ }
    ;

// just an int or float
protected
EXPR:   INT ( '.' (INT)? )?
    ;
The previous examples discuss differentiating lexical rules via
lots of lookahead (fixed k or arbitrary). There are other situations
where you have to turn on and off certain lexical rules (making
certain tokens valid and invalid) depending on prior context or
semantic information. One of the best examples is matching a token
only if it starts on the left edge of a line (i.e., column 1).
Without being able to test the state of the lexer's column counter,
you cannot do a decent job. Here is a simple DEFINE rule
that is only matched if the semantic predicate is true.
DEFINE
: {getColumn()==1}? "#define" ID
;
Semantic predicates on the left edge of single-alternative lexical rules get hoisted into the nextToken prediction mechanism. Adding such a predicate to a rule means the rule is not a candidate for recognition until the predicate evaluates to true. In this case, if the column is greater than 1, the method for DEFINE is never entered, even if the lookahead predicts #define.
Another useful example involves context-sensitive recognition such
as when you want to match a token only if your lexer is in a particular
context (e.g., the lexer previously matched some trigger sequence). If
you are matching tokens that separate rows of data such as
"----", you probably only want to match this if the "begin
table" sequence has been found.
BEGIN_TABLE
: '[' {this.inTable=true;} // enter table context
;
ROW_SEP
: {this.inTable}? "----"
;
END_TABLE
: ']' {this.inTable=false;} // exit table context
;
This predicate hoisting ability is another way to simulate lexical
states from DFA-based lexer generators like lex, though
predicates are much more powerful. (You could even turn on certain
rules according to the phase of the moon). ;)
Many languages have a general "identifier" lexical rule, and keywords that are special cases of the identifier pattern. A typical identifier token is defined as:
ID : LETTER (LETTER | DIGIT)*;
This is often in conflict with keywords. ANTLR solves this problem by letting you put fixed keywords into a literals table. The literals table (which is usually implemented as a hash table in the lexer) is checked after each token is matched, so that the literals effectively override the more general identifier pattern. Literals are created in one of two ways. First, any double-quoted string used in a parser is automatically entered into the literals table of the associated lexer. Second, literals may be specified in the lexer grammar by means of the literal option. In addition, the testLiterals option gives you fine-grained control over the generation of literal-testing code.
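For example, one common arrangement (sketched here; the rule and literal names are illustrative) is to disable literal testing grammar-wide and re-enable it only on the identifier rule, since only identifier-shaped text can collide with keywords:

class L extends Lexer;
options {
    testLiterals = false;    // do not check tokens against the literals table by default
}

// "begin", used as a literal in the parser, overrides ID via the literals table;
// only ID's text is worth testing, so re-enable the check here
ID
options { testLiterals = true; }
    :   ('a'..'z'|'A'..'Z'|'_') ('a'..'z'|'A'..'Z'|'_'|'0'..'9')*
    ;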
Fixed-length common prefixes in lexer rules are best handled by increasing the lookahead depth of the lexer. For example, some operators from Java:
class MyLexer extends Lexer;
options {
    k=4;
}

GT                      : ">";
GE                      : ">=";
RSHIFT                  : ">>";
RSHIFT_ASSIGN           : ">>=";
UNSIGNED_RSHIFT         : ">>>";
UNSIGNED_RSHIFT_ASSIGN  : ">>>=";
Token definitions can be transferred from one grammar to another by way of token definition files. This is accomplished using the importVocab and exportVocab options.
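As a sketch of how the two options fit together (the vocabulary name Data is illustrative), a lexer can export its vocabulary and a separate grammar can import it:

// lexer grammar: export its token vocabulary under the name Data
class DataLexer extends Lexer;
options { exportVocab = Data; }   // writes DataTokenTypes.txt
...

// a separate grammar file: reuse those token definitions
class DataParser extends Parser;
options { importVocab = Data; }
...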
Use the ~ operator to invert a character or set of characters. For example, to match any character other than newline, the following rule references ~'\n'.
SL_COMMENT: "//" (~'\n')* '\n';
The ~ operator also inverts a character set:
NOT_WS: ~(' ' | '\t' | '\n' | '\r');
The range operator can be used to create sequential character sets:
DIGIT : '0'..'9' ;
See the next section.
A unique situation occurs when analyzing lexical grammars, one which is similar to the end-of-file condition when analyzing parser grammars. Consider how you would compute the lookahead sets for the ('b' | ) subrule in the following rule B:
class L extends Lexer;

A   :   B 'b'
    ;

protected  // only called from another lex rule
B   :   'x' ('b' | )
    ;
The lookahead for the first alternative of the subrule is clearly 'b'. The second alternative is empty and the lookahead set is the set of all characters that can follow references to the subrule, which is the follow set for rule B. In this case, the 'b' character follows the reference to B and is therefore, indirectly, the lookahead set for the empty alternative. Because 'b' begins both alternatives, the parsing decision for the subrule is nondeterministic (ambiguous, as we sometimes say). ANTLR will justly generate a warning for this subrule (unless you turn it off with the warnWhenFollowAmbig option).
Now, consider what would make sense for the lookahead if rule A did not exist and rule B was not protected (it was a complete token rather than a "subtoken"):
B : 'x' ('b' | ) ;
In this case, the empty alternative finds only the end of the rule as the lookahead with no other rules referencing it. In the worst case, any character could follow this rule (i.e., start the next token or error sequence). So, should not the lookahead for the empty alternative be the entire character vocabulary? And should not this result in a nondeterminism warning as it must conflict with the 'b' alternative? Conceptually, yes to both questions. From a practical standpoint, however, you are clearly saying "heh, match a 'b' on the end of token B if you find one." I argue that no warning should be generated and ANTLR's policy of matching elements as soon as possible makes sense here as well.
Another reason not to represent the lookahead as the entire vocabulary is that a vocabulary of '\u0000'..'\uFFFF' is really big (one set is 2^16 / 32 long words of memory!). Any alternative with '<end-of-token>' in its lookahead set will be pushed to the ELSE or DEFAULT clause by the code generator so that huge bitsets can be avoided.
The summary is that lookahead purely derived from hitting the end of a lexical rule (unreferenced by other rules) cannot be the cause of a nondeterminism. The following table summarizes a bunch of cases that will help you figure out when ANTLR will complain and when it will not.
Rule | ANTLR's analysis
---|---|
X : 'q' ('a')? ('a')? ; | The first subrule is nondeterministic: 'a' from the second subrule (and end-of-token) is in the lookahead for the exit branch of (...)?. |
X : 'q' ('a')? ('c')? ; | No nondeterminism. |
Y : 'y' X 'b' ; protected X : 'b' \| ; | Nondeterminism in rule X. |
X : 'x' ('a'\|'c'\|'d')+ \| 'z' ('a')+ ; | No nondeterminism, as the exit branches of the loops see lookahead computed purely from end-of-token. |
Y : 'y' ('a')+ ('a')? ; | Nondeterminism between the 'a' of (...)+ and the exit branch, as the exit can see the 'a' of the optional subrule. This would be a problem even if ('a')? were simply 'a'. A (...)* loop would report the same problem. |
X : 'y' ('a' 'b')+ 'a' 'c' ; | At k=1, this is a nondeterminism for the (...)+ since 'a' predicts both staying in and exiting the loop. At k=2, there is no nondeterminism. |
Q : 'q' ('a' \| )? ; | Here, there is an empty alternative inside an optional subrule. A nondeterminism is reported, as two paths predict end-of-token. |
You might be wondering why the first subrule below is ambiguous:
('a')? ('a')?
The answer is that the NFA to DFA conversion would result in a DFA with the 'a' transitions merged into a single state transition! This is ok for a DFA where you cannot have actions anywhere except after a complete match. Remember that ANTLR lets you do the following:
('a' {do-this})? ('a' {do-that})?
One other thing is important to know. Recall that alternatives in lexical rules are reordered according to their lookahead requirements, from highest to lowest.
A : 'a' | 'a' 'b' ;
At k=2, ANTLR can see 'a' followed by '<end-of-token>' for the first alternative and 'a' followed by 'b' for the second. Because the lookahead at depth 2 for the first alternative is '<end-of-token>', the warning that depth two could match any character for that alternative is suppressed. To behave naturally and to generate good code when no warning is generated, ANTLR reorders the alternatives so that the generated code is similar to:
A() {
    if ( LA(1)=='a' && LA(2)=='b' ) {   // alt 2
        match('a'); match('b');
    }
    else if ( LA(1)=='a' ) {            // alt 1
        match('a');
    }
    else {error;}
}
Note the lack of lookahead test for depth 2 for alternative 1. When an empty alternative is present, ANTLR moves it to the end. For example,
A : 'a' | | 'a' 'b' ;
results in code like this:
A() {
    if ( LA(1)=='a' && LA(2)=='b' ) {   // alt 2
        match('a'); match('b');
    }
    else if ( LA(1)=='a' ) {            // alt 1
        match('a');
    }
    else {
    }
}
Note that there is no way for a lexing error to occur here (which makes sense because the rule is optional--though this rule only makes sense when protected).
Semantic predicates get moved along with their associated alternatives when the alternatives are sorted by lookahead depth. It would be weird if the addition of a {true}? predicate (which implicitly exists for each alternative) changed what the lexer recognized! The following rule is reordered so that alternative 2 is tested first.
B : {true}? 'a' | 'a' 'b' ;
Syntactic predicates are not reordered. Mentioning the predicate after the rule it conflicts with results in an ambiguity such as is in this rule:
F : 'c' | ('c')=> 'c' ;
Other alternatives are, however, reordered with respect to the syntactic predicates even when a switch is generated for the LL(1) components and the syntactic predicates are pushed into the default case. The following rule illustrates the point.
F : 'b' | {/* empty-path */} | ('c')=> 'c' | 'c' | 'd' | 'e' ;
Rule F's decision is generated as follows:
switch ( la_1 ) {
case 'b':
    { match('b'); break; }
case 'd':
    { match('d'); break; }
case 'e':
    { match('e'); break; }
default:
    boolean synPredMatched15 = false;
    if (((la_1=='c'))) {
        int _m15 = mark();
        synPredMatched15 = true;
        guessing++;
        try {
            match('c');
        }
        catch (RecognitionException pe) {
            synPredMatched15 = false;
        }
        rewind(_m15);
        guessing--;
    }
    if ( synPredMatched15 ) {
        match('c');
    }
    else if ((la_1=='c')) {
        match('c');
    }
    else {
        if ( guessing==0 ) {
            /* empty-path */
        }
    }
}
Notice how the empty path got moved after the test for the 'c' alternative.
Character literals are not limited to printable ASCII characters. To demonstrate the concept, imagine that you want to parse a binary file that contains strings and short integers. To distinguish between them, marker bytes are used according to the following format:
format | description |
---|---|
'\0' highbyte lowbyte | Short integer |
'\1' string of non-'\2' chars '\2' | String |
Sample input (274 followed by "a test") might look like the following in hex (output from UNIX od -h command):
0000000000 00 01 12 01 61 20 74 65 73 74 02
or as viewed as characters:
0000000000 \0 001 022 001 a t e s t 002
The parser is trivially just a (...)+ around the two types of input tokens:
class DataParser extends Parser;

file:   ( sh:SHORT  {System.out.println(sh.getText());}
        | st:STRING {System.out.println("\""+st.getText()+"\"");}
        )+
    ;
All of the interesting stuff happens in the lexer. First, define the class and set the vocabulary to be all 8 bit binary values:
class DataLexer extends Lexer;
options {
    charVocabulary = '\u0000'..'\u00FF';
}
Then, define the two tokens according to the specifications, with markers around the string and a single marker byte in front of the short:
SHORT
    :   // match the marker followed by any 2 bytes
        '\0' high:. lo:.
        {
            // pack the bytes into a two-byte short
            int v = (((int)high)<<8) + lo;
            // make a string out of the value
            $setText(""+v);
        }
    ;

STRING
    :   '\1'!       // begin string (discard)
        ( ~'\2' )*
        '\2'!       // end string (discard)
    ;
To invoke the parser, use something like the following:
import java.io.*;

class Main {
    public static void main(String[] args) {
        try {
            // use DataInputStream to grab bytes
            DataLexer lexer = new DataLexer(
                new DataInputStream(System.in)
            );
            DataParser parser = new DataParser(lexer);
            parser.file();
        }
        catch(Exception e) {
            System.err.println("exception: "+e);
        }
    }
}
ANTLR (as of 2.7.1) allows you to recognize input composed of Unicode characters; that is, you are not restricted to 8 bit ASCII characters. I would like to emphasize that ANTLR allows, but does not yet fully support, Unicode as there is more work to be done. For example, end-of-file is currently incorrectly specified:
CharScanner.EOF_CHAR=(char)-1;
This must be an integer -1, not a char; the cast actually narrows -1 to 0xFFFF. I have to go through the entire code base looking for these problems. Plus, we should really have a special syntax to mean "java identifier character" and some standard encodings for non-Western character sets etc... I expect 2.7.3 to add nice predefined character blocks like LETTER.
The following is a very simple example of how to match a series of space-separated identifiers.
class L extends Lexer;
options {
    // Allow any char but \uFFFF (16 bit -1)
    charVocabulary='\u0000'..'\uFFFE';
}
{
    private static boolean done = false;

    public void uponEOF() throws TokenStreamException, CharStreamException {
        done=true;
    }

    public static void main(String[] args) throws Exception {
        L lexer = new L(System.in);
        while ( !done ) {
            Token t = lexer.nextToken();
            System.out.println("Token: "+t);
        }
    }
}

ID  :   ID_START_LETTER ( ID_LETTER )*
    ;

WS  :   (' '|'\n') {$setType(Token.SKIP);}
    ;

protected
ID_START_LETTER
    :   '$'
    |   '_'
    |   'a'..'z'
    |   '\u0080'..'\ufffe'
    ;

protected
ID_LETTER
    :   ID_START_LETTER
    |   '0'..'9'
    ;
A final note on Unicode. The ~x "not" operator includes everything in your specified vocabulary (up to 16 bit character space) except x. For example,
~('$'|'a'..'z')
results in every Unicode character except '$' and the lowercase letters 'a' through 'z', assuming your charVocabulary is '\u0000'..'\uFFFF'.
Once you have specified what to match in a lexical rule, you may ask "what can I discover about what will be matched for each rule element?" ANTLR allows you to label the various elements and, at parse-time, access the text matched for the element. You can even specify the token object to return from the rule and, hence, from the lexer to the parser. This section describes the text and token object handling characteristics of ANTLR.
There are times when you want to look at the text matched for the current rule, alter it, or set the text of a rule to a new string. The most common case is when you want to simply discard the text associated with a few of the elements that are matched for a rule such as quotes.
ANTLR provides the '!' operator that lets you indicate that certain elements should not contribute to the text for a token being recognized. The '!' operator is used just as when building trees in the parser. For example, if you are matching HTML tags and you do not want the '<' and '>' characters returned as part of the token text, you could manually remove them from the token's text before they are returned, but a better way is to suffix the unwanted characters with '!'. For example, the <br> tag might be recognized as follows:
BR : '<'! "br" '>'! ; // discard < and >
Suffixing a lexical rule reference with '!' forces the text matched by the invoked rule to be discarded (it will not appear in the text for the invoking rule). For example, if you do not care about the mantissa of a floating point number, you can suffix the rule that matches it with a '!':
FLOAT : INT ('.'! INT!)? ; // keep only first INT
As a shorthand notation, you may suffix an alternative or rule with '!' to indicate the alternative or rule should not pass any text back to the invoking rule or parser (if nonprotected):
// ! on rule: nothing is auto added to text of rule.
rule! : ... ;

// ! on alt: nothing is auto added to text for alt
rule : ...
     |! ...
     ;
Item suffixed with '!' | Effect |
---|---|
char or string literal | Do not add text for this atom to current rule's text. |
rule reference | Do not add the text matched while recognizing this rule to the current rule's text. |
alternative | Nothing that is matched by alternative is added to current rule's text; the enclosing rule contributes nothing to any invoking rule's text. For nonprotected rules, the text for the token returned to parser is blank. |
rule definition | Nothing that is matched by any alternative is added to current rule's text; the rule contributes nothing to any invoking rule's text. For nonprotected rules, the text for the token returned to parser is blank. |
While the '!' implies that the text is not added to the text for the current rule, you can label an element to access the text (via the token if the element is a rule reference).
In terms of implementation, the characters are always added to the current text buffer, but are carved out when necessary (as this will be the exception rather than the rule, making the normal case efficient).
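For example, the following sketch, which assumes a label and the '!' suffix may be combined on the same rule reference and that INT is defined as earlier, keeps the fraction out of FLOAT's text but still inspects it:

FLOAT
    :   INT
        ( '.'! f:INT!   // the fraction is not added to FLOAT's text...
          { System.out.println("fraction="+f.getText()); }  // ...but is still reachable via the label
        )?
    ;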
The '!' operator is great for discarding certain characters or groups of characters, but what about the case where you want to insert characters or totally reset the text for a rule or token? ANTLR provides a series of special methods to do this (we prefix the methods with '$' because Java does not have a macro facility and ANTLR must recognize the special methods in your actions). The following table summarizes.
Method | Description/Translation |
---|---|
$append(x) | Append x to the text of the surrounding rule. Translation: text.append(x) |
$setText(x) | Set the text of the surrounding rule to x. Translation: text.setLength(_begin); text.append(x) |
$getText | Return a String of the text for the surrounding rule. Translation: new String(text.getBuffer(), _begin, text.length()-_begin) |
$setToken(x) | Set the token object that this rule is to return. See the section on Token Object Creation. Translation: _token = x |
$setType(x) | Set the token type of the surrounding rule. Translation: _ttype = x |
setText(x) | Set the text for the entire token being recognized regardless of what rule the action is in. No translation. |
getText() | Get the text for the entire token being recognized regardless of what rule the action is in. No translation. |
One of the great things about an ANTLR generated lexer is that the text of a token can be modified incrementally as the token is recognized (an impossible task for a DFA-based lexer):
STRING: '"' ( ESCAPE | ~('"'|'\\') )* '"' ; protected ESCAPE : '\\' ( 'n' { $setText("\n"); } | 'r' { $setText("\r"); } | 't' { $setText("\t"); } | '"' { $setText("\""); } ) ;
Because lexical rules can call other rules just as in the parser, you sometimes want to know what text was matched for that portion of the token. To support this, ANTLR allows you to label lexical rule references and obtain a Token object representing the text, token type, line number, and so on matched for that rule reference. This ability corresponds to being able to access the text matched for a lexical state in a DFA-based lexer. For example, here is a simple rule that prints out the text matched for a rule reference, INT.
INDEX : '[' i:INT ']' {System.out.println(i.getText());} ;
INT : ('0'..'9')+ ;
If you moved the labeled reference and action to a parser, it would do the same thing (match an integer and print it out).
All lexical rules conceptually return a Token object, but in practice this would be inefficient. ANTLR generates methods so that a token object is created only if any invoking reference is labeled (indicating they want the token object). Imagine another rule that calls INT without a label.
FLOAT : INT ('.' INT)? ;
In this case, no token object is created for either reference to INT. You will notice a boolean argument to every lexical rule that tells it whether or not a token object should be created and returned (via a member variable). All nonprotected rules (those that are "exposed" to the parser) must always generate tokens, which are passed back to the parser.
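In the generated code this flag appears roughly as follows (a sketch of the generated method header; details vary with the rule):

public final void mINT(boolean _createToken)
    throws RecognitionException, CharStreamException, TokenStreamException
{
    // ... match the characters; build a Token in _returnToken
    // only if _createToken is true ...
}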
While token creation is normally handled automatically, you can also manually specify the token object to be returned from a lexical rule. The advantage is that you can pass heterogeneous token objects back to the parser, which is extremely useful for parsing languages with complicated tokens such as HTML (the <img> and <table> tokens, for example, can have lots of attributes). Here is a rule for the <img> tag that returns a token object of type ImageToken:
IMAGE
{ Attributes attrs; }
    :   "<img " attrs=ATTRIBUTES '>'
        {
            ImageToken t = new ImageToken(IMAGE, $getText);
            t.setAttributes(attrs);
            $setToken(t);
        }
    ;

ATTRIBUTES returns [Attributes a]
    :   ...
    ;
The $setToken function specifies that its argument is to be returned when the rule exits. The parser will receive this specific object instead of a CommonToken or whatever else you may have specified with the Lexer.setTokenObjectClass method. The action in rule IMAGE references a token type, IMAGE, and a lexical rule reference, ATTRIBUTES, which matches all of the attributes of an image tag and returns them in a data structure called Attributes.
What would it mean for rule IMAGE to be protected (i.e., referenced only from other lexical rules rather than from nextToken)? Any invoking labeled rule reference would receive the object (not the parser) and could examine it, or manipulate it, or pass it on to the invoker of that rule. For example, if IMAGE were called from TAGS rather than being nonprotected, rule TAGS would have to pass the token object back to the parser for it.
TAGS:   img:IMAGE {$setToken(img);} // pass to parser
    |   PARAGRAPH                   // probably has no special token
    |   ...
    ;
Setting the token object for a nonprotected rule invoked without a label has no effect other than to waste time creating an object that will not be used.
We use a CharScanner member _returnToken to do the return in order to not conflict with return values used by the grammar developer. For example,
PTAG: "<p>" {$setToken(new ParagraphToken($$));} ;
which would be translated to something like:
protected final void mPTAG()
    throws RecognitionException, CharStreamException, TokenStreamException
{
    Token _token = null;
    match("<p>");
    _returnToken = new ParagraphToken(text-of-current-rule);
}
You often want to perform an action upon seeing a pattern or two in a complicated input stream, such as pulling out links in an HTML file. One solution is to take the HTML grammar and just put actions where you want. Using a complete grammar is overkill and you may not have a complete grammar to start with.
ANTLR provides a mechanism similar to AWK that lets you say "here are the patterns I'm interested in--ignore everything else." Naturally, AWK is limited to regular expressions whereas ANTLR accepts context-free grammars (Uber-AWK?). For example, consider pulling out the <p> and <br> tags from an arbitrary HTML file. Using the filter option, this is easy:
class T extends Lexer;
options {
    k=2;
    filter=true;
}

P   :   "<p>" ;
BR  :   "<br>" ;
In this "mode", there is no possibility of a syntax error. Either the pattern is matched exactly or it is filtered out.
This works very well for many cases, but is not sophisticated enough to handle the situation where you want "almost matches" to be reported as errors. Consider the addition of the <table...> tag to the previous grammar:
class T extends Lexer;
options {
    k=2;
    filter = true;
}

P       :   "<p>" ;
BR      :   "<br>" ;
TABLE   :   "<table" (WS)? (ATTRIBUTE)* (WS)? '>' ;
WS      :   ' ' | '\t' | '\n' ;
ATTRIBUTE : ... ;
Now, consider input "<table 8 = width ;>" (a bogus table definition). As is, the lexer would simply scarf past this input without "noticing" the invalid table. What if you want to indicate that a bad table definition was found as opposed to ignoring it? Call method
setCommitToPath(boolean commit)
in your TABLE rule to indicate that you want the lexer to commit to recognizing the table tag:
TABLE : "<table" (WS)? {setCommitToPath(true);} (ATTRIBUTE)* (WS)? '>' ;
Input "<table 8 = width ;>" would result in a syntax error. Note the placement after the whitespace recognition; you do not want <tabletop> reported as a bad table (you want to ignore it).
One further complication in filtering: What if the "skip language" (the stuff in between valid tokens or tokens of interest) cannot be correctly handled by simply consuming a character and trying again for a valid token? You may want to ignore comments or strings or whatever. In that case, you can specify a rule that scarfs anything between tokens of interest by using option filter=RULE. For example, the grammar below filters for <p> and <br> tags as before, but also prints out any other tag (<...>) encountered.
class T extends Lexer;
options {
    k=2;
    filter=IGNORE;
    charVocabulary = '\3'..'\177';
}

P   :   "<p>" ;
BR  :   "<br>" ;

protected
IGNORE
    :   '<' (~'>')* '>'
        {System.out.println("bad tag:"+$getText);}
    |   ( "\r\n" | '\r' | '\n' ) {newline();}
    |   .
    ;
Notice that the filter rule must track newlines in the general case where the lexer might emit error messages so that the line number is not stuck at 0.
The filter rule is invoked either when the lookahead (in nextToken) predicts none of the nonprotected lexical rules or when one of those rules fails. In the latter case, the input is rolled back before attempting the filter rule. Option filter=true is like having a filter rule such as:
IGNORE : . ;
Actions in regular lexical rules are executed even if the rule fails and the filter rule is called. To do otherwise would require every valid token to be matched twice (once to match and once to do the actions like a syntactic predicate)! Plus, there are few actions in lexer rules (usually they are at the end at which point an error cannot occur).
Is the filter rule called when commit-to-path is true and an error is found in a lexer rule? No, an error is reported as with filter=true.
What happens if there is a syntax error in the filter rule? Well, you can either put an exception handler on the filter rule or accept the default behavior, which is to consume a character and begin looking for another valid token.
In summary, the filter option lets you pull out only the patterns of interest, report near misses as errors once you have committed to a token, and control how the text between interesting tokens is skipped.
To make ANTLR generate lexers that behave like the UNIX utility sed (copy standard in to standard out except as specified by the replace patterns), use a filter rule that does the input to output copying:
class T extends Lexer;
options {
    k=2;
    filter=IGNORE;
    charVocabulary = '\3'..'\177';
}

P   :   "<p>"  {System.out.print("<P>");} ;
BR  :   "<br>" {System.out.print("<BR>");} ;

protected
IGNORE
    :   ( "\r\n" | '\r' | '\n' )
        {newline(); System.out.println("");}
    |   c:. {System.out.print(c);}
    ;
This example dumps anything other than <p> and <br> tags to standard out and pushes lowercase <p> and <br> to uppercase. Works great.
Quick: What does the following match?
BLOCK : '{' (.)* '}';
Your first reaction is that it matches any set of characters inside of curly braces. In reality, it matches '{' followed by every single character left on the input stream! Why? Well, because ANTLR loops are greedy--they consume as much input as they can match. Since the wildcard matches any character, it consumes the '}' and beyond. This is a pain for matching strings, comments and so on.
Why can't we switch it around so that it consumes only until it sees something on the input stream that matches what follows the loop, such as the '}'? That is, why can't we make loops nongreedy? The answer is we can, but sometimes you want greedy and sometimes you want nongreedy (PERL has both kinds of closure loops now too). Unfortunately, parsers usually want greedy and lexers usually want nongreedy loops. Rather than make the same syntax behave differently in the various situations, Terence decided to leave the semantics of loops as they are (greedy) and make a subrule option to make loops nongreedy.
I have yet to see a case when building a parser grammar where I did not want a subrule to match as much input as possible. For example, the solution to the classic if-then-else clause ambiguity is to match the "else" as soon as possible:
stat : "if" expr "then" stat ("else" stat)? | ... ;
This ambiguity (which statement should the "else" be attached to) results in a parser nondeterminism. ANTLR warns you about the (...)? subrule as follows:
warning: line 3: nondeterminism upon k==1:"else" between alts 1 and 2 of block
If, on the other hand, you make it clear to ANTLR that you want the subrule to match greedily (i.e., assume the default behavior), ANTLR will not generate the warning. Use the greedy subrule option to tell ANTLR what you want:
stat : "if" expr "then" stat ( options {greedy=true;} : "else" stat)? | ID ;
You are not altering the behavior really, since ANTLR was going to choose to match the "else" anyway, but you have avoided a warning message.
There is no such thing as a nongreedy (...)? subrule because telling an optional subrule not to match anything is the same as not specifying the subrule in the first place. If you make the subrule nongreedy, you will see:
warning in greedy.g: line(4), Being nongreedy only makes sense for (...)+ and (...)* warning: line 4: nondeterminism upon k==1:"else" between alts 1 and 2 of block
Greedy subrules are very useful in the lexer also. If you want to grab any whitespace on the end of a token definition, you can try (WS)? for some whitespace rule WS:
ID : ('a'..'z')+ (WS)? ;
However, if you want to match ID in a loop in another rule that could also match whitespace, you will run into a nondeterminism warning. Here is a contrived loop that conflicts with the (WS)? in ID:
LOOP : ( ID | WS )+ ;
The whitespace on the end of the ID could now be matched either in ID or in LOOP. ANTLR chooses to match the WS immediately, in ID. To shut off the warning, simply tell ANTLR that you mean for it to be greedy, its default behavior:
ID : ('a'..'z')+ (options {greedy=true;}:WS)? ;
ANTLR's default behavior of matching as much as possible in loops and optional subrules is sometimes not what you want in lexer grammars. Most loops that match "a bunch of characters" in between markers, like curly braces or quotes, should be nongreedy loops. For example, to match a nonnested block of characters between curly braces, you want to say:
CURLY_BLOCK_SCARF : '{' (.)* '}' ;
Unfortunately, this does not work--it will consume everything after the '{' until the end of the input. The wildcard matches anything including '}' and so the loop merrily consumes past the ending curly brace.
To force ANTLR to break out of the loop when it sees a lookahead sequence consistent with what follows the loop, use the greedy subrule option:
CURLY_BLOCK_SCARF : '{' ( options { greedy=false; } : . )* '}' ;
To properly take care of newlines inside the block, you should really use the following version that "traps" newlines and bumps up the line counter:
CURLY_BLOCK_SCARF
    :   '{'
        ( options { greedy=false; }
        :   '\r' ('\n')? {newline();}
        |   '\n'         {newline();}
        |   .
        )*
        '}'
    ;
What happens when what follows a nongreedy subrule is not as simple as a single "marker" character like a right curly brace (i.e., what about when you need k>1 to break out of a loop)? ANTLR will either "do the right thing" or warn you that it might not.
First, consider matching C comments:
CMT : "/*" (.)* "*/" ;
As with the curly brace matching, this rule will not stop at the end marker because the wildcard matches the "*/" end marker as well. You must tell ANTLR to make the loop nongreedy:
CMT : "/*" (options {greedy=false;} :.)* "*/" ;
You will not get an error and ANTLR will generate an exit branch
do {
    // nongreedy exit test
    if ((LA(1)=='*')) break _loop3;
    ...
Ooops. k=1, which is not enough lookahead. ANTLR did not generate a warning because it assumes you are providing enough lookahead for all nongreedy subrules. ANTLR cannot determine how much lookahead to use or how much is enough because, by definition, the decision is ambiguous--it simply generates a decision using the maximum lookahead.
You must provide enough lookahead to let ANTLR see the full end marker:
class L extends Lexer;
options {
    k=2;
}

CMT :   "/*" (options {greedy=false;} :.)* "*/"
    ;
Now, ANTLR will generate an exit branch using k=2.
do {
    // nongreedy exit test
    if ((LA(1)=='*') && (LA(2)=='/')) break _loop3;
    ...
If you increase k to 3, ANTLR will generate an exit branch using k=3 instead of 2, even though 2 is sufficient. We know that k=2 is enough, but ANTLR, faced with a nondeterministic decision, uses as much lookahead information as it has in an attempt to yield a deterministic parser.
There is one more issue that you should be aware of. Because ANTLR generates linear approximate decisions instead of full LL(k) decisions, complicated "end markers" can confuse ANTLR. Fortunately, ANTLR knows when it is confused and will let you know.
Consider a simple contrived example where a loop matches either ab or cd:
R : ( options {greedy=false;} : ("ab"|"cd") )+ ("ad"|"cb") ;
Following the loop, the grammar can match ad or cb. These exact sequences are not a problem for a full LL(k) decision, but due to the extreme compression of the linear approximate decision, ANTLR will generate an inaccurate exit branch. In other words, the loop will exit, for example, on ab even though that sequence cannot be matched following the loop. The exit condition is as follows:
// nongreedy exit test if ( _cnt10>=1 && (LA(1)=='a'||LA(1)=='c') && (LA(2)=='b'||LA(2)=='d')) break _loop10;
where the _cnt10 term ensures the loop goes around at least once (but has nothing to do with the nongreedy exit branch condition really). Note that ANTLR has compressed all characters that can possibly be matched at a lookahead depth into a single set, thus, destroying the sequence information. The decision matches the cross product of the sets, including the spurious lookahead sequences such as ab.
Fortunately, ANTLR knows when a decision falls between its approximate decision and a full LL(k) decision--it warns you as follows:
warning in greedy.g: line(3), nongreedy block may exit incorrectly due to limitations of linear approximate lookahead (first k-1 sets in lookahead not singleton).
The parenthetical remark gives you a hint that some k>1 lookahead sequences are correctly predictable even with the linear approximate lookahead compression. The idea is that if all sets for depths 1..(k-1) are singleton sets (exactly one lookahead sequence for first k-1 characters) then linear approximate lookahead compression does not weaken your parser. So, the following variant does not yield a warning since the exit branch is linear approximate as well as full LL(k):
R : ( options {greedy=false;} : . )+ ("ad"|"ae") ;
The exit branch decision now tests lookahead as follows:
(LA(1)=='a') && (LA(2)=='d'||LA(2)=='e')
which accurately predicts when to exit.
With DFA-based lexer generators such as lex, you often need to match pieces of your input with separate sets of rules called lexical states. In ANTLR, you can simply define another rule and call it like any other to switch "states". Better yet, this "state" rule can be reused by other parts of your lexer grammar because the method return stack tells the lexer which rule to return to. DFAs, unlike recursive-descent lexers, have no stack and, hence, can only switch back to one hard-coded rule.
Consider an example where you would normally see a lexical
state--that of matching escape characters within a string. You would
attach an action to the double quote character that switched state to
a STRING_STATE state. This subordinate state would then
define rules for matching the various escapes and finally define a
rule for double quote whose action would switch you back to the
normal lexical state. To demonstrate the solution with ANTLR, let's
start with just a simple string definition:
/** match anything between double-quotes */
STRING : '"' (~'"')* '"' ;
To allow escape characters like \t, you need to add an
alternative to the (...)* loop. (You could do that with a DFA-based
lexer as well, but you could not have any actions associated with the
escape character alternatives to do a replacement etc...). For
convenience, collect all escape sequences in another rule called ESC:
STRING : '"' (ESC | ~('\\'|'"'))* '"' ;
protected
ESC : '\\' ('t' {...} | '"' {...} )* ;
The protected modifier is a (poorly named) indicator that the rule,
ESC, is not a token to be returned to the parser. It just
means that the nextToken method does not attempt to route
recognition flow directly to that rule--ESC must be called
from another lexer rule.
This works for simple escapes, but does not include escapes like
\20. To fix it, just add a reference to another rule
INT that you probably have already defined.
STRING : '"' (ESC | ~('\\'|'"'))* '"' ;
protected
ESC : '\\' ('t' {...} | '"' {...} | INT {...})* ;
INT : ('0'..'9')+ ;
Notice that INT is a real token that you want the parser to
see so the rule is not protected. A rule may invoke any
other rule, protected or not.
Lexical states with DFA-based lexers merely allow you to recognize complicated tokens more easily--the parser has no idea the contortions the lexer goes through. There are some situations where you might want multiple, completely-separate lexers to feed your parser. One such situation is where you have an embedded language such as javadoc comments. ANTLR has the ability to switch between multiple lexers using a token stream multiplexor. Please see the discussion in streams.
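A minimal sketch of that arrangement, assuming the antlr.TokenStreamSelector class described in the streams documentation (the lexer and parser names here are illustrative):

TokenStreamSelector selector = new TokenStreamSelector();
selector.addInputStream(mainLexer, "main");        // the normal language lexer
selector.addInputStream(javadocLexer, "doclexer"); // the embedded-language lexer
selector.select("main");                           // start in the main lexer

MyParser parser = new MyParser(selector);          // parser pulls from whichever lexer is selected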
A method is available for reacting to the end of file condition as if it were an event; e.g., you might want to pop the lexer state at the end of an include file. This method, CharScanner.uponEOF(), is called from nextToken() right before the scanner returns an EOF_TYPE token object to the parser:
public void uponEOF()
    throws TokenStreamException, CharStreamException;
This event is not generated during a syntactic predicate evaluation (i.e., when the parser is guessing) nor in the middle of the recognition of a lexical rule (that would be an IO exception). This event is generated only after the complete evaluation of the last token and upon the next request from the parser for a token.
You can throw exceptions from this method like "Heh, premature eof" or a retry stream exception. See the includeFile/P.g for an example usage.
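For example, an include-file lexer might do something like the following sketch in uponEOF(); it assumes a TokenStreamSelector named selector and a reference to the main lexer (both names are illustrative):

public void uponEOF() throws TokenStreamException, CharStreamException {
    if ( selector.getCurrentStream() != mainLexer ) {
        // hit EOF of an included file: return to the enclosing stream
        selector.pop();
        // ask the selector to try again for a token from the popped stream
        selector.retry();
    }
}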
You may use option caseSensitive=false in the lexer to indicate that you do not want case to be significant when matching characters against the input stream. For example, you want element 'd' to match either an upper- or lowercase D, but you do not want to change the case of the input stream. We have implemented this feature by having the lexer's LA() lookahead method return lowercase versions of the characters. Method consume() still adds the original characters to the string buffer associated with a token. Two further notes:
Case sensitivity for literals is handled separately. That is, set lexer option caseSensitiveLiterals to false when you want the literals testing to be case-insensitive. Implementing this required changes to the literals table. Instead of adding a String, it adds an ANTLRHashString that implements a case-insensitive or case-sensitive hashing as desired.
Note: ANTLR checks the characters of a lexer string to make sure they are lowercase, but does not process escapes correctly--put that one on the "to do" list.
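Here is a small sketch of both options in use (the rule is illustrative):

class CaselessLexer extends Lexer;
options {
    caseSensitive = false;          // LA() returns lowercased characters for matching
    caseSensitiveLiterals = false;  // literals-table lookups ignore case too
}

// write the rule in lowercase; "begin" and "BEGIN" on the input both match,
// and the token text keeps whatever case actually appeared in the input
BEGIN : "begin" ;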
One of the great things about ANTLR is that it generates full predicated-LL(k) lexers rather than the weaker (albeit sometimes easier-to-specify) DFA-based lexers of DLG. With such power, you are tempted (and encouraged) to do real parsing in the lexer. A great example of this is HTML parsing, which begs for a two-level parse: the lexer parses all the attributes and so on within a tag, but the parser handles overall document structure, ordering of the tags, etc. The problem with parsing within a lexer is that you encounter the usual "ignore whitespace" issue, just as you do with regular parsing.
For example, consider matching the <table> tag of HTML, which has many attributes that can be specified within the tag. A first attempt might yield:
OTABLE : "<table" (ATTR)* '>' ;
Unfortunately, input "<table border=1>" does not parse because of the blank character after the table identifier. The solution is not to simply have the lexer ignore whitespace as it is read in because the lookahead computations must see the whitespace characters that will be found in the input stream. Further, defining whitespace as a rudimentary set of things to ignore does not handle all cases, particularly difficult ones, such as comments inside tags like
<table <!--wow...a comment--> border=1>
The correct solution is to specify a rule that is called after each lexical element (character, string literal, or lexical rule reference). We provide the lexer rule option ignore to let you specify the rule to use as whitespace. The solution to our HTML whitespace problem is therefore:
TABLE
options { ignore=WS; }
    :   "<table" (ATTR)* '>'
    ;

// can be protected or non-protected rule
WS  :   ' ' | '\n' | COMMENT | ...
    ;
We think this is cool and we hope it encourages you to do more and more interesting things in the lexer!
Oh, almost forgot. There is a bug in that an extra whitespace reference is inserted after the end of a lexer alternative if the last element is an action. The effect is to include any whitespace following that token in that token's text.
Each lexer object has a line member that can be incremented by calling newline() or by simply changing its value (e.g., when processing #line directives in C).
SL_COMMENT : "//" (~'\n')* '\n' {newline();} ;
Do not forget to split out '\n' recognition when using the not operator to read everything up to a stopping character, as in:
BLOCK
    :   '('
        ( '\n' { newline(); }
        | ~( '\n' | ')' )
        )*
        ')'
    ;
Another way to track line information is to override the consume() method:
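A minimal sketch of that approach, placed in the lexer's member section and assuming CharScanner's LA() and consume() behave as shown elsewhere in this document:

{
    // count a line whenever a newline is about to be consumed, so that
    // individual rules no longer need explicit calls to newline()
    public void consume() throws CharStreamException {
        if ( LA(1)=='\n' ) {
            newline();
        }
        super.consume();
    }
}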
ANTLR (2.7.1 and beyond) tracks character column information so that each token knows what column it starts in; columns start at 1, just like line numbers. The CharScanner.consume() method asks method tab() to update the column number if it sees a tab; otherwise it just increments the column number:
...
if ( c=='\t' ) {
    tab();
}
else {
    inputState.column++;
}
...
By default, tab() is defined as follows:
/** Advance the current column number by an appropriate amount.
 *  If you do not override this to specify how much to jump for a tab,
 *  then tabs are counted as one char. This method is called from consume().
 */
public void tab() {
    // update inputState.column as a function of
    // inputState.column and tab stops.
    // For example, if tab stops are columns 1
    // and 5 etc... and column is 3, then add 2
    // to column.
    inputState.column++;
}
Upon new line, the lexer needs to reset the column number to 1. Here is the default implementation of CharScanner.newline():
public void newline() {
    inputState.line++;
    inputState.column = 1;
}
Do not forget to call newline() in your lexer rule that matches '\n' lest the column number not be reset to 1 at the start of a line.
The shared input state object for a lexer is actually the critter that tracks the column number (as well as the starting column of the current token):
public class LexerSharedInputState {
    protected int column = 1;
    protected int line = 1;
    protected int tokenStartColumn = 1;
    protected int tokenStartLine = 1;
    ...
}
If you want to handle tabs in your lexer, just implement a method like the following to override the standard behavior.
/** Set tabs to 4; just round column up to next tab + 1.
 *  12345678901234567890
 *      x   x   x   x
 */
public void tab() {
    int t = 4;
    int c = getColumn();
    int nc = (((c-1)/t)+1)*t+1;
    setColumn( nc );
}
See the examples/java/columns directory for the complete example.
On rare occasions, you may find it useful to explicitly test the lexer lookahead in say a semantic predicate to help direct the parse. For example, /*...*/ comments have a two character stopping symbol. The following example demonstrates how to use the second symbol of lookahead to distinguish between a single '/' and a "*/":
ML_COMMENT : "/*" ( { LA(2)!='/' }? '*' | '\n' { newline(); } | ~('*'|'\n') )* "*/" ;
The same effect might be possible via a syntactic predicate, but would be much slower than a semantic predicate. A DFA-based lexer handles this with no problem because they use a bunch of (what amount to) gotos whereas we're stuck with structured elements like while-loops.
The following set of rules match arithmetical expressions in a lexer not a parser (whitespace between elements is not allowed in this example but can easily be handled by specifying rule option ignore for each rule):
EXPR
{ int val; }
    :   val=ADDEXPR { System.out.println(val); }
    ;

protected
ADDEXPR returns [int val]
{ int tmp; }
    :   val=MULTEXPR
        ( '+' tmp=MULTEXPR { val += tmp; }
        | '-' tmp=MULTEXPR { val -= tmp; }
        )*
    ;

protected
MULTEXPR returns [int val]
{ int tmp; }
    :   val=ATOM
        ( '*' tmp=ATOM { val *= tmp; }
        | '/' tmp=ATOM { val /= tmp; }
        )*
    ;

protected
ATOM returns [int val]
    :   val=INT
    |   '(' val=ADDEXPR ')'
    ;

protected
INT returns [int val]
    :   ('0'..'9')+ {val=Integer.parseInt($getText);}
    ;
Lexical analyzers were all built by hand in the early days of compilers until DFAs took over as the scanner implementation of choice. DFAs have several advantages over hand-built scanners, most notably that tokens can be specified with terse regular expressions rather than hand-written recognition code. For example, integers and reals might be specified as:
integer : "[0-9]+" ;
real    : "[0-9]+{.[0-9]*}|.[0-9]+" ;
Building a scanner for this by hand would require factoring out the common [0-9]+. For example, a hand-built scanner might look like:
Token nextToken() {
    if ( Character.isDigit(c) ) {
        // match an integer
        if ( c=='.' ) {
            // match another integer
            return new Token(REAL);
        }
        else {
            return new Token(INT);
        }
    }
    else if ( c=='.' ) {
        // match a float starting with .
        return new Token(REAL);
    }
    else ...
}
Conversely, hand-built scanners have advantages of their own over DFA implementations, chiefly speed, power, and ease of debugging:
Tom Pennello of Metaware back in 1986 ("Very Fast LR Parsing") generated LR-based parsers in machine code that used the program counter to do state transitions rather than simulating the PDA. He got a huge speed up in parse time. We can extrapolate from this experiment that avoiding a state machine simulator in favor of raw code results in a speed up.
So, what approach does ANTLR take? Neither! ANTLR allows you to specify lexical items with expressions, but generates a lexer for you that mimics what you would generate by hand. The only drawback is that you still have to do the left-factoring for some token definitions (but at least it is done with expressions and not code). This hybrid approach allows you to build lexers that are much stronger and faster than DFA-based lexers while avoiding much of the overhead of writing the lexer yourself.
In summary, specifying regular expressions is simpler and shorter than writing a hand-built lexer, but hand-built lexers are faster, stronger, able to handle unicode, and easy to debug. This analysis has led many programmers to write hand-built lexers even when DFA-generation tools such as lex and dlg are commonly-available. PCCTS 1.xx made a parallel argument concerning PDA-based LR parsers and recursive-descent LL-based parsers. As a final justification, we note that writing lexers is trivial compared to building parsers; also, once you build a lexer you will reuse it with small modifications in the future.
Version: $Id: //depot/code/org.antlr/release/antlr-2.7.5/doc/lexer.html#1 $