Chapter 7: About Analyzing LALR(1) Grammars

This chapter describes the algorithm that is used by bisonc++. Generating parsers of course begins with a grammar to be analyzed. The analysis consists of these steps:

All the above phases are illustrated and discussed in the next sections. Additional details of the parsing process can be found in various books about compiler construction, e.g., in Aho, Sethi and Ullman's (2003) book Compilers (Addison-Wesley).

In the sections below, the following grammar is used to illustrate the various phases:


    %token NR
    
    %left '+'
    
    %%
    
    start:
        start expr
    |
        // empty
    ;
    
    expr:
        NR
    |
        expr '+' expr
    ;
        
The grammar is interesting because one of its rules contains an empty alternative and because it harbors a shift/reduce conflict. The shift/reduce conflict is solved by explicitly assigning a precedence and associativity to the '+' token.

The analysis starts by defining an additional rule, which is recognized (reduced) at end of input. This rule and the rules specified in the grammar together define what is known as the augmented grammar. In the coming sections the symbol $ is used to indicate `end of input'. From the above grammar the following augmented grammar is derived:


    1.  start:      start expr
    2.  start:      // empty
    3.  expr:       NR
    4.  expr:       expr '+' expr
    5.  start_$:    start   (input ends here)
        

bisonc++ itself produces an extensive analysis of any grammar it is offered when the option --construction is provided.

7.0.1: The FIRST Sets

The FIRST set of a grammatical symbol contains all terminal tokens that can be encountered at the beginning of input recognized as that symbol. For each grammatical symbol (terminal and nonterminal) a FIRST set can be determined as follows:

When starting this algorithm only the nonterminals need to be considered. Also, FIRST sets that the algorithm needs may not yet be available. Therefore the above algorithm iterates over all nonterminals until no further changes are observed. The symbol $ is not considered in this algorithm.

Applying the above algorithm to the rules of our grammar we get:


    nonterminal     rule                FIRST set
    ---------------------------------------------------
    start_$         start               not yet available
    start           start expr          not yet available
    start           // empty            ε
    expr            NR                  NR
    expr            expr '+' expr       NR

    changes in the next cycle:
    start           start expr          NR ε
    start           // empty            NR ε

    changes in the next cycle:
    start_$         start               NR ε

    no further changes
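The iteration shown above can be illustrated with a small sketch (Python is used here purely for illustration; bisonc++ itself is implemented in C++). The symbol `eps' stands for ε:

```python
EPS = 'eps'                       # stands for the empty alternative
TERMINALS = {'NR', '+'}

# The augmented grammar: each nonterminal maps to its production rules.
RULES = {
    'start_$': [['start']],
    'start':   [['start', 'expr'], []],      # [] is the empty alternative
    'expr':    [['NR'], ['expr', '+', 'expr']],
}

def first_sets(rules):
    first = {nt: set() for nt in rules}
    changed = True
    while changed:                # iterate until no set changes anymore
        changed = False
        for nt, prods in rules.items():
            for prod in prods:
                added = set()
                if not prod:      # empty alternative: contributes eps
                    added.add(EPS)
                for sym in prod:
                    if sym in TERMINALS:
                        added.add(sym)
                        break
                    added |= first[sym] - {EPS}
                    if EPS not in first[sym]:
                        break
                else:             # every symbol of prod could derive eps
                    if prod:
                        added.add(EPS)
                if not added <= first[nt]:
                    first[nt] |= added
                    changed = True
    return first
```

Running `first_sets(RULES)` reproduces the table above: FIRST(expr) = {NR}, while FIRST(start) and FIRST(start_$) are {NR, ε}.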

7.0.2: The States

Having determined the FIRST sets, bisonc++ determines the states of the grammar. The analysis starts at the augmented grammar's rule and proceeds until all possible states have been determined. This analysis uses the concept of the dot symbol: the dot shows the position we are at when analyzing the production rules defined by a grammar. Using the provided example grammar the analysis proceeds as follows:
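The closure and dot-advancing (`goto') steps that drive this state construction can be sketched as follows (an illustrative simplification in Python; bisonc++'s actual C++ implementation differs). Items are (lhs, rhs, dot) tuples:

```python
RULES = [
    ('start_$', ('start',)),            # the augmented rule
    ('start',   ('start', 'expr')),
    ('start',   ()),                    # the empty alternative
    ('expr',    ('NR',)),
    ('expr',    ('expr', '+', 'expr')),
]
NONTERMINALS = {'start_$', 'start', 'expr'}

def closure(items):
    """Add an item for every production of a nonterminal behind a dot."""
    items = set(items)
    todo = list(items)
    while todo:
        lhs, rhs, dot = todo.pop()
        if dot < len(rhs) and rhs[dot] in NONTERMINALS:
            for l, r in RULES:
                if l == rhs[dot] and (l, r, 0) not in items:
                    items.add((l, r, 0))
                    todo.append((l, r, 0))
    return frozenset(items)

def goto(items, symbol):
    """The state reached by shifting `symbol' (advancing the dot)."""
    return closure({(l, r, d + 1) for l, r, d in items
                    if d < len(r) and r[d] == symbol})

def build_states():
    start = closure({('start_$', ('start',), 0)})
    states, todo = [start], [start]
    while todo:                         # breadth-first state discovery
        state = todo.pop(0)
        for sym in {r[d] for _, r, d in state if d < len(r)}:
            nxt = goto(state, sym)
            if nxt and nxt not in states:
                states.append(nxt)
                todo.append(nxt)
    return states
```

For the example grammar `build_states()` discovers six states, matching states 0 through 5 listed in section 7.0.3.2.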

7.0.3: The Look-ahead Sets

In the previous section a grammar was discussed whose fifth state contains two items: one resulting in a shift-action (expr -> expr . '+' expr), the other resulting in a reduce-action (expr -> expr '+' expr .). Although this state in theory defines two different actions, in practice only one is used. This is a direct consequence of the %left '+' specification, which is explained in this and the next section.

When analyzing a grammar all states that can be reached from the augmented start rule are determined. In the current grammar's fifth state bisonc++ must decide which action to take: should it shift on '+' or should it reduce according to the item `expr -> expr '+' expr .'? What choice will bisonc++ make?

Here the fact that bisonc++ implements a parser for a Look Ahead Left to Right (1) (LALR(1)) grammar becomes relevant. Bisonc++ computes look-ahead sets to determine which alternative to select when confronted with a choice. The look-ahead set can be used to favor one action over another when generating tables for the parsing function.

Sometimes the look-ahead sets allow bisonc++ simply to remove one action from the set of possible actions. When bisonc++ is called to process the example grammar while specifying the --construction option, state five only shows the reduction and not the shift action, as bisonc++ has removed the latter action from the action set. In state five the choice is between shifting a '+' token on the stack, or reducing the stack according to the rule


        expr -> expr '+' expr
    
Here, as we will shortly see, the '+' is also an element of the look-ahead set of the reducible item, creating a conflict: what to do on '+'?

In this case the grammar designer has provided bisonc++ with a way out: the %left directive tells bisonc++ to favor a reduction over a shift, and so it removed expr -> expr . '+' expr from its set of actions in state five.

7.0.3.1: The look-ahead token

The bisonc++ parser does not always perform a reduction when a state is reached where an item has its dot position beyond the last element of its production rule. For most languages such a simple strategy is incorrect. Instead, when a reduction is possible, the parser sometimes `looks ahead' to the next token to decide what to do.

Whenever a token is read it is not immediately shifted; first it becomes the look-ahead token, which is not yet pushed on the stack. This allows the parser to perform one or more reductions while the look-ahead token is still waiting to be processed. Only when all available reductions have been performed is the look-ahead token shifted on the stack. The phrase `all available reductions' does not necessarily mean all possible reductions: depending on the look-ahead token, a shift rather than a reduce may be performed in states in which both actions are possible.

Here is a simple case where a look-ahead token is required. The production rules define expressions which may contain binary addition operators and postfix unary factorial operators (`!'), as well as parentheses for grouping expressions:


    expr:     
        term '+' expr
    | 
        term
    ;

    term:     
        '(' expr ')'
    | 
        term '!'
    | 
        NUMBER
    ;
        
Suppose that the tokens `1 + 2' have been read and shifted; what should be done? If the following token is `)', then the first three tokens must be reduced, forming an expr. This is the only valid course, because shifting the `)' would produce the sequence of symbols

    term ')'
    
which is not syntactically correct.

But if the next token is `!', then that token must be shifted, so that `2 !' can be reduced to a term. If the parser instead performed a reduction then `1 + 2' would become an expr, and the `!' could not be shifted, because doing so would result in the sequence


    expr '!'
    
which is also syntactically incorrect.

7.0.3.2: How look-ahead sets are determined

Once the items of all the grammar's states have been determined, the LA (look-ahead) sets for the states' items are computed. Starting from the LA set of the kernel item of state 0 (representing the augmented grammar's production rule S_$: . S, where S is the grammar's start rule) the LA sets of all items of all of the grammar's states are determined. By definition, the LA set of state 0's kernel item equals {$}, representing end of input.

Starting from the function State::determineLAsets, which is called for state 0, the LA sets of all items of all states are computed.

For each state, the LA sets of its items are computed first. Once they have been computed, the LA sets of items from where transitions to other states are possible are then propagated to the matching kernel items of those destination states. When the LA sets of kernel items of those destination states are enlarged then their state indices are added to a set todo. LA sets of the items of states whose indices are stored in the todo set are (re)computed (by calling determineLAsets for those states) until todo is empty, at which point all LA sets have been computed. Initially todo only contains 0, the index of the initial state, representing the augmented grammar's production rule.

To compute the LA sets of a state's items the LA set of each of its kernel items is distributed (by the member State::distributeLAsetOf) over the items which are implied by the item being considered. E.g., for item X: a . Y z, where a and z are any sequence of grammar symbols and X and Y are nonterminal symbols, all of Y's production rules are added as new items to the current state.

Then the member distributeLAsetOfItem(idx) matches the item's rule specification with the specification a.Bc, where a and c are (possibly empty) sequences of grammatical symbols, and B is a (possibly absent) nonterminal symbol appearing immediately to the right of the item's dot position. If B is absent then there are no additional production rules and distributeLAsetOf may return. Otherwise the set b = FIRST(c) is computed. This set holds all symbols which may follow B. If b contains ε (the element representing the empty production), then the symbols in the item's currently defined LA set may also follow B. In that case ε is removed from b, and the item's currently defined LA set is added to b. Next, the LA sets of all items representing a production rule of B are inspected: if b contains elements not yet in the LA set of such an item, then those elements are added to that item's LA set. Finally, distributeLAsetOfItem is recursively called for those items whose LA sets were enlarged.

Once the LA sets of the items of a state have thus been computed, inspectTransitions is called to propagate the LA sets of items from where transitions to other states are possible to the affected (kernel) items of those other (destination) states. The member inspectTransitions inspects all Next objects of the current state's d_nextVector. Next objects provide

If the LA set of a destination item can be enlarged from the LA set of the source item then the LA sets of the destination state's items must be recomputed. This is realized by inserting the destination state's index into the `todo' set.

To illustrate an LA-set computation we will now compute the LA sets of (some of) the items of the states of the grammar introduced at the beginning of this chapter. Its augmented grammar consists of the following production rules:


    1.  start:      start expr
    2.  start:      // empty
    3.  expr:       NR
    4.  expr:       expr '+' expr
    5.  start_$:    start
        
When analyzing this grammar, we found the following five states, consisting of several items and transitions (kernel items are marked with K following their item indices). Next to the items, where applicable, the goto-table is shown: the state to go to when the mentioned grammatical symbol has been recognized:

                                            Goto table
                                            -----------
State 0:                                    start
    0K:     start_$ ->  . start               1
    1:      start   ->  . start expr          1
    2:      start   ->  . 

State 1:                                    expr    NR
    0K:     start_$ ->  start  .         
    1K:     start   ->  start  . expr         2
    2:      expr    ->  . NR                         3
    3:      expr    ->  . expr '+' expr

State 2:                                     '+'
    0K:     start   -> start expr  .
    1K:     expr    -> expr  . '+' expr       4

State 3: 
    0K:     expr    -> NR  .

State 4:                                    expr    NR
    0K:     expr    -> expr '+'  . expr       5
    1:      expr    -> . NR                          3
    2:      expr    -> . expr '+'  expr       5

State 5:                                     '+'
    0K:     expr    -> expr '+' expr  .     
    1K:     expr    -> expr . '+' expr        4
    

Item 0 of state 0 by definition has LA set {$}, and the LA computation therefore always starts at item 0 of state 0. The interesting part of the LA set computation is encountered in the recursive member distributeLAsetsOfItem:


distributeLAsetsOfItem(0)
  start_$ -> . start:     LA: {$}, B: start, c: {}, so b: {$}

  items 1 and 2 refer to production rules of B (start) and are inspected:

  1: LA(1): {}: b contains unique elements. Therefore: 
    LA(1) = {$}
    distributeLAsetsOfItem(1):
      start -> . start expr: LA: {$}, B: start, c: {expr}, so b: {NR}
      inspect items 1 and 2 as they refer to production rules of B (start):

      1: LA(1): {}: b contains unique elements. Therefore: 
        LA(1) = {$,NR}
        distributeLAsetsOfItem(1)
          start -> . start expr: LA: {$,NR}, B: start, c: {expr}, so b: {NR}
          inspect items 1 and 2 as they refer to prod. rules of B (start):

          1: LA(1): {$,NR}, so b does not contain unique elements: done

          2: LA(2): {}, b contains unique elements
            LA(2) = {NR}
            distributeLAsetsOfItem(2)
              start -> .: LA: {NR}, B: -, c: {}, so b: {NR}
              inspect items 1 and 2 as they refer to prod. rules of B (start):

              1: LA(1): {$,NR}, b does not contain unique elements: done

              2: LA(2): {NR}, so b does not contain unique elements: done

      2: LA(2): {NR}, so b does not contain unique elements: done

  2: LA(2): {NR}: b contains unique elements. Therefore:
    LA(2) = {$,NR}
    distributeLAsetsOfItem(2)
      start -> .: LA: {$,NR}, B: -, c: {}
      B empty, so return.
    
So, item 0 has LA set {$}, items 1 and 2 have LA sets {$,NR}.
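The distribution traced above can be mimicked in a few lines. The sketch below hardwires state 0's three items and FIRST(expr) = {NR} from section 7.0.1; the name `distribute' is illustrative only, not bisonc++'s actual member function:

```python
# FIRST sets from section 7.0.1; 'eps' stands for the empty production.
FIRST = {'expr': {'NR'}, 'start': {'NR', 'eps'}}

# state 0's items as (lhs, rhs) pairs; all dots are at position 0
ITEMS = [('start_$', ('start',)),
         ('start',   ('start', 'expr')),
         ('start',   ())]
LA = [{'$'}, set(), set()]        # by definition LA(0) = {$}

def distribute(idx):
    lhs, rhs = ITEMS[idx]
    dot = 0                       # all of state 0's items have dot 0
    if dot == len(rhs) or rhs[dot] not in FIRST:
        return                    # no nonterminal B behind the dot
    B, c = rhs[dot], rhs[dot + 1:]
    b = set(FIRST[c[0]]) if c else {'eps'}   # b = FIRST(c)
    if 'eps' in b:                # eps: the item's own LA set follows too
        b = (b - {'eps'}) | LA[idx]
    for i, (l, r) in enumerate(ITEMS):
        if l == B and not b <= LA[i]:   # b holds unique elements
            LA[i] |= b
            distribute(i)         # recurse for the enlarged item

distribute(0)
```

Running this reproduces the result stated above: LA(0) = {$}, while LA(1) and LA(2) both become {$, NR}.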

The next step involves propagating the LA sets to kernel items of the states to where transitions are possible:

Following this LA set propagation the LA sets of all items of state 1 are computed, which in turn is followed by LA set propagation to other states (states 2 and 3), and so on.

In this grammar there are no transitions from a state to that same state (i.e., transitions from state x to state x). If such transitions are encountered they can be ignored by inspectTransitions, as the LA sets of the items of a state have already been computed by the time inspectTransitions is called.

7.0.4: The Final Transition Tables

7.0.4.1: Preamble

The member function parse() is implemented using a finite-state machine. The values pushed on the parser stack are not simply token type codes; they represent the entire sequence of terminal and nonterminal symbols at or near the top of the stack. The current state collects all the information about previous input which is relevant to deciding what to do next.

Each time a look-ahead token is read, the current parser state together with the current (not yet processed) token are looked up in a table. This table entry can say Shift the token. This also specifies a new parser state, which is then pushed onto the top of the parser stack. Or it can say Reduce using rule number n. This means that a certain number of tokens or nonterminals are removed from the stack, and that the rule's nonterminal becomes the `next token' to be considered. That `next token' is then used in combination with the state then at the stack's top, to determine the next state to consider. This (next) state is then again pushed on the stack, and a new token is requested from the lexical scanner, and the process repeats itself.

There are two special situations the parsing algorithm must consider:

Once bisonc++ has successfully analyzed the grammar it generates the tables that are used by the parsing function to parse input according to the provided grammar. Each state results in a state transition table. For the example grammar used so far there are five states. Each table consists of rows having two elements. The meaning of the elements depends on their position in the table.

Here are the tables defining the five states of the example grammar as they are generated by bisonc++ in the file containing the parsing function:

    SR_ s_0[] =
    {
        { { DEF_RED}, {  2} },         
        { {     258}, {  1} }, // start
        { {       0}, { -2} },         
    
    };
    
    SR_ s_1[] =
    {
        { { REQ_TOKEN}, {            4} },        
        { {       259}, {            2} }, // expr
        { {       257}, {            3} }, // NR  
        { {     EOF_}, { PARSE_ACCEPT} },        
        { {         0}, {            0} },        
    
    };
    
    SR_ s_2[] =
    {
        { { REQ_DEF}, {  2} },       
        { {      43}, {  4} }, // '+'
        { {       0}, { -1} },       
    
    };
    
    SR_ s_3[] =
    {
        { { DEF_RED}, {  1} }, 
        { {       0}, { -3} }, 
    
    };
    
    SR_ s_4[] =
    {
        { { REQ_TOKEN}, { 3} },        
        { {       259}, { 5} }, // expr
        { {       257}, { 3} }, // NR  
        { {         0}, { 0} },        
    
    };
    
    SR_ s_5[] =
    {
        { { REQ_DEF}, {  1} }, 
        { {       0}, { -4} }, 
    
    };
        

7.0.5: Processing Input

Bisonc++ implements the parsing function in the member function parse(). This function obtains its tokens from the member lex() and processes all tokens until a syntactic error, a non-recoverable error, or the end of input is encountered.

The algorithm used by parse() is the same, irrespective of the used grammar. In fact, the parse() member's behavior is completely determined by the tables generated by bisonc++.

The parsing algorithm is known as the shift-reduce (S/R) algorithm, and it allows parse() to perform two actions while processing series of tokens:

The parsing function maintains two stacks, which are manipulated by the above two actions: a state stack and a value stack. These stacks are not accessible to the parser: they are private data structures defined in the parser's base class. The parsing member parse() may use the following member functions to manipulate these stacks:

Apart from the state- and semantic stacks, the S/R algorithm itself sometimes needs to push a token on a two-element stack. Rather than using a formal stack, two variables (d_token_ and d_nextToken_) are used to implement this little token-stack. The member function pushToken_() pushes a new value on the token stack, the member popToken_() pops a previously pushed value from the token stack. At any time, d_token_ contains the topmost element of the token stack.

The member nextToken() determines the next token to be processed. If the token stack contains a value it is returned. Otherwise, lex() is called to obtain the next token to be pushed on the token stack.
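The two-element token stack can be sketched as follows (an illustrative Python sketch; the names d_token_, d_nextToken_, pushToken_, popToken_ and nextToken mirror the text, but the class itself is hypothetical):

```python
class TokenStack:
    def __init__(self, lex):
        self.d_token_ = None          # topmost element; None: empty
        self.d_nextToken_ = None
        self.lex = lex                # callable producing the next token

    def pushToken_(self, token):
        """Push a new value on the two-element token stack."""
        self.d_nextToken_ = self.d_token_
        self.d_token_ = token

    def popToken_(self):
        """Pop a previously pushed value from the token stack."""
        token = self.d_token_
        self.d_token_, self.d_nextToken_ = self.d_nextToken_, None
        return token

    def nextToken(self):
        """Return the pending token, calling lex() if the stack is empty."""
        if self.d_token_ is None:
            self.d_token_ = self.lex()
        return self.d_token_
```

For example, after `nextToken()` obtains a token from lex(), `pushToken_` may stack a nonterminal on top of it during a reduction, after which `popToken_` restores the original token.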

The member lookup() looks up the current token in the current state's SR_ table. For this a simple linear search algorithm is used. If searching fails to find an action for the token an UNEXPECTED_TOKEN_ exception is thrown, which starts the error recovery. If an action was found, it is returned.

Rules may have actions associated with them. These actions are executed when a grammatical rule has been completely recognized. This is always at the end of a rule: mid-rule actions are converted by bisonc++ into pseudo nonterminals, replacing mid-rule action blocks by these pseudo nonterminals. The pseudo nonterminals show up in the verbose grammar output as rules having LHSs starting with #. So, once a rule has been recognized its action (if defined) is executed. For this the member function executeAction() is available.

Finally, the token stack can be cleared using the member clearin().

Now that the relevant support functions have been introduced, the S/R algorithm itself turns out to be a fairly simple algorithm. First, the parser's stack is initialized with state 0 and the token stack is cleared. Then, in a never ending loop:

The following table shows the S/R algorithm in action when the example grammar is given the input 3 + 4 + 5. The first column shows the (remaining) input, the second column the current token stack (with - indicating an empty token stack), the third column the state stack. The fourth column provides a short description. The leftmost elements of the stacks represent the tops of the stacks. The information shown below is also (in more elaborate form) shown when the --debug option is provided to Bisonc++ when generating the parsing function.


    remaining input     token stack     state stack     description
    ----------------------------------------------------------------
    3 + 4 + 5           -               0               initialization
    3 + 4 + 5           start           0               reduction by rule 2
    3 + 4 + 5           -               1 0             shift `start'
    + 4 + 5             NR              1 0             obtain NR token
    + 4 + 5             -               3 1 0           shift NR
    + 4 + 5             expr            1 0             reduction by rule 3
    + 4 + 5             -               2 1 0           shift `expr'
    4 + 5               +               2 1 0           obtain `+' token
    4 + 5               -               4 2 1 0         shift `+'
    + 5                 NR              4 2 1 0         obtain NR token
    + 5                 -               3 4 2 1 0       shift NR
    + 5                 expr            4 2 1 0         reduction by rule 3
    + 5                 -               5 4 2 1 0       shift `expr'
    5                   +               5 4 2 1 0       obtain `+' token
    5                   expr +          1 0             reduction by rule 4
    5                   +               2 1 0           shift `expr'
    5                   -               4 2 1 0         shift `+'
                        NR              4 2 1 0         obtain NR token
                        -               3 4 2 1 0       shift NR
                        expr            4 2 1 0         reduction by rule 3
                        -               5 4 2 1 0       shift `expr'
                        EOF             5 4 2 1 0       obtain EOF
                        expr EOF        1 0             reduction by rule 4
                        EOF             2 1 0           shift `expr'
                        start EOF       0               reduction by rule 1
                        EOF             1 0             shift `start'
                        EOF             1 0             ACCEPT
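The behavior shown in this table can be mimicked by a small table-driven sketch. The dictionaries below transcribe the SR_ tables of section 7.0.4 (token codes 257 (NR), 258 (start), 259 (expr), 43 ('+')); the function and variable names are illustrative, not bisonc++'s, and the sketch may fetch its look-ahead token slightly earlier than the table above shows:

```python
EOF = 0
NR, START, EXPR, PLUS = 257, 258, 259, 43

# actions per state: symbol -> next state; 'red' -> rule; 'accept'
TABLES = {
    0: {'red': 2, START: 1},
    1: {EXPR: 2, NR: 3, EOF: 'accept'},
    2: {PLUS: 4, 'red': 1},
    3: {'red': 3},
    4: {EXPR: 5, NR: 3},
    5: {'red': 4},
}
# rule -> (nonterminal produced, number of states popped)
RULES = {1: (START, 2), 2: (START, 0), 3: (EXPR, 1), 4: (EXPR, 3)}

def parse(tokens):
    tokens = iter(tokens)
    states = [0]                       # state stack; top is states[-1]
    token = None                       # the look-ahead token, when fetched
    while True:
        actions = TABLES[states[-1]]
        if token is None and any(isinstance(k, int) for k in actions):
            token = next(tokens, EOF)  # fetch a look-ahead token
        if token in actions:
            if actions[token] == 'accept':
                return True
            states.append(actions[token])   # shift the token
            token = None
        elif 'red' in actions:         # reduce, keeping the look-ahead
            nonterm, pop = RULES[actions['red']]
            del states[len(states) - pop:]
            states.append(TABLES[states[-1]][nonterm])
        else:
            return False               # syntactic error
```

With the token stream of the example, `parse([NR, PLUS, NR, PLUS, NR])` passes through the same shifts and reductions as the trace above and accepts.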

7.1: Shift/Reduce Conflicts

Suppose we are parsing a language which has if and if-else statements, with a pair of rules like this:

    if_stmt:
        IF '(' expr ')' stmt
    | 
        IF '(' expr ')' stmt ELSE stmt
    ;
        
Here we assume that IF and ELSE are terminal symbols for specific keywords, and that expr and stmt are defined nonterminals.

When the ELSE token is read and becomes the look-ahead token, the contents of the stack (assuming the input is valid) are just right for reduction by the first rule. But it is also legitimate to shift the ELSE, because that would lead to eventual reduction by the second rule.

This situation, where either a shift or a reduction would be valid, is called a shift/reduce conflict. Bisonc++ is designed to resolve these conflicts by implementing a shift, unless otherwise directed by operator precedence declarations. To see the reason for this, let's contrast it with the other alternative.

Since the parser prefers to shift the ELSE, the result is to attach the else-clause to the innermost if-statement, making these two inputs equivalent:


    if (x) if (y) win(); else lose();

    if (x) 
    {
        if (y) win(); else lose(); 
    }
        
But if the parser would perform a reduction whenever possible rather than a shift, the result would be to attach the else-clause to the outermost if-statement, making these two inputs equivalent:

    if (x) if (y) win(); else lose();

    if (x) 
    {
        if (y) win(); 
    }
    else 
        lose();
        
The conflict exists because the grammar as written is ambiguous: either parsing of the simple nested if-statement is legitimate. The established convention is that these ambiguities are resolved by attaching the else-clause to the innermost if-statement; this is what bisonc++ accomplishes by implementing a shift rather than a reduce. This particular ambiguity was first encountered in the specifications of Algol 60 and is called the dangling else ambiguity.

To avoid warnings from bisonc++ about predictable, legitimate shift/reduce conflicts, use the %expect n directive. There will be no warning as long as the number of shift/reduce conflicts is exactly n. See section 4.5.6.

The definition of if_stmt above is solely to blame for the conflict. But a plain stmt rule consisting of just the two recursive alternatives can never match actual input, since there is no way for the grammar to eventually derive a sentence this way. Adding one non-recursive alternative is enough to convert the grammar into one that does derive sentences. Here is a complete bisonc++ input file that actually shows the conflict:

%token IF ELSE VAR

%%

stmt:     
    VAR ';'
|
    IF '(' VAR ')' stmt
|
    IF '(' VAR ')' stmt ELSE stmt
;


Looking again at the dangling else problem, note that there are two ways to handle the stmt production IF '(' VAR ')' stmt: depending on the input that is provided it could either be reduced to a stmt, or the parser could continue to consume input by processing an ELSE token, eventually resulting in the recognition of IF '(' VAR ')' stmt ELSE stmt as a stmt.

There is little we can do but resort to %expect to handle the dangling else problem. The default handling is what most people intuitively expect, and so in this case using %expect 1 is an easy way to prevent bisonc++ from reporting the shift/reduce conflict. But shift/reduce conflicts are most often solved by disambiguating rules specifying precedence or associativity, usually in the context of arithmetic expressions, as discussed in the next sections.

However, shift/reduce conflicts can also be observed in grammars where a state contains both items that can be reduced to a certain nonterminal and items, belonging to production rules of a completely different nonterminal, in which a shift is possible. Here is an example of such a grammar:

    %token  ID 
    %left  '-'
    %left  '*'
    %right UNARY
    
    %%
    
    expr:
        expr '-' term
    | 
        term
    ;
    
    term:
        term '*' factor
    | 
        factor
    ;
    
    factor:
       '-' expr %prec UNARY
    | 
        ID
    ;

Why such grammars show shift/reduce conflicts, and how these conflicts are solved, is discussed in the next section.

7.2: Operator Precedence

Shift/reduce conflicts are frequently encountered in grammars specifying rules of arithmetic expressions. Here shifting is not always the preferred resolution; the bisonc++ directives for operator precedence allow you to specify when to shift and when to reduce. How and when to do so is discussed next.

7.2.1: When Precedence is Needed

Consider the following ambiguous grammar fragment (ambiguous because the input `1 - 2 * 3' can be parsed in two different ways):

    expr:     
        expr '-' expr
    |
        expr '*' expr
    | 
        expr '<' expr
    | 
        '(' expr ')'
    ...
    ;
        
Suppose the parser has seen the tokens `1', `-' and `2'; should it reduce them via the rule for the subtraction operator? It depends on the next token. Of course, if the next token is `)', we must reduce; shifting is invalid because no single rule can reduce the token sequence `- 2 )' or anything starting with that. But if the next token is `*' or `<', we have a choice: either shifting or reduction would allow the parse to complete, but with different results.

To decide which one bisonc++ should do, we must consider the results. If the next operator token op is shifted, then it must be reduced first in order to permit another opportunity to reduce the difference. The result is (in effect) `1 - (2 op 3)'. On the other hand, if the subtraction is reduced before shifting op, the result is `(1 - 2) op 3'. Clearly, then, the choice of shift or reduce should depend on the relative precedence of the operators `-' and op: `*' should be shifted first, but not `<'.

What about input such as `1 - 2 - 5'; should this be `(1 - 2) - 5' or should it be `1 - (2 - 5)'? For most operators we prefer the former, which is called left association. The latter alternative, right association, is desirable for, e.g., assignment operators. The choice of left or right association is a matter of whether the parser chooses to shift or reduce when the stack contains `1 - 2' and the look-ahead token is `-': shifting results in right-associativity.

7.2.2: Specifying Operator Precedence

Bisonc++ allows you to specify these choices with the operator precedence directives %left and %right. Each such directive contains a list of tokens, which are operators whose precedence and associativity is being declared. The %left directive makes all those operators left-associative and the %right directive makes them right-associative. A third alternative is %nonassoc, which declares that it is a syntax error to find the same operator twice `in a row'. However, such input is not currently (0.98.004) treated as an error by bisonc++: instead, %nonassoc and %left are handled identically.

The relative precedence of different operators is controlled by the order in which they are declared. The first %left or %right directive in the file declares the operators whose precedence is lowest, the next such directive declares the operators whose precedence is a little higher, and so on.

7.2.3: Precedence Examples

In our example, we would want the following declarations:

    %left '<'
    %left '-'
    %left '*'
        
In a more complete example, which supports other operators as well, we would declare them in groups of equal precedence. For example, '+' is declared with '-':

    %left '<' '>' '=' NE LE GE
    %left '+' '-'
    %left '*' '/'
        
(Here NE and so on stand for the operators for `not equal' and so on. We assume that these tokens are more than one character long and therefore are represented by names, not character literals.)

7.2.4: How Precedence Works

The first effect of the precedence directives is to assign precedence levels to the terminal symbols declared. The second effect is to assign precedence levels to certain rules: each rule gets its precedence from the last terminal symbol mentioned in the components. (You can also specify explicitly the precedence of a rule. See section 7.3).

Finally, the resolution of conflicts works by comparing the precedence of the rule being considered with that of the look-ahead token. If the token's precedence is higher, the choice is to shift. If the rule's precedence is higher, the choice is to reduce. If they have equal precedence, the choice is made based on the associativity of that precedence level. The verbose output file made by `-V' (see section 9) shows how each conflict was resolved.

Not all rules and not all tokens have precedence. If either the rule or the look-ahead token has no precedence, then the default is to shift.
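The resolution procedure described in the previous two paragraphs can be summarized in a small sketch, using the precedence declarations %left '<', %left '-', %left '*' from the examples (a higher level means a higher precedence). This is an illustration of the stated rules, not bisonc++'s actual code:

```python
# precedence level and associativity per operator, mirroring
#   %left '<'      (lowest)
#   %left '-'
#   %left '*'      (highest)
PRECEDENCE = {'<': (1, 'left'), '-': (2, 'left'), '*': (3, 'left')}

def resolve(rule_op, lookahead):
    """Return 'shift' or 'reduce' for a shift/reduce conflict between a
    rule whose precedence comes from rule_op and a look-ahead token."""
    if rule_op not in PRECEDENCE or lookahead not in PRECEDENCE:
        return 'shift'            # no precedence: the default is to shift
    rule_prec, assoc = PRECEDENCE[rule_op]
    token_prec, _ = PRECEDENCE[lookahead]
    if token_prec > rule_prec:
        return 'shift'            # token's precedence is higher
    if token_prec < rule_prec:
        return 'reduce'           # rule's precedence is higher
    # equal precedence: associativity decides (left: reduce)
    return 'shift' if assoc == 'right' else 'reduce'
```

With the stack holding `1 - 2', `resolve('-', '*')` shifts, `resolve('-', '<')` reduces, and `resolve('-', '-')` reduces because '-' was declared left-associative.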

7.2.5: Rule precedence

Consider the following (somewhat peculiar) grammar:
    %token  ID 
    %left  '-'
    %left  '*'
    %right UNARY
    
    %%
    
    expr:
        expr '-' term
    | 
        term
    ;
    
    term:
        term '*' factor
    | 
        factor
    ;
    
    factor:
       '-' expr %prec UNARY
    | 
        ID
    ;

Even though operator precedence and association rules are used the grammar still displays a shift/reduce conflict. One of the grammar's states consists of the following two items:


    0: expr -> term  .   
    1: term -> term  . '*' factor
        
and bisonc++ reduces according to item 0, dropping item 1, rather than shifting '*' and proceeding with item 1.

When considering states where shift/reduce conflicts are encountered, the `shiftable' items of these states shift when encountering terminal tokens that are also in the look-ahead sets of the reducible items of these states. In the above example item 1 shifts when '*' is encountered, but '*' is also an element of the look-ahead set of item 0. Bisonc++ must now decide what to do. In cases we've seen earlier bisonc++ could make that decision because the reducible item itself had a well-known precedence: the precedence of a reducible item is defined as the precedence of its production rule. Item 0 in the above example is an item of the rule expr -> term.

The precedence of a production rule is defined as follows:

Since expr -> term contains neither a terminal token nor a %prec directive, its precedence is the maximum possible precedence. Consequently, in the above state the shift/reduce conflict is solved by reducing rather than shifting.

A final remark as to why the above grammar is peculiar: it combines precedence and associativity specifying directives with auxiliary nonterminals that may be useful conceptually (or when implementing an expression parser `by hand'), but which are not required when defining grammars for bisonc++. The following grammar does not use term and factor, but recognizes the same language as the above `peculiar' grammar without reporting any shift/reduce conflict:

    %token  ID 
    %left  '-'
    %left  '*'
    %right UNARY
    
    %%
    
    expr:
        expr '-' expr
    |
        expr '*' expr
    | 
       '-' expr %prec UNARY
    | 
        ID
    ;




7.3: Context-Dependent Precedence

Often the precedence of an operator depends on the context. This sounds outlandish at first, but it is really very common. For example, a minus sign typically has a very high precedence as a unary operator, and a somewhat lower precedence (lower than multiplication) as a binary operator.

The bisonc++ precedence directives, %left, %right and %nonassoc, can only be used once for a given token; so a token has only one precedence declared in this way. For context-dependent precedence, you need to use an additional mechanism: the %prec modifier for rules.

The %prec modifier declares the precedence of a particular (non-empty) rule by specifying a terminal symbol whose precedence should be used for that rule. It's not necessary for that symbol to appear otherwise in the rule. The modifier's syntax is:

%prec terminal-symbol

and it is written after the components of the rule. Its effect is to assign the rule the precedence of terminal-symbol, overriding the precedence that would be deduced for it in the ordinary way. The altered rule precedence then affects how conflicts involving that rule are resolved (see section Operator Precedence).

Here is how %prec solves the problem of unary minus. First, declare a precedence for a fictitious terminal symbol named UMINUS. There are no tokens of this type, but the symbol serves to stand for its precedence:


    ...
    %left '+' '-'
    %left '*'
    %left UMINUS

Now the precedence of UMINUS can be used in specific rules:

 
    exp:
        ...
        | exp '-' exp
        ...
        | '-' exp %prec UMINUS

7.4: Reduce/Reduce Conflicts

A reduce/reduce conflict occurs if there are two or more rules that apply to the same sequence of input. This usually indicates a serious error in the grammar.

For example, here is an erroneous attempt to define a sequence of zero or more words:

    %stype char *
    %token WORD

    %%

    sequence: 
        // empty 
        { 
            cout << "empty sequence\n"; 
        }
    | 
        maybeword
    | 
        sequence WORD
        { 
            cout << "added word " << $2 << endl;
        }
    ;

    maybeword: 
        // empty 
        { 
            cout << "empty maybeword\n"; 
        }
    | 
        WORD
        { 
            cout << "single word " << $1 << endl;
        }
    ;

The error is an ambiguity: there is more than one way to parse a single word into a sequence. It could be reduced to a maybeword and then into a sequence via the second rule. Alternatively, nothing-at-all could be reduced into a sequence via the first rule, and this could be combined with the word using the third rule for sequence.

There is also more than one way to reduce nothing-at-all into a sequence. This can be done directly via the first rule, or indirectly via maybeword and then the second rule.

You might think that this is a distinction without a difference, because it does not change whether any particular input is valid or not. But it does affect which actions are run. One parsing order runs the second rule's action; the other runs the first rule's action and the third rule's action. In this example, the output of the program changes.

Bisonc++ resolves a reduce/reduce conflict by choosing to use the rule that appears first in the grammar, but it is very risky to rely on this. Every reduce/reduce conflict must be studied and usually eliminated. Here is the proper way to define sequence:


    sequence: 
        // empty 
        { 
            cout << "empty sequence\n"; 
        }
    | 
        sequence WORD
        { 
            cout << "added word " << $2 << endl;
        }
    ;
    

Here is another common error that yields a reduce/reduce conflict:

 
    sequence: 
        // empty 
        | sequence words
        | sequence redirects
    ;

    words:    
        // empty 
        | words word
    ;

    redirects:
        // empty
        | redirects redirect
    ;
    

The intention here is to define a sequence containing either word or redirect nonterminals. The individual definitions of sequence, words and redirects are error-free, but the three together make a subtle ambiguity: even an empty input can be parsed in infinitely many ways!

Consider: nothing-at-all could be a words. Or it could be two words in a row, or three, or any number. It could equally well be a redirects, or two, or any number. Or it could be a words followed by three redirects and another words. And so on.

Here are two ways to correct these rules. First, to make it a single level of sequence:


    sequence: 
        // empty
        | sequence word
        | sequence redirect
    ;
    

Second, to prevent either a words or a redirects from being empty:


    sequence: 
        // empty
        | sequence words
        | sequence redirects
    ;

    words:    
        word
        | words word
    ;

    redirects:
        redirect
        | redirects redirect
    ;
    

7.5: Mysterious Reduce/Reduce Conflicts

Sometimes reduce/reduce conflicts occur that are puzzling at first sight. Here is an example:

    %token ID
    
    %%
    def:    
        param_spec return_spec ','
    ;

    param_spec:
        type
    |    
        name_list ':' type
    ;

    return_spec:
        type
    |
        name ':' type
    ;

    type:
        ID
    ;

    name:
        ID
    ;

    name_list:
        name
    |
        name ',' name_list
    ;
        
It would seem that this grammar can be parsed with only a single look-ahead token: when a param_spec is being read, an ID is a name if a comma or colon follows, or a type if another ID follows. In other words, this grammar is LR(1).

However, bisonc++, like most parser generators, cannot actually handle all LR(1) grammars. In this grammar two contexts, one after an ID at the beginning of a param_spec and another one at the beginning of a return_spec, are similar enough for bisonc++ to assume that they are identical. They appear similar because the same set of rules would be active--the rule for reducing to a name and that for reducing to a type. Bisonc++ is unable to determine at that stage of processing that the rules would require different look-ahead tokens in the two contexts, so it makes a single parser state for them both. Combining the two contexts causes a conflict later. In parser terminology, this occurrence means that the grammar is not LALR(1).

In general, it is better to fix deficiencies than to document them. But this particular deficiency is intrinsically hard to fix; parser generators that can handle LR(1) grammars are hard to write and tend to produce parsers that are very large. In practice, bisonc++ is more useful the way it's currently operating.

When the problem arises, you can often fix it by identifying the two parser states that are being confused, and adding something to make them look distinct. In the above example, adding one rule to return_spec as follows makes the problem go away:


    %token BOGUS
    ...
    %%
    ...
    return_spec:
        type
    |    
        name ':' type
    |    
        ID BOGUS        // This rule is never used. 
    ;
        
This corrects the problem because it introduces the possibility of an additional active rule in the context after the ID at the beginning of return_spec. This rule is not active in the corresponding context in a param_spec, so the two contexts receive distinct parser states. As long as the token BOGUS is never generated by the parser's member function lex(), the added rule cannot alter the way actual input is parsed.

In this particular example, there is another way to solve the problem: rewrite the rule for return_spec to use ID directly instead of via name. This also causes the two confusing contexts to have different sets of active rules, because the one for return_spec activates the altered rule for return_spec rather than the one for name.


    param_spec:
        type
    |    
        name_list ':' type
    ;

    return_spec:
        type
    |    
        ID ':' type
    ;