
Parsing with Regexes and Beyond

by moritz (Cardinal)
on Jun 06, 2008 at 07:02 UTC ( [id://690624] : perltutorial )

The topic of parsing comes up regularly. Most texts on parsing are either very theoretical or bound to a particular tool. I'll try to introduce parsing techniques that are well suited for use with Perl, and give just a bit of theoretical background.

Intended audience: Perl hackers who can already work with regular expressions, but don't have any formal computer science education, and want to learn a bit more about the computer scientist's view on parsing, and how to parse simple, recursively defined structures.

This tutorial was first posted here, and since then has been slightly improved based on the great feedback in that discussion.

Theoretical background: Parsing

Parsing is the process of matching a text against a formal specification.

In computer science the most important question is whether the text can be derived from the specification.

In practice it is equally important to extract information according to the formal specification, like capturing parts of a regex match, or building a parse tree from a grammar.
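In Perl terms, matching answers the yes/no question and captures do the extraction. A tiny illustration (the date string here is just an invented example):

```perl
use strict;
use warnings;

# the regex is the formal specification; the captures extract
# information according to it
my $text = "Released on 2008-06-06";
if ($text =~ /(\d{4})-(\d{2})-(\d{2})/) {
    print "year=$1 month=$2 day=$3\n";
}
```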

Theoretical background: Regular Expressions

The name "Regular Expression" comes from computer science (I'll call it CS-Regex for disambiguation), and in CS it has a different meaning than in Perl.

CS-Regexes consist of concatenation, alternation, repetition (with *) and grouping. Perl regexes add many features that don't fit into the computer scientist's view of regexes, like lookaround, capturing groups and backreferences, zero-width assertions (like \b) and so on.

So CS-Regexes are not as powerful as Perl Regexes, but they have a big advantage: it's easy to build a machine that performs a CS-Regex match.

Regular expressions are well suited to matching strings with short, fixed patterns, but they can't be used for any kind of construct that uses nesting, like parentheses in arithmetic expressions, nested XML tags and the like. (You can do this with (??{ .. }) constructs in 5.8 and with recursive regexes in 5.10, but both are hard to write and maintain, and generally a pain.)
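For the curious, here is a sketch of what such a recursive regex looks like in Perl 5.10, matching balanced parentheses only. It works, but already hints at the maintenance pain:

```perl
use strict;
use warnings;
use 5.010;

# (?<paren> ... (?&paren) ... ) is a named group that recurses
# into itself; [^()]++ is a possessive quantifier (no backtracking)
my $balanced = qr/
    ^ (?<paren> \( (?: [^()]++ | (?&paren) )* \) ) $
/x;

for my $s ('(a(b)c)', '(a(b)c') {
    say "$s: ", ($s =~ $balanced ? "balanced" : "not balanced");
}
```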

CS-Regexes can be matched with a simple state machine called a DFA (deterministic finite automaton), which consists of a bunch of states; for each character from the input it is decided which state should be the next. So matching a string against a regex takes linear time (in the length of the string).
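As a sketch of the idea, here is a tiny hand-written DFA in Perl for the CS-Regex ab*c (the state numbers and transition table are invented for this example):

```perl
use strict;
use warnings;

# transition table for the CS-Regex 'ab*c':
# state 0 --a--> 1, state 1 --b--> 1, state 1 --c--> 2 (accepting)
my %delta = (
    0 => { a => 1 },
    1 => { b => 1, c => 2 },
);
my %accepting = (2 => 1);

sub dfa_match {
    my $state = 0;
    for my $char (split //, shift) {
        # one table lookup per input character: linear time overall
        return 0 unless exists $delta{$state} && exists $delta{$state}{$char};
        $state = $delta{$state}{$char};
    }
    return $accepting{$state} ? 1 : 0;
}

print "$_: ", (dfa_match($_) ? "match" : "no match"), "\n"
    for qw(ac abbbc abx);
```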

Context Free Languages

If you want to parse something like arithmetic expressions or a programming language, you need one more feature in your description language: recursion.

With recursion you can formulate grammars like this:

term     -> atom operator term
term     -> '(' term ')'
term     -> atom
operator -> '*' | '/' | '+' | '-'
atom     -> \d+

The first two definitions are recursive, so they are not "regular" any more. The computer scientist calls this a CFL, a "context free language".

Symbols like term are called non-terminals, while '*' represents a terminal symbol, i.e. one that appears verbatim in the input.

Suppose our input string is '2*3+4'; the derivation goes like this:

term
-> atom operator term
-> '2' operator term
-> '2' '*' term
-> '2' '*' (atom operator term)
-> '2' '*' ('3' operator term)
-> '2' '*' ('3' '+' term)
-> '2' '*' ('3' '+' (atom))
-> '2' '*' ('3' '+' ('4'))

You can see that our short grammar matches the given expression, but not quite the way we'd like it to. In normal notation '*' has tighter precedence than '+'; if we evaluated this parse tree, it would first evaluate 3+4, and then multiply the result by 2.

That's not very surprising because our grammar didn't contain any precedence information. We can fix this:

expression -> (term add_op)* term
term       -> (factor mul_op)* factor
factor     -> atom | '(' expression ')'
add_op     -> '+' | '-'
mul_op     -> '*' | '/'
atom       -> \d+

Now we can derive '2*3+4' this way:

expression
-> term add_op term
-> term '+' (factor)
-> term '+' ('4')
-> (factor mul_op factor) '+' ('4')
-> ((atom) mul_op (atom)) '+' ('4')
-> (('2') '*' ('3')) '+' ('4')

(Some lines contain a few derivations in parallel, just to save space. Each production that doesn't result in a terminal symbol produces a pair of parentheses to emphasize the resulting tree structure.)

The introduction of separate precedence levels for addition and multiplication leads to a correctly nested parse tree.

Lexing

Most compilers don't parse their input in one pass, but first break it up into smaller pieces called "tokens". The process is called "lexing" or "scanning".

There are two important reasons for that:

  1. Efficiency. Parsing CFLs takes O(n³) time in the worst case, while scanning can be done in linear time. By parsing a stream of tokens instead of individual characters, the n decreases.
  2. Handling whitespace and comments: the lexer can just throw away whitespace and comments, so we don't have to clutter up the grammar with whitespace-handling rules.

Lexing is normally done with a DFA, but since we're focusing on Perl we can just as well use regular expressions:

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

my @token_def = (
    [Whitespace => qr{\s+},      1],
    [Comment    => qr{#.*\n?$}m, 1],
    [AddOp      => qr{[+-]}       ],
    [MulOp      => qr{[*/]}       ],
    [Number     => qr{\d+}        ],
    [OpenParen  => qr{\(}         ],
    [CloseParen => qr{\)}         ],
);

my $input = $ARGV[0] || "2 * 3 + 4 # and a comment\n";
my @tokens;

pos($input) = 0;
while (pos($input) < length $input){
    my $matched = 0;
    for my $t (@token_def){
        my ($name, $re, $ignore_flag) = @$t;
        if ($input =~ m/\G($re)/gc){
            $matched = 1;
            push @tokens, [$name, $1] unless $ignore_flag;
            last;   # restart from the first token definition
        }
    }
    die "Syntax error at position " . pos($input) unless $matched;
}

print Dumper \@tokens;

__END__
# output reformatted for brevity:
$VAR1 = [
    [ 'Number', '2' ],
    [ 'MulOp',  '*' ],
    [ 'Number', '3' ],
    [ 'AddOp',  '+' ],
    [ 'Number', '4' ]
];

The token definition consists of a name for the token, a regular expression, and optionally a flag that indicates that the token shouldn't appear in the output.

The program loops until the end of the string is reached, trying to match all regexes in turn. As soon as a match is found, the token is pushed onto the @tokens array. (In a real compiler it would also store the current position, to allow better error messages on wrong input.)

The /c modifier on the regex ensures that a failed match doesn't reset pos($input). The \G assertion anchors the regex to the end of the previous match.
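The interplay of pos, \G, /g and /c can be seen in isolation:

```perl
use strict;
use warnings;

my $s = "abc";
$s =~ /\Ga/g;           # matches at position 0, pos advances to 1
print pos($s), "\n";

$s =~ /\Gx/gc;          # fails, but /c keeps pos where it was
print pos($s), "\n";

$s =~ /\Gx/g;           # fails, and without /c pos is reset
print defined pos($s) ? pos($s) : "undef", "\n";
```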

Implementing the parser

For the parser we need two utility functions:

sub match {
    my $expected_token = shift;
    if ($tokens[0][0] eq $expected_token){
        my $current = shift @tokens;
        return $current->[1];
    } else {
        die "Syntax error: expected $expected_token, got $tokens[0][0]\n";
    }
}

sub lookahead {
    my @expected = @_;
    no warnings 'uninitialized';
    for (0 .. $#expected){
        return 0 if $tokens[$_][0] ne $expected[$_];
    }
    return 1;
}

The production rules can be translated very easily to functions:

sub expression {
    my @res = (term());
    while (lookahead('AddOp')){
        push @res, match('AddOp');
        push @res, term();
    }
    return \@res;
}

sub term {
    my @res = (factor());
    while (lookahead('MulOp')){
        push @res, match('MulOp');
        push @res, factor();
    }
    return \@res;
}

sub factor {
    if (lookahead('OpenParen')){
        match('OpenParen');
        my $res = expression();
        match('CloseParen');
        return $res;
    } else {
        return atom();
    }
}

sub atom {
    return match('Number');
}

You can see that branches (like in factor) and repetitions (in expression and term) use a lookahead of one token to decide which branch to take. Fortunately we never have to backtrack; we're always in the right branch.

Enough talking, let's test it:

my $parse_tree = expression();
print Dumper $parse_tree;

__END__
# again output reformatted
$VAR1 = [
    [ '2', '*', '3' ],
    '+',
    [ '4' ]
];

Woot, that looks good!

The only annoying thing is that the 4 at the end is embedded in a list of its own, instead of being just a number. But that is easily fixed: replace return \@res; with return @res > 1 ? \@res : $res[0]; in both expression and term, and the tree is as shallow as we want it to be.
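A minimal sketch of the modified return in context (the helper subs below are simplified stand-ins for the match, lookahead and factor subs defined earlier, just enough to exercise the change):

```perl
use strict;
use warnings;

# stand-ins for the real helpers, preloaded with a single number
my @tokens = ([Number => '4'], [EOF => '']);
sub lookahead { $tokens[0][0] eq $_[0] }
sub match     { (shift @tokens)->[1] }
sub factor    { match('Number') }

sub term {
    my @res = (factor());
    while (lookahead('MulOp')){
        push @res, match('MulOp');
        push @res, factor();
    }
    return @res > 1 ? \@res : $res[0];   # unwrap single-element lists
}

# a lone number now comes back as a plain scalar, not a one-element list
print term(), "\n";
```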

If you input a string with parentheses, like "2*(3+4)", you'll notice that our parse tree doesn't contain any parentheses, but its structure reflects the nesting they enforced:

# parse tree for '2 * (3+4)':
[
    '2',
    '*',
    [ '3', '+', '4' ],
];

One minor gotcha still remains: suppose you parse the string 1 2. Oddly, it doesn't throw a syntax error. That's because the parser recognizes the 1 as a valid expression, and then stops. So the parser also has to recognize the end of input (traditionally called "EOF", for "end of file"), and parse until it finds EOF.

To get that working, simply add the line push @tokens, ['EOF', '']; after the while-loop of the lexer, and then parse the input string like this:

my $parse_tree = expression(); match('EOF');

Now when you run it again with 1 2 as input, it tells you Syntax error: expected EOF, got Number.

Since we have nearly written a calculator by now, we might as well finish it by writing a short sub that evaluates the parse tree:

sub execute {
    my $tree = shift;
    return $tree unless ref $tree;
    my %ops = (
        '*' => sub { $_[0] * $_[1] },
        '/' => sub { $_[0] / $_[1] },
        '+' => sub { $_[0] + $_[1] },
        '-' => sub { $_[0] - $_[1] },
    );
    my ($val, @rest) = @$tree;
    $val = execute($val);
    while (@rest){
        my $op   = shift @rest;
        my $next = execute(shift @rest);
        $val = $ops{$op}->($val, $next);
    }
    return $val;
}

print execute($parse_tree), "\n";

A bit more theory

This parser is called a "recursive descent" parser, because it uses recursion to descend into the rules that finally match terminal symbols.

It is a predictive parser, i.e. it predicts what the next token is, and it is a top-down parser.

Recursive descent parsers can't parse all context free languages, only a subset called LL(k), where k is the number of lookahead tokens (1 in our case).

Instead of predicting the next token, we could read all the tokens and push them onto a stack. When the top items on the stack correspond to the right-hand side of a grammar rule, we can remove them and push the rule (annotated with the matched tokens, of course) onto the stack.

That technique is bottom-up parsing (or LR parsing), and the languages that can be parsed this way are called "deterministic context free languages", DCFLs for short. It is more flexible, but also more complicated.

Parser generators like yacc or bison usually produce such LR parsers.
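To get a feel for the bottom-up style without a parser generator, here is a sketch of operator-precedence evaluation, a simple shift-reduce technique. It works on a flat token list like the lexer's output, and evaluates directly instead of building a parse tree:

```perl
use strict;
use warnings;

# shift values and operators onto stacks, and reduce (apply the
# top operator) whenever it binds at least as tightly as the
# incoming one
my %prec = ('+' => 1, '-' => 1, '*' => 2, '/' => 2);
my %ops  = (
    '+' => sub { $_[0] + $_[1] }, '-' => sub { $_[0] - $_[1] },
    '*' => sub { $_[0] * $_[1] }, '/' => sub { $_[0] / $_[1] },
);

sub reduce_top {
    my ($vals, $opstack) = @_;
    my $op = pop @$opstack;
    my $b  = pop @$vals;
    my $a  = pop @$vals;
    push @$vals, $ops{$op}->($a, $b);
}

sub evaluate {
    my (@vals, @opstack);
    for my $t (@_) {
        if ($t =~ /^\d+$/) {
            push @vals, $t;          # shift a value
        } else {
            reduce_top(\@vals, \@opstack)
                while @opstack && $prec{$opstack[-1]} >= $prec{$t};
            push @opstack, $t;       # shift the operator
        }
    }
    reduce_top(\@vals, \@opstack) while @opstack;
    return $vals[0];
}

print evaluate(2, '*', 3, '+', 4), "\n";   # 10
print evaluate(2, '+', 3, '*', 4), "\n";   # 14
```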

More on Recursive Descent Parsers

You will have noticed that the subs expression and term have the same structure, and only differ in the operator and the next function to be called.

One way around that duplicate code is a function factory (you'll notice that I've started to read "Higher Order Perl" ;-). Very good stuff!):

sub make_rule {
    my ($operator, $next) = @_;
    return sub {
        my @res = ($next->());
        while (lookahead($operator)){
            push @res, match($operator);
            push @res, $next->();
        }
        return @res > 1 ? \@res : $res[0];
    };
}

*term       = make_rule('MulOp', \&factor);
*expression = make_rule('AddOp', \&term);

Since you need such a rule for every precedence level, this can save you a lot of typing.

But if you write more parsers, you'll notice that a lot of work will still be duplicated.

A good alternative is a module like Parse::RecDescent which does all the work that you normally duplicate (though I admit that I've never used it myself).

Perl 6 Rules, the replacement for regular expressions, are another alternative. And don't start screaming "but it's not there yet" - it is. You can use the Parrot Grammar Engine (PGE) to write Perl 6-like rules.

A Perl 6 Rule comes in three flavours: "regex", "rule" and "token". A regex is just what you think it is: a regex, but not very regular. A "token" is a regex that doesn't backtrack, and a "rule" is a regex that doesn't backtrack and that treats every whitespace in the rule body as an optional whitespace in the input.

So this is what our grammar looks like in Perl 6, with the old one as a comparison:

grammar Arithmetic {
    # expression -> (term add_op)* term
    rule expression {
        [ <term> <add_op> ]* <term>
    }

    # term -> (factor mul_op)* factor
    rule term {
        [ <factor> <mul_op> ]* <factor>
    }

    # factor -> atom | '(' expression ')'
    rule factor {
        | <atom>
        | '(' <expression> ')'
    }

    # add_op -> '+' | '-'
    token add_op {
        <[+-]>      # that's a char class
    }

    # mul_op -> '*' | '/'
    token mul_op {
        <[*/]>
    }

    token atom {
        \d+
    }
}

# match it:
$input ~~ Arithmetic.expression;

Note that [...] are non-capturing groups, while char classes are written <[...]> now. <name> is a call to another rule, and its result is stored in a named capture with the name of the rule (which can be changed with <capturename=rulename>).

In case of a syntax error this regex will just not match, which is usually not the desired result. You can add a bit of error handling code:

rule factor {
    | <atom>
    | '(' <expression>
      [ ')' || <panic: Expected ')'> ]
}

Now when a closing parenthesis is missing you'll get a nice error message.

Perl 6 will have an even better mechanism for parsing arithmetic expressions: you can specify what an operator and what an atom is, and specify the precedence and associativity of each operator. Then Perl uses a bottom-up parser to do what you want:

# this example mixes official Perl 6 syntax and PGE syntax,
# I don't really know if they are 100% compatible

rule factor {
    | <atom>
    | '(' <expression> ')'
}

token infix:<*> { <sym> };                       # <sym> stands for a literal * here
token infix:</> is equiv('infix:*')  { <sym> };
token infix:<+> is looser('infix:*') { <sym> };
token infix:<-> is equiv('infix:+')  { <sym> };

# put parser in bottom-up mode:
rule expression is optable { ... };              # yes, '...' is valid Perl 6 ;-)
proto 'term:' is tighter('infix:*') is parsed(&factor) { ... };

We write a rule for each infix operator so that one can simply inherit from the grammar (like from a class) and override one rule without having to worry about the rest:

grammar ModernArithmetic is Arithmetic {
    token infix:</> { '÷' }   # different symbol, same rule name
}

Summary

You've seen that parsing isn't magic, and while it takes a little while to get used to it, it's not all that hard if you can stick to simple grammars.

In practice a recursive descent parser will get you very far, and it's not hard to write one. You can either do it manually, or use a module, or Perl 6's fabulous rules.

There's much more to say about parsing, for example about error handling and recovery, associativity and many other topics, but it's a start ;-)

(And as usual I couldn't resist introducing some Perl 6 stuff. You can see that from Perl 5 to Perl 6 we jump from "regular" to context free languages, which is quite a step up the Chomsky hierarchy.)

Update: incorporate ikegami's suggestion on how to handle EOF detection, and to do it at all.

Second update: added missing tokens for ( and ), hopefully clarified placement of EOF pushing

Re: Parsing with Regexes and Beyond
by ambrus (Abbot) on Jun 06, 2008 at 08:13 UTC
Re: Parsing with Regexes and Beyond
by ikegami (Patriarch) on Jun 07, 2008 at 09:11 UTC


    Small nit: qr{#.*\n?$}m can be written simply as qr{#.*\n?}

    Small problem: 3 + 4 6 5 7 is considered valid by your parser. You need to check to make sure all tokens were absorbed or return an EOF token and check if that's the next token. The latter method would remove the need for "no warnings 'uninitialized';".

    A big difference between your parser and PRD is that yours doesn't allow backtracking. With PRD, you can do "ruleA: ruleB | ruleC" without worrying if ruleB and ruleC can start with the same token. With yours, you'd need to factor out the common leading token ("ruleA: token ( ruleB | ruleC )"). But that's probably not a bad thing, since you should do that in PRD anyway for performance reasons. On the plus side, I think that gives PRD an "infinite" lookahead.

    I think you could improve performance by generating a single regexp from @token_def (/\G(?>$ws)(?>$tok1(?{...})|$tok2(?{...})|...)/g).

    Also, it would be better if the lexer was an iterator instead of doing all the lexing up front. That would decrease the memory footprint.

      Small problem: 3 + 4 6 5 7 is considered valid by your parser. You need to check to make sure all tokens were absorbed or return an EOF token and check if that's the next token. The latter method would remove the need for "no warnings 'uninitialized';".

      Very good catch, thank you. I'll update the tutorial accordingly.

      I think you could improve performance by generating a single regexp from @token_def (/\G(?>$ws)(?>$tok1(?{...})|$tok2(?{...})|...)/g).

      Since (?{...}) is marked as experimental even in perl 5.10.0 and I (re)discovered some bugs in it, I won't use it, at least not in a tutorial at this level. Thanks for the suggestion anyway, it made me think about 5.10's named captures that could be used instead.

      Also, it would be better if the lexer was an iterator instead of doing all the lexing up front. That would decrease the memory footprint.

      Indeed. I decided against it because it slightly increases complexity (or at least makes it harder to understand), but I should at least mention it.

      I guess I'll find some time tomorrow to incorporate your suggestions.

        I won't use it, at least not in a tutorial at this level.

        I decided against it because it slightly increases complexity

        And that's why I didn't code up a solution for the last two tips. They were more along the lines of "Where do we go from here?". I meant to say as much, but I was being rushed away.