http://qs321.pair.com?node_id=822898


in reply to Reverse regexp a regexp?

Is there any "reverse regex" module, code or information available which explains how to get a regexp back to a fully conditioned string in Perl? Like the regexp /^http:\/\/(?:www.)?youtube.com\/user\/(.*)\/$// could get translated back to an original string like http://www.youtube.com/user/username ?
This is possible by converting a regex into some sort of finite automaton and doing a traversal on it. At one point I had been working on a suite of modules that would let one manipulate regular expressions and automata in this way, with these kinds of things as a goal (tinkering with Perl regexes). Unfortunately, the development has ground to a halt, and we never got quite that far. I know that other programming languages have more developed tools for this kind of thing, specifically JFLAP for Java. It might be worth checking out.
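To give a feel for the traversal step, here is a tiny Perl sketch. The automaton below is hand-built rather than derived from a regex (the state names and alphabet are made up), so it only illustrates the walking part: represent the machine as a hash of transitions and search it breadth-first until an accepting state is reached; the characters collected along the way form a string the machine accepts.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Toy DFA, built by hand: state => { character => next state }.
    # It accepts "hi" followed by zero or more "!".
    my %dfa = (
        S0 => { h => 'S1' },
        S1 => { i => 'S2' },
        S2 => { '!' => 'S2' },
    );
    my %accepting = ( S2 => 1 );

    # Breadth-first search from the start state: the first accepting
    # state we dequeue yields the shortest accepted string.
    sub one_accepted_string {
        my @queue = ( [ 'S0', '' ] );    # [ state, string built so far ]
        my %seen;
        while (@queue) {
            my ( $state, $str ) = @{ shift @queue };
            return $str if $accepting{$state};
            next if $seen{$state}++;     # expand each state only once
            for my $ch ( sort keys %{ $dfa{$state} } ) {
                push @queue, [ $dfa{$state}{$ch}, $str . $ch ];
            }
        }
        return undef;                    # the automaton accepts nothing
    }

    print one_accepted_string(), "\n";   # prints "hi"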

The regex you give is anchored with ^ and $, so that makes the problem easier -- this is the standard interpretation of "classical" (not Perl) regular expressions, so the techniques from automata theory are more readily applicable. And if all the regexes you care about are as simple as the one you gave, it should be easy to roll your own solution. You'll have to parse the regex, but once you have it in a recursive data structure, the rest is straightforward. For example, you can replace any (...)? or (...)* with nothing, replace a "." with, say, the character "x", replace any ( ... | ... ) by recursing on just one of the alternatives, and so on.
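Here is a rough sketch of that recipe in Perl. It is deliberately naive: it only understands the handful of constructs in your example (the ^ and $ anchors, escaped literals, an optional or starred group, simple alternation, and . / .* / .+), and it uses blunt substitutions instead of a real parser, so treat it as an illustration rather than a general solution.

    #!/usr/bin/perl
    use strict;
    use warnings;

    sub example_string {
        my ($re) = @_;
        $re =~ s/^\^//;                            # drop the ^ anchor
        $re =~ s/\$$//;                            # drop the $ anchor
        $re =~ s/\((?:\?:)?[^()]*\)[?*]//g;        # (...)? and (...)* are optional: drop them
        $re =~ s/\(([^()|]*)(?:\|[^()]*)*\)/$1/g;  # (a|b|c): keep only the first alternative
        $re =~ s/(?<!\\)\.[*+]/xxx/g;              # unescaped .* or .+  -> some filler text
        $re =~ s/(?<!\\)\./x/g;                    # a lone unescaped .  -> 'x'
        $re =~ s/\\(.)/$1/g;                       # unescape \/ , \. and friends
        return $re;
    }

    my $re  = '^http:\/\/(?:www.)?youtube\.com\/user\/(.*)\/$';
    my $str = example_string($re);
    print "$str\n";                      # http://youtube.com/user/xxx/
    print "matches\n" if $str =~ /$re/;  # sanity check: the result really matches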

To stay with regexes: is there a way to create a regexp from different inputs when they look similar? For example, given a data set of 10 URLs, create the most compatible regexp from them? That way the regexp could be used dynamically with a set of data, without bothering the user at all to search for the best options...
This problem is a bit underspecified. For example, if you give me any sample of, say, 10 URLs, the regex /./ would match them all, but I doubt that's what you have in mind. I think it may be a challenge to be more precise about what you mean by "most compatible regex". And I think any non-trivial way of defining this problem will result in a very computationally difficult task.

Here is one way of formalizing the problem that seems natural, but is actually an exceedingly difficult computational problem: given a sample of some URLs that the regex should match, and some others that the regex should not match, find the smallest regex that is consistent with both. This is the minimum consistent regular expression problem; it is NP-hard, and even finding a good approximation is hard.
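Checking whether one candidate regex is consistent with such a sample is the easy half of the problem; the hard half is searching for a small regex that passes the check. A quick sketch of the easy half in Perl (the candidate regex and the sample URLs below are invented for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Consistent = matches every positive example and no negative example.
    sub is_consistent {
        my ( $re, $pos, $neg ) = @_;
        for my $url (@$pos) { return 0 unless $url =~ $re }
        for my $url (@$neg) { return 0 if $url =~ $re }
        return 1;
    }

    my @should_match     = map { "http://www.youtube.com/user/test$_" } 1 .. 3;
    my @should_not_match = (
        'http://example.com/',
        'http://www.youtube.com/watch?v=abc',
    );

    my $candidate = qr{^http://www\.youtube\.com/user/[^/]+$};
    print is_consistent( $candidate, \@should_match, \@should_not_match )
        ? "consistent\n"
        : "not consistent\n";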

blokhead

Re^2: Reverse regexp a regexp?
by Anonymous Monk on Feb 12, 2010 at 19:48 UTC

    Here is one way of formalizing the problem that seems natural, but is actually an exceedingly difficult computational problem: given a sample of some URLs that the regex should match, and some others that the regex should not match, find the smallest regex that is consistent with both. This is the minimum consistent regular expression problem; it is NP-hard, and even finding a good approximation is hard.

    I just finished doing something exactly like this for work. I had 679 strings in a total of 51 groups. I had to write 51 regular expressions to match members of the groups without including members of other groups. I spent about 5 minutes searching on Google for someone else's solution before I buckled down and wrote a quick script to help me find them. The guts are here:

    my %data;                        # maps each string to the group it belongs to
    while (<DATA>) {
        chomp;
        my ($v, $k) = split /,/;     # DATA lines are "group,string"
        $data{$k} = $v;
    }
    for my $inst ( sort keys %data ) {
        for my $reg ( sort keys %re ) {
            if ( $data{$inst} eq $reg ) {
                print "Should match but doesn't: $inst, $data{$inst}, $reg, $re{$reg}\n"
                    unless $inst =~ $re{$reg};
            }
            else {
                print "Is matching but shouldn't: $inst, $data{$inst}, $reg, $re{$reg}\n"
                    if $inst =~ $re{$reg};
            }
        }
    }
    __DATA__
    . . .

    %re was a hash containing the group names mapped to regexes. I would run the program in one window and make corrections in the other. From start to finish it took less than an hour.
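    In case the shape of those pieces isn't obvious from the snippet, %re looked something like this (the group names and regexes below are invented stand-ins, not the real ones from work):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Hypothetical stand-in for the real %re: group name => the regex
        # that is supposed to match every string in that group.
        my %re = (
            youtube => qr{^http://www\.youtube\.com/user/},
            itunes  => qr{^http://\w+\.apple\.itunes\.com/},
        );

        # Check one made-up string against every group's regex.
        my $inst = 'http://www.youtube.com/user/test1';
        for my $group ( sort keys %re ) {
            printf "%-8s %s\n", $group,
                ( $inst =~ $re{$group} ? 'matches' : 'no match' );
        }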

Re^2: Reverse regexp a regexp?
by freakingwildchild (Scribe) on Feb 12, 2010 at 19:52 UTC
    This is possible by converting a regex into some sort of finite automaton and doing a traversal on it.

    Perl would be my preferred choice for this, though. Such a tool would be great for creating examples of how regexes get processed.

    This problem is a bit underspecified. For example, if you give me any sample of, say, 10 URLs

    http://www.youtube.com/user/test1
    http://www.youtube.com/user/test2
    http://www.youtube.com/user/test3
    http://www.youtube.com/user/test4
    http://www.youtube.com/user/test5
    regex created: http://www.youtube.com/user/(.*)

    http://be.apple.itunes.com/blabla/blabla/id/1543285432
    http://us.apple.itunes.com/blabla/blabla/id/154328545
    http://uk.apple.itunes.com/blabla/blabla/id/5532424342
    http://be.apple.itunes.com/blabla/blabla/id/15432
    http://be.apple.itunes.com/blabla/id/4324
    regex created: http://(\w+)\.apple.itunes.com/blabla/blabla/id/(.*)

      Reread what Roboticus said. Your first example could just as easily have been http://www.youtube.com/user/test[1-5] and your second could have been http://(u[sk]|be)\.apple.itunes.com/blabla/blabla/id/(\d*)5(\d+).

      How do you make the choices? Once you can answer those questions, you'll be able to make some progress towards what you want.

        How do you make the choices? Once you can answer those questions, you'll be able to make some progress towards what you want.

        I'd say ... an algorithm of minimum requirements could do the trick here.

        for example:

        1. 10 URLs from the same domain with 95% "similarity" would be matched as a "clean URL".
        2. 100 URLs from the same domain would give more weight.
        3. The detected string has to differ by at least 4 characters before it adds weight/validity.
        4. Any others could be selected manually or kept in a cache until they offer a difference.

        Some URLs will be easier than others (especially script URLs), but most sites have a general structure in order to be SEO-friendly towards search engines, which would make most URLs a heaven to work with...
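        Even the crudest version of that idea, before any weighting, already shows where the choices come in. Here is a naive Perl sketch (the helper name is made up, and it only copes with URLs that split into the same number of '/'-separated segments): segments that are identical across all URLs stay literal, and segments that differ become a capture.

            #!/usr/bin/perl
            use strict;
            use warnings;

            # Naive "template" builder: common segments become literals,
            # differing segments become ([^/]+) captures.
            sub naive_template {
                my @urls  = @_;
                my @split = map { [ split m{/}, $_, -1 ] } @urls;
                my $n     = @{ $split[0] };
                return undef if grep { @$_ != $n } @split;    # give up on differing shapes

                my @parts;
                for my $i ( 0 .. $n - 1 ) {
                    my %seen = map { $_->[$i] => 1 } @split;
                    push @parts,
                        keys(%seen) == 1 ? quotemeta( $split[0][$i] ) : '([^/]+)';
                }
                return '^' . join( '/', @parts ) . '$';
            }

            my @urls = map { "http://www.youtube.com/user/test$_" } 1 .. 5;
            print naive_template(@urls), "\n";
            # prints: ^http\://www\.youtube\.com/user/([^/]+)$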