Here is one way of formalizing the problem that seems natural, but is actually an exceedingly difficult computational problem: given a sample of URLs that the regex should match, and another sample that it should not match, find the smallest regex that is consistent with both. Even finding a good approximation is hard; the underlying decision problem (the minimum consistent regular expression problem) is NP-complete.
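To make "consistent" concrete, here is a minimal Perl sketch of just the consistency check against the two sample sets (the is_consistent helper and the URLs are made up for illustration); the genuinely hard part is searching the space of regexes for the smallest one that passes this check.

use strict;
use warnings;

# A candidate regex is consistent if it matches every positive sample
# and none of the negative samples.
sub is_consistent {
    my ($regex, $positives, $negatives) = @_;
    return 0 if grep { $_ !~ $regex } @$positives;
    return 0 if grep { $_ =~ $regex } @$negatives;
    return 1;
}

my @should_match     = qw( http://example.com/a/1 http://example.com/a/2 );
my @should_not_match = qw( http://example.com/b/1 );
print is_consistent(qr{/a/\d+$}, \@should_match, \@should_not_match)
    ? "consistent\n" : "not consistent\n";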
I just finished doing something exactly like this for work. I had 679 strings in a total of 51 groups, and had to write 51 regular expressions that each matched the members of one group without matching members of any other group. I spent about 5 minutes searching on Google for someone else's solution before I buckled down and wrote a quick script to help me find them. The guts are here:
my %data;
while (<DATA>) {
    chomp;
    # each DATA line is "group,string"
    my ($v, $k) = split /,/;
    $data{$k} = $v;
}
for my $inst ( sort keys %data ) {
    for my $reg ( sort keys %re ) {
        if ( $data{$inst} eq $reg ) {
            # this regex's group owns the string, so it had better match
            print "Should match but doesn't: $inst, $data{$inst}, $reg, $re{$reg}\n"
                unless $inst =~ $re{$reg};
        } else {
            # the string belongs to a different group, so this regex must not match
            print "Is matching but shouldn't: $inst, $data{$inst}, $reg, $re{$reg}\n"
                if $inst =~ $re{$reg};
        }
    }
}
__DATA__
.
.
.
%re was a hash containing the group names mapped to regexes. I would run the program in one window and make corrections in the other. From start to finish, it took less than an hour.
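For illustration, %re and the DATA section looked something like this (the group names, patterns, and data lines below are made up; the real ones came from the 51 groups):

my %re = (
    group_a => qr/^foo-\d+$/,
    group_b => qr/^bar-[a-z]+$/,
    # ... one entry per group, 51 in total
);

__DATA__
group_a,foo-123
group_b,bar-xyz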