PerlMonks
Hi monks -- longtime listener, first-time caller. I just tried this one over at StackOverflow, but the one responder apparently got upset when I suggested his quite nice solution wasn't right for my situation -- and deleted his own answer and all the comments! His ball, and he's going home. Oh well.

So: I've got big files of hand-entered data in various formats that need cleaning and rearranging; the current one looks like this:
That first line is bad -- missing its fourth field, which should be [0-3]; all sorts of typo-like errors like that. Catch those, send them to the Bad file, cut up good lines into a hash for redistribution. I've got this one matched like so:
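(The actual match isn't shown above; purely as a hypothetical reconstruction of that per-field style -- the field patterns and hash keys here are invented for illustration, not taken from the post -- it might look something like this:)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical field patterns -- invented stand-ins; the real formats
# depend on the input file, which isn't shown in the post.
my $patt1  = qr/\A[A-Za-z]+\z/;
my $patt2  = qr/\A\d{4}\z/;
my $patt3  = qr/\A\d+\.\d+\z/;
my $patt19 = qr/\A[0-3]\z/;     # the variable-count trailing field

open my $bad, '>', 'Bad' or die "can't open Bad: $!";
my %record;

while ( my $line = <DATA> ) {
    chomp $line;
    my ( $f1, $f2, $f3, @rest ) = split ' ', $line;

    # Check each catch separately -- this is the messy part that has to
    # be retooled for every new input format.
    unless (   defined $f1 && $f1 =~ $patt1
            && defined $f2 && $f2 =~ $patt2
            && defined $f3 && $f3 =~ $patt3
            && @rest >= 1  && !grep { $_ !~ $patt19 } @rest )
    {
        print {$bad} "$line\n";
        next;
    }

    # Good line: cut it up into a hash for redistribution
    # (key names are invented here).
    $record{$f1} = { year => $f2, value => $f3, codes => \@rest };
}

__DATA__
alpha 2023 3.14 0 1 2 3
beta 1999 2.71
```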
That works, but I'd really like to do it in a single pattern so I can simply swap patterns for the many differently-similar files still to come. I couldn't figure out anything that would do both the careful pattern matching and the variable-length lines in one go. (It's easy to catch every field with just /(\S+)\s+/g, but then I have to check each catch separately for its proper form, which gets messy when I retool the script for the next stinking input file.)

At this point I'm mainly interested in the theoretico-mechanical question of whether what I want is *possible*. Can you do a match where the first three patts each occur once and patt19 occurs {1,n} times, every catch is validated by picky matching or the line is rejected, and -- however many patt19s there are in a given line -- everything winds up in @allFields? Everything I tried got the first three fields and either the first patt19 or the last, but I could never get them all. Thanks!

In reply to validate variable-length lines in one regex? by uhClem
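For what it's worth, one common way to get this in a single anchored match (again with invented stand-in patterns, since the real ones aren't shown): in Perl, a repeated capture group keeps only its *last* match -- which is exactly the "either the first patt19 or the last" symptom -- so the usual trick is to capture the entire repeating tail as one group and split it afterwards:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical field patterns -- unanchored here, since they're embedded
# in one larger anchored regex.
my $patt1  = qr/[A-Za-z]+/;
my $patt2  = qr/\d{4}/;
my $patt3  = qr/\d+\.\d+/;
my $patt19 = qr/[0-3]/;      # the field that repeats {1,n} times

while ( my $line = <DATA> ) {
    chomp $line;

    # One anchored match does all the validation: three fixed fields,
    # then one-or-more repeats of the picky patt19, nothing else allowed.
    if ( $line =~ /\A($patt1)\s+($patt2)\s+($patt3)((?:\s+$patt19)+)\z/ ) {
        # $4 holds the whole repeating tail; split it to recover
        # every patt19, however many there were on this line.
        my @allFields = ( $1, $2, $3, split ' ', $4 );
        print join( '|', @allFields ), "\n";
    }
    else {
        print "BAD: $line\n";
    }
}

__DATA__
alpha 2023 3.14 0 1 2 3
beta 1999 2.71 0
gamma 20 1.0 5
```

Swapping in the next file format is then just a matter of editing the qr// patterns. If you truly need each patt19 in its own capture (rather than splitting the tail), the other common route is a \G-anchored scanner with /gc, matching the three fixed fields first and then pulling patt19s off in a loop.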