We don't parse CGI .. *grin* .. at the minute it's all CSV and TDT in flat files, and yes, the modules to read them are all hand-rolled (long before my time). As I said in the original post, we have core
Perl modules and DBI installed, as well as the in-house modules people have written over the years, and nothing further than that.
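For the curious, those hand-rolled readers are roughly this shape - a sketch from memory using only core Perl, not our actual module (the sub name and file layout here are invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Minimal TDT (tab-delimited text) reader, core Perl only.
# Hypothetical sketch - the record layout is invented for illustration.
sub read_tdt {
    my ($path) = @_;
    open my $fh, '<', $path or die "Cannot open $path: $!";
    my @records;
    while ( my $line = <$fh> ) {
        chomp $line;
        next if $line =~ /^\s*$/;           # skip blank lines
        my @fields = split /\t/, $line, -1; # -1 keeps trailing empty fields
        push @records, \@fields;
    }
    close $fh;
    return \@records;
}
```

The `-1` limit on `split` matters more than it looks: without it, trailing empty columns vanish silently, which is exactly the sort of quiet data loss that causes trouble later.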
By losing data I meant badly-formatted or wrongly-tagged lines being silently kicked out, not the module itself failing to read or "damaging" data. Error reporting and handling is, I believe, one of the reasons management here decided to move away from external code - we're *very* liable if something isn't reported correctly - and forcing people to write their own code at least makes them stop and think about how it will cope if the data isn't in the *exact* format it should be (spaces in tags, blank lines in the middle of XML, things like that).
Similarly, the scripts can't fall over if they encounter data they don't know what to do with: errors should be reported, and the reports run with the data that *does* exist. We can always re-run that section of the batch the following day if need be.
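That "report and carry on" policy looks something like this in practice - again a sketch, not production code, and the record format (ID,DATE,AMOUNT) is invented:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Process what we can, report what we can't - never die mid-batch.
# Hypothetical record format: ID,DATE,AMOUNT where ID is numeric,
# DATE is YYYY-MM-DD, AMOUNT is a (possibly negative) decimal.
sub process_batch {
    my ($lines) = @_;
    my ( @good, @errors );
    my $lineno = 0;
    for my $line (@$lines) {
        $lineno++;
        my ( $id, $date, $amount ) = split /,/, $line, -1;
        unless ( defined $amount
            && $id     =~ /^\d+$/
            && $date   =~ /^\d{4}-\d{2}-\d{2}$/
            && $amount =~ /^-?\d+(?:\.\d+)?$/ )
        {
            push @errors, "line $lineno: rejected [$line]";
            next;    # report it, keep going with the rest
        }
        push @good, { id => $id, date => $date, amount => $amount };
    }
    return ( \@good, \@errors );    # caller logs errors, reports on @good
}
```

The key point is that a bad line lands in the error list instead of killing the run, so the nightly report still goes out on the good data and the rejected lines can be fixed and re-run tomorrow.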
This is a policy that's existed since long before I got here, and while I'm arguing against it, I can see why it exists. Saying it's all down to ignorance is all well and good - and I agree it doesn't make a lot of sense - but when you're fighting against years of "this is just the way we do it here", I don't know that progress can ever easily be made. People can, and do, get very set in their ways; even minor changes to policy can come across as a very big thing.
A friend is someone who can see straight through you, yet still enjoy the view. (Anon)