log1.log
--------
Time=00:00:00.001|Request=det|Category=btyjar
Time=00:00:00.002|Request=sdf|Category=345
Time=00:00:00.003|Request=fdgh|Category=cvn
Time=00:00:00.004|Request=cv|Category=ryui

log2.log
--------
Time=00:00:00.001|Request=h5|Category=56yjh
Time=00:00:00.002|Request=hjk|Category=dr6
Time=00:00:00.003|Request=qw|Category=345
Time=00:00:00.004|Request=thgj|Category=234
OK, that's example data. Then, to collect the data, I use:
for my $log (@logs) {
    if ($log =~ /\.gz$/) {
        # decompress in place; check system()'s exit status rather than
        # eval-ing backticks ($@ is never set by a failed gunzip)
        next if system('gunzip', $log) != 0;
        $log = substr $log, 0, -3;    # drop the ".gz"
    }
    open(my $fh, '<', $log) or do { warn "can't open $log: $!\n"; next };
    while (<$fh>) {
        chomp;
        for my $field (split /\|/) {
            my ($cat, $value) = split /=/, $field, 2;
            $logdata{$log}{$.}{ lc $cat } = $value;
        }
    }
    close $fh;    # also resets $. for the next log
}
Output comes in one of two forms: log by log (with a little "1 of 10" / "previous" / "next" navigation to switch between them), or merged, with all the files in one big table. Both output formats need to be sortable. Log by log is easy because I'm sorting at the line level, but the merged table is where I got stuck.
Since there are many lines, I need a way to get the sort to iterate through the lines of every log, pulling out each time and comparing them, while still knowing which log each line came from, so I can work out the overall order for the merged table.
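One possible sketch of that merged sort (my own suggestion, not tested against the real data): flatten %logdata into a single list of records that each remember their log and line number, then sort the whole list on the time field. Since the times are zero-padded HH:MM:SS.mmm strings, a plain string compare orders them correctly.

```perl
# Flatten %logdata (as built by the loop above) into one sortable list.
my @merged;
for my $log (keys %logdata) {
    for my $line (keys %{ $logdata{$log} }) {
        my $fields = $logdata{$log}{$line};
        # [ sort key, origin log, origin line, the full field hash ]
        push @merged, [ $fields->{time}, $log, $line, $fields ];
    }
}

# Zero-padded timestamps sort correctly as strings, so cmp is enough.
@merged = sort { $a->[0] cmp $b->[0] } @merged;

for my $row (@merged) {
    my ($time, $log, $line, $fields) = @$row;
    print "$log:$line  $time  $fields->{request}  $fields->{category}\n";
}
```

Sorting on another column is then just a matter of swapping which field goes into the sort key (or comparing `$a->[3]{category}` directly in the sort block).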
Cheers for taking the time to look at this.