in reply to Remove duplicate lines in a file
If the duplicate records will always be grouped together, you could do something like the following to keep track of the last record you've seen. I'm assuming that the first column is the key you care about. If you really care about the first 3 columns, you'll have to modify accordingly.

    use strict;
    use warnings;

    my $last = '';
    while (<>) {
        my @columns = split;
        next if $columns[0] eq $last;
        $last = $columns[0];
        print;
    }

If the duplicate records don't necessarily follow each other, then use a hash to determine which ones you've already seen:

    use strict;
    use warnings;

    my %seen;
    while (<>) {
        my @columns = split;
        next if exists $seen{$columns[0]};
        $seen{$columns[0]} = 1;
        print;
    }
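As an aside, the hash version collapses nicely into a one-liner for quick command-line use. This is a sketch under the same assumptions as the scripts above (whitespace-separated columns, first column as the key); `-a` autosplits each line into `@F`, and the postfix `++` means a key's first line prints before the key is marked as seen:

    $ perl -ane 'print unless $seen{$F[0]}++' input.txt

Note that, like the hash script, this keeps every key in memory for the whole run, so it trades the grouped-input assumption of the `$last` version for memory proportional to the number of distinct keys.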
Replies are listed 'Best First'.
Re^2: Remove duplicate lines in a file
by Anonymous Monk on Nov 05, 2008 at 17:10 UTC
by RhetTbull (Curate) on Nov 05, 2008 at 18:14 UTC
In Section: Seekers of Perl Wisdom