http://qs321.pair.com?node_id=721864
Category: Text Processing
Author/Contact Info graff
Description: This works like the standard unix "uniq" tool, removing lines from an input stream when they match the content of the preceding line, except that the comparison can be limited to specific columns of flat-table data. There's an option to keep just the first or just the last line of each matching set. Note that if the input is not sorted with respect to the column(s) of interest, non-adjacent copies will not be removed (just like with unix "uniq"). (Update: column delimiters can be specified with a command-line option.)
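
For illustration (hypothetical data, not from the original post), suppose a sorted, whitespace-delimited file "fruit.list" contains:

  apple   2021-01-03
  apple   2021-02-14
  banana  2021-01-09
  cherry  2021-03-01
  cherry  2021-03-02

Then "col-uniq -c 1 fruit.list" would keep only the first line for each distinct value in column 1:

  apple   2021-01-03
  banana  2021-01-09
  cherry  2021-03-01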

The code has been updated to avoid the possible "out of memory" condition cited by repellent in the initial reply below.

#!/usr/bin/perl

use strict;
use Getopt::Long;
use Pod::Usage;

my ( $delim, $colspec, $last, $man, $help ) = ( 'default', '1', 0, 0, 0 );
my $argok = GetOptions( 'l' => \$last, 'c=s' => \$colspec,
                        'd=s' => \$delim, 'm' => \$man, 'h|?' => \$help );

pod2usage(-exitstatus => 0, -verbose => 2) if $man;
pod2usage(1) unless ( $argok and $colspec =~ /^\d[\d,]*$/ and !$help and
                      ( @ARGV or !-t ));

my @cols = map { $_ -1 } split( /,/, $colspec );
if ( $delim ne 'default' ) {
    my %ctrl = ( tab => "\t", dot => '\.', vb => '\|', bs => '\\\\' );
    $delim = ( exists( $ctrl{$delim} )) ? qr{$ctrl{$delim}} : qr{$delim};
}

my @lastseen;
my $heldline = '';
while (<>) {
    # split the line into columns: "magical" whitespace split unless -d was given
    my @tkns = ( $delim eq 'default' ) ? split : split $delim;

    # count how many of the selected columns match the previous line's values
    my $match = 0;
    for my $i ( 0 .. $#cols ) {
        $match++ if ( $tkns[$cols[$i]] eq $lastseen[$i] );
    }
    if ( @cols == $match ) {
        # duplicate key: with -l, hold this (later) line in place of the earlier one
        $heldline = $_ if ( $last );
    }
    else {
        # new key: print the line held for the previous group, then hold this one
        print $heldline;
        $heldline = $_;
        $lastseen[$_] = $tkns[$cols[$_]] for ( 0..$#cols );
    }
}
print $heldline;    # don't forget the last held line

=head1 NAME

col-uniq

=head1 SYNOPSIS

 col-uniq [-l] [-d delim] [-c col#[,col#...]] [sorted.list ...]

 col-uniq -m   # to print user manual

=head1 DESCRIPTION

This tool will scan through lines of text that have been sorted with
respect to one or more selected columns, and in the event that two or
more consecutive lines have the same value(s) in the selected
column(s), only one line from the matching set will be output.  All
other lines, having non-repeating values in the selected column(s),
are also printed.

This tool will only work as intended if the input has been sorted with
respect to the column(s) of interest, so a typical usage would be:

  sort [...] some.list | col-uniq [...]

By default, the first whitespace-delimited column will be taken as the
column of interest, and for every set of two or more consecutive lines
having the same value in that column, only the first line will be
printed to STDOUT (along with all the other "unique" lines).

Use the "-c col#[,...]" option to select one or more specific columns
of interest, by index number (first column on each line is "1").  For
example, in a directory where varying numbers of files are created per
day, this command line:

  ls -lt | col-uniq -c 6,7

will show only the first file created on each date.  (The "ls" options
"-lt" produce a full-detail file list, sorted by date.)

Use the "-d delim_regex" option to select something other than
whitespace as the delimiter for splitting each line into columns.
The given string is passed to the perl "split" function as a regular
expression, so a wide assortment of column separation strategies is
possible (but bear in mind that the input must be sorted on the
basis of the designated columns, in order to locate all duplicates).
Also, some "specially defined" split expressions are provided for
convenience:

  -d tab : split on tabs only (not other whitespace)
     dot : split on period characters
      vb : split on the vertical-bar (pipe) symbol "|"
      bs : split on backslash

Note that if you set the delimiter to a pattern that never occurs in
the data, the result will be to check for (and remove) consecutive
full-line duplications. (This will include sets of blank lines.)

The "-l" option can be used to preserve only the last line from each
set of matching lines (in case that is preferable to keeping only the
first).

=head1 AUTHOR

David Graff

=cut
Re: col-uniq -- remove lines that match on selected column(s)
by repellent (Priest) on Nov 06, 2008 at 00:39 UTC
    This is nice :) Some comments:

    • Why not make the default delimiter value ' ' instead of 'default'? Also, a user's delimiter could itself be the literal string 'default'.
    • @heldlines could be coded as a scalar $heldline; otherwise, for as long as ( @cols == $match ) stays true, lines keep accumulating until memory usage blows up.
      Thanks! I almost agreed with your first suggestion, until I remembered why I used "default" as the, um, default value for the delimiter option. It seemed much less likely that someone would actually need to use the word "default" as a column delimiter than that they might want to use a single space character -- not in the "magical" sense of split ' ' but in the literal sense of split / / -- and that requires every delimiter given on the command line to be treated as a regex. This way you only get the "magical" split behavior when you don't supply the "-d regex" option, and you still have the ability to split on a single space if you want to.
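
      A quick illustration of that difference (just an example snippet, not part of the script):

        my $line = "  a   b  c";
        my @magical = split ' ', $line;  # ('a', 'b', 'c') -- leading/extra whitespace ignored
        my @literal = split / /, $line;  # ('', '', 'a', '', '', 'b', '', 'c') -- every single space is a boundary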

      As for your second point, it's true the code as originally posted could lead to an "out of memory" condition, if it got a very long stream of repeated lines. But I wanted an array that I could "pop" or "shift" off of in order to print a duplicate line only once. So to fix the possible memory consumption problem, instead of "pushing" onto the array every time there's a duplication, I just make sure the array never contains more than one element (and this happens to be the line that the user wants to see). That made the print statements a lot simpler too, which is nice.

      Update: then again, after making that change to how I was using the "heldline" array, I finally realized that it doesn't have to be an array, which is exactly what you said. So I fixed it (and I thank you) again.

          ... might want to use a single space character -- not in the "magical" sense of split ' ' but rather in the literal sense of split / / ...

        Ahh, I see. Then I'll suggest using $delim = undef and defined($delim) instead, as a means towards a more thorough solution.
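
        A minimal sketch of that idea (illustration only, not code from the thread; it assumes the rest of the script stays the same):

          #!/usr/bin/perl
          use strict;
          use Getopt::Long;

          # Sketch: leave $delim undefined unless -d is given, so the literal
          # string "default" is never special-cased and any -d value is a regex.
          my $delim;                              # undef => "magical" whitespace split
          GetOptions( 'd=s' => \$delim ) or die "usage: sketch.pl [-d delim_regex] [file ...]\n";
          $delim = qr{$delim} if defined $delim;  # compile the user's regex once

          while (<>) {
              my @tkns = defined($delim) ? split( $delim, $_ ) : split;
              print scalar(@tkns), " columns\n";  # placeholder: the real script would do its uniq logic here
          }
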
Re: col-uniq -- remove lines that match on selected column(s)
by keszler (Priest) on Sep 18, 2011 at 11:04 UTC

    This is nice - and now I'll suggest changing it <grin>

    Since this will most often be used following a sort, I suggest the options should match sort's, i.e. -t for delimiter, -k for column.

    Also, you should make this a CPAN module: App::col-uniq

      I appreciate the suggestions, but I'll respectfully decline. My '-c' and '-d' options work very differently from the '-k' and '-t' options in unix 'sort' (and I don't really want to implement the 'sort' style for these options), so I'd rather not confuse people by suggesting that they're the same thing. (Plus I have a few other utils for flat-table text data that use '-c' and/or '-d' the same way this script does.)

      BTW, I checked out the App:: namespace on CPAN, and didn't find a lot of command-line utils there. If I search for "command line utility", I see more things that this script might group with, but they don't seem to be in any specific namespace.

      Oh well, having this up at PerlMonks (with a link on my home node) is good enough for me.