Re: How to check lines that start with the same word then delete one of them

by LanX (Saint)
on Apr 10, 2020 at 10:44 UTC [id://11115311]


in reply to How to check lines that start with the same word then delete one of them

Yes, just one loop and a %seen hash.

You put $var into %seen with $seen{$var}++, and whenever it's already set you know that line needs to be checked.

I'm not sure, though, how you want the first line to be handled, and you didn't provide test data.
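
A minimal sketch of the idea (the ;-separated sample lines here are invented for illustration):

    use strict;
    use warnings;

    my %seen;
    while (my $line = <DATA>) {
        # the first word of each line is the key
        my ($key) = $line =~ /^(\w+)/;
        next unless defined $key;
        if ( $seen{$key}++ ) {
            # key already seen: this is a line to check/delete
            next;
        }
        print $line;
    }

    __DATA__
    foo;keep me
    bar;keep me too
    foo;duplicate key, dropped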

Cheers Rolf
(addicted to the Perl Programming Language :)
Wikisyntax for the Monastery

Re^2: How to check lines that start with the same word then delete one of them
by agnes00 (Novice) on Apr 10, 2020 at 11:42 UTC
    I don't see how a hash can do it. As for the check: I test whether the first word of a line (up to the first semicolon) matches the first word of another line in the same file. Example:
    S_FER_SCAM1_ARRESTO;ARRESTO;ST;0;ST;1;0;TS;0;0
    S_FER_SCAM1_ARRESTO;ARRESTO;SU LI IR ST;0;SU LI IR ST;1;0;TS;0;0
    Here S_FER_SCAM1_ARRESTO matches.
      use strict;
      use warnings;

      use Test::More tests => 1;

      my @in = (
          'S_FER_SCAM1_ARRESTO;ARRESTO;ST;0;ST;1;0;TS;0;0',
          'S_FER_SCAM1_ARRESTO;ARRESTO;SU LI IR ST;0;SU LI IR ST;1;0;TS;0;0'
      );

      my @want = (
          'S_FER_SCAM1_ARRESTO;ARRESTO;ST;0;ST;1;0;TS;0;0',
      );

      my @have;
      my %seen;
      for (@in) {
          /^(\w+)/;
          if (exists $seen{$1}) {
              next if (/SU LI IR ST/);
              # More code here if it doesn't match - this section not described.
          }
          $seen{$1} //= $_;
          push @have, $_;
      }

      is_deeply \@have, \@want, 'Arrays match';

      See also How to ask better questions using Test::More and sample data.

        Pretty much what I meant, thanks!

        A minor nitpick: I'd assign the first match to a normal variable.

        Special vars like $1 can easily get overwritten by the "more code" before %seen is set.
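
        A sketch of what I mean (the sample data is invented):

            use strict;
            use warnings;

            my %seen;
            for my $line ('foo;a', 'bar;b', 'foo;c') {
                # copy the capture into a lexical right away; any later
                # successful match with captures would overwrite $1
                my ($key) = $line =~ /^(\w+)/;
                next unless defined $key;
                $seen{$key} //= $line;
            }
            print "$_ => $seen{$_}\n" for sort keys %seen;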

        Cheers Rolf
        (addicted to the Perl Programming Language :)
        Wikisyntax for the Monastery

        Thank you for your answer. The problem is that I have a big file (40,000 lines), which is why I didn't include it in my post. I also can't specify all the wanted lines in advance: for every line I want to check whether there is another line with the same first word, and if so, delete one of them. That's why I used two loops, but the algorithm is very slow.
        I've tested your code with this input:
        my @in = (
            'S_FER_SCAM1_ARRESTO;ARRESTO;ST;0;ST;1;0;TS;0;0',
            'S_VINREU_RLIP1_ALLARMEZONA3;ALLARME ZONA 3;SU LI IR ST;0;SU LI IR ST;1;0;TS;0;0',
            'S_VINREU_RLIP1_ANOMBAT;ANOMALIA BATTERIA;SU LI IR ST;0;SU LI IR ST;1;0;TS;0;0',
            'S_FER_VENT1_ERRCOLINV;ERRORE PROFIBUS COLL INVERTER;SU LI IR ST;0;SU LI IR ST;1;0;TS;0;0',
            'S_VINREU_RLIP1_CIRCZONE1;CIRCUITO ZONA 1 FUNZONANTE;SU LI IR ST;0;SU LI IR ST;1;0;TS;0;0',
            'S_VINREU_RLIP1_CIRCZONE2;CIRCUITO ZONA 2 FUNZONANTE;SU LI IR ST;0;SU LI IR ST;1;0;TS;0;0',
            'S_FER_SCAM1_ARRESTO;ARRESTO;SU LI IR ST;0;SU LI IR ST;1;0;TS;0;0',
            'S_FER_VENT1_ERRCOLINV;ERRORE PROFIBUS COLL INVERTER;ST;0;ST;1;0;TS;0;0'
        );
        I printed the @have array and it shows 7 lines, with S_FER_VENT1_ERRCOLINV duplicated (it should show only 6); only one duplicate is removed. In my data file some lines have the same id ($1) twice, while others are normal (no duplicate id).

      agnes00:

      You can use a hash to help you decide what to do on later lines, something like this:

      $ cat foo.pl
      use strict;
      use warnings;

      # Read the input file. Trim trailing whitespace
      # and preserve the line number.
      my $cnt = 0;
      my @inp = map { s/\s+$//; [ ++$cnt, $_ ] } <DATA>;

      print "INPUT LINES:\n";
      print join(": ", @$_), "\n" for @inp;

      # Process the file. We'll keep the first record for
      # each key we find and ignore all successive values
      # with two exceptions: First, we won't process a
      # 'foo' record until we've handled a 'baz'. Second,
      # we won't handle a 'baz' record in the first five
      # lines.
      my %seen;
      my @out;
      for my $rLine (@inp) {
          my $line_num = $rLine->[0];

          # parse out the interesting fields
          my ($key, $val) = split /\s+/, $rLine->[1];

          # ignore keys we've already processed
          next if $seen{$key};

          # don't process 'foo' until we've handled 'baz'
          next if $key eq 'foo' and ! exists $seen{baz};

          # don't process 'baz' in the first five lines
          next if $key eq 'baz' and $line_num < 5;

          # process the line and remember the key
          push @out, $rLine->[1];
          ++$seen{$key};
      }

      print "\n\nOUTPUT LINES:\n";
      print $_, "\n" for @out;

      __DATA__
      foo the
      bar quick
      baz red
      bar fox
      foo jumped
      biz over
      bar the
      bim lazy
      baz red
      foo dog

      As you process your file, you record the important decisions you've made in the hash to help guide future decisions.

      In the example I cobbled together, I used three rules:

      1. Only process a 'foo' record if we've already processed a 'baz' record.
      2. Ignore 'baz' records occurring in the first five lines of the file.
      3. Otherwise, keep the first record of each type we find.

      Using these rules, when we run the program we get:

      $ perl foo.pl
      INPUT LINES:
      1: foo the
      2: bar quick
      3: baz red
      4: bar fox
      5: foo jumped
      6: biz over
      7: bar the
      8: bim lazy
      9: baz red
      10: foo dog

      OUTPUT LINES:
      bar quick
      biz over
      bim lazy
      baz red
      foo dog

      As you can see, we're able to handle all the rules with a single pass over the file with the help of a little bookkeeping.

      As you've guessed in your original post, the nested loop can consume quite a bit of time for a large file. So it's worthwhile to think of ways you can do your processing without having to repeatedly scan the file.

      What if you wanted to keep the *last* line starting with each key? One way would be to leave the logic the same, but to process the records in reverse order. Another way would be to change the way you handle the "seen" hash: Instead of checking whether you've processed the key or not, you could store the data you want to keep in it. That way, you can simply overwrite each record with a later record if you want, and then output them at the end. If you're keeping your data in memory, you can even come up with a method to process the data in *one* order and output the data in a *different* order to make your task simpler.
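
      A sketch of that overwrite variant, keeping the last line per key (the sample data is invented):

          use strict;
          use warnings;

          my %last;     # key => last line seen with that key
          my @order;    # keys in order of first appearance

          while (my $line = <DATA>) {
              chomp $line;
              my ($key) = split /;/, $line;
              push @order, $key unless exists $last{$key};
              $last{$key} = $line;    # a later line simply overwrites the earlier one
          }

          print "$last{$_}\n" for @order;

          __DATA__
          foo;first
          bar;second
          foo;third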

      It's often a mistake to jump straight in and solve the problem before thinking about how to simplify things. Sometimes you'll find that a problem could easily be solved if the data came in a more convenient form or order. In those cases, it may be profitable to simply reshape or reorder the data to suit, and then solve the simpler problem.
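
      For instance, sorting the lines on their first field makes every duplicate adjacent to the line it duplicates, so a single pass comparing neighbours suffices (a sketch with invented data):

          use strict;
          use warnings;

          chomp( my @lines = <DATA> );
          @lines = sort @lines;    # identical keys become adjacent

          my $prev_key = '';
          for my $line (@lines) {
              my ($key) = split /;/, $line;
              print "$line\n" unless $key eq $prev_key;    # keep first of each run
              $prev_key = $key;
          }

          __DATA__
          foo;b
          bar;a
          foo;a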

      ...roboticus

      When your only tool is a hammer, all problems look like your thumb.

        Thanks for your reply. If I tried to solve my problem by making rules, I'd say: for the first line, don't process the data until another line with the same first word is found. But if none is found, I can't treat the other lines because the loop has already finished, you see? My problem is to find, for each line, another line that has the same first word, and I need to do this for every line. I don't see how to do this in one loop.
