in reply to Re^2: processing file content as string vs array
in thread processing file content as string vs array

Good points.

but what if the final line is supposed to be processed by some other piece of code? You can't just ungetc a readline...

You are correct that there is no "unget" or "un-read" for a line that has already been read. There are various ways of handling that sort of situation. In the case where the process() sub needs to deal with the first line, I pass that first line as a parameter to process(). Usually these sorts of things are record-oriented: something has to be done with a complete record, and process()'s job is to assemble one. If you want the code that "does something with the record" to be in the main driver, then just have process() return a structure, or modify a struct ref that is passed in. You can't use Perl's single-statement postfix "if" in that situation, but I don't see any issue here at all.
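Something like this rough sketch of that pattern (the BEGIN/END record format, the field layout, and the sub names are invented here for illustration; they are not from the code below):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: the main driver spots the line that opens a record
# and hands it, plus the filehandle, to process(); process() assembles the
# full record and returns it as a structure for the caller to use.
sub process {
    my ($fh, $first) = @_;
    chomp $first;
    my @rec;
    push @rec, $1 if $first =~ /^BEGIN\s+(\S+)/;  # data on the BEGIN line itself
    while (my $line = <$fh>) {
        last if $line =~ /^END/;                  # closing line ends the record
        chomp $line;
        push @rec, $line;
    }
    return \@rec;                                 # caller decides what to do with it
}

my $text = "BEGIN x\na\nb\nEND\nBEGIN y\nc\nEND\n";
open my $fh, '<', \$text or die $!;               # in-memory handle for the demo

my @records;
while (my $line = <$fh>) {
    push @records, process($fh, $line) if $line =~ /^BEGIN/;
}
print scalar(@records), " records\n";
```

The point is only the shape of the control flow: the already-read line goes in as a parameter, and the assembled record comes back out.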

Also, note that your process_record is making use of a global variable, DATA, and three of your four examples will throw an undef warning if the end-of-file is reached before the closing line is seen.

As far as the global DATA goes, I have no issue with that for a short (<1 page) piece of code. In a larger program I would pass a lexical file handle to the sub. Note: you can make a lexical file handle out of DATA like this: my $fh = \*DATA; print while <$fh>; Then pass $fh to the sub.

In almost all of the situations I deal with, throwing an error on malformed file input is the correct behaviour. This is usually a good thing: the input file needs to be fixed. It is rare for me to throw away or silently ignore a malformed record. Of course "seldom" does not mean "never". It could certainly be argued that the program that doesn't throw an undef warning is the one in error! In any case, the programs I demoed can be modified to have either behaviour.
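As a minimal sketch of the "die on malformed input" behaviour (the START/END format is invented, and an in-memory filehandle stands in for a real file):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: treat EOF before the closing marker as a fatal, fixable input
# error rather than silently dropping the partial record.
my $text = "START\na\nb\n";          # note: no END line -- malformed on purpose
open my $fh, '<', \$text or die $!;

my $err = '';
eval {
    while (my $line = <$fh>) {
        next unless $line =~ /^START/;
        while (1) {
            my $body = <$fh>;
            die "EOF before END -- malformed record\n" unless defined $body;
            last if $body =~ /^END/;
        }
    }
    1;
} or $err = $@;
print $err;
```

Swapping the die for a warn (or nothing at all) gives the "silently ignore" variant; the structure of the reader doesn't change.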

I think a state machine type approach would be better, because it is more flexible and can handle the above cases specially, if needed.

I guess we disagree; I don't see any case for "more flexible". However, having said that, I have no real quibble with a state-variable approach. Using a sub to keep track of the "inside record" state is very clean. I also think the Perl flip-flop operator is very cool; no problem with that either! When I use it, I have to go back to GrandFather's classic post and look up the various start/end regex situations.
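For reference, a small self-contained example of the flip-flop operator tracking the "inside record" state (the START/END markers are invented; in real code the two regexes would come from the actual format):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The scalar-context '..' is true from the line matching the left regex
# through the line matching the right regex, inclusive.
my $text = "junk\nSTART\na\nb\nEND\njunk\nSTART\nc\nEND\n";
open my $fh, '<', \$text or die $!;   # in-memory handle for the demo

my @out;
while (my $line = <$fh>) {
    chomp $line;
    if ($line =~ /^START/ .. $line =~ /^END/) {
        next if $line =~ /^(START|END)/;   # skip the markers themselves
        push @out, $line;
    }
}
print "@out\n";
```

The marker lines also test true, so they have to be skipped explicitly if only the record body is wanted.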

I often have to write "one-off" programs to convert weird file formats. I will attach such a program that I wrote a few days ago. For such a thing efficiency doesn't matter and "general purpose" doesn't matter: I will never see a file like this again. My job was to convert this file as part of a larger project. It is not "perfect", but it did its job.

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dump qw(pp);
use Data::Dumper;
$| = 1;

while (my $line = <DATA>) {
    process_record($line) if $line =~ /^<CALL/;
}

sub process_record {
    my $line = shift;
    chomp $line;
    my $data = $line;
    while ($line = <DATA>) {
        last if $line =~ /^<EOR/;
        chomp $line;
        $data .= $line;
    }
    my %hash = $data =~ /<(\w+):\d+>([\w. ]+)/g;
    print_Cabrillo_QSO(\%hash);
}

sub print_Cabrillo_QSO {
    my $Qref = shift;
    print "QSO: ";
    printf "%.0f ", $Qref->{FREQ} * 1000;    # MHz -> kHz, rounded
    print "PH ";
    my $date = $Qref->{QSO_DATE};            # 20190504 => 2019-05-04
    $date =~ s/(\d\d\d\d)(\d\d)(\d\d)/$1-$2-$3/;
    print "$date ";
    my $time = $Qref->{TIME_ON};
    $time =~ s/^(\d\d\d\d).*/$1/;
    print "$time ";
    print "W7RN 59 NVSTO ";
    printf "%15s ", $Qref->{CALL};
    print "59 ";
    $Qref->{COMMENT} =~ s/ +//g;             # assume next field is <
    print $Qref->{COMMENT};
    # my $qth = $Qref->{QTH};
    # $qth //= '';
    # print $qth;
    print "\n";
}

=Prints
QSO: 3816 PH 2019-05-05 0659 W7RN 59 NVSTO           W6LVW 59 CO
QSO: 3816 PH 2019-05-05 0657 W7RN 59 NVSTO           K7CAR 59 UTWSH
=cut

__DATA__
This ADIF file was created by MacLoggerDX
<PROGRAMID:11>MacLoggerDX<PROGRAMVERSION:4>6.22<ADIF_VER:5>3.0.7
<EOH>
<CALL:5>W6LVW<NAME:18>Michael J Sparling<QTH:8>MONUMENT<STATE:2>CO<CNTY:7>El Paso<QSO_DATE:8>20190505<TIME_ON:6>065952<QSO_DATE_OFF:8>20190505<TIME_OFF:6>070013
<FREQ_RX:5>3.816<FREQ:5>3.816<BAND:3>80M<BAND_RX:3>80M<MODE:3>SSB<SUBMODE:3>LSB
<TX_PWR:3>100<ANT_AZ:4>86.8<RST_SENT:2>59<RST_RCVD:2>59
<DXCC:3>291<COUNTRY:13>United States<GRIDSQUARE:6>DM79nb<LAT:11>N039 04.562<LON:11>W104 53.096
<MY_GRIDSQUARE:6>DM09ei<OPERATOR:4>K5XI<MY_RIG:11>Elecraft K3<COMMENT:2>CO<EMAIL:19>
<EOR>
<CALL:5>K7CAR<NAME:13>Kent B O Sell<QTH:9>Hillsboro<STATE:2>OR<CNTY:10>Washington<QSO_DATE:8>20190505<TIME_ON:6>065758<QSO_DATE_OFF:8>20190505<TIME_OFF:6>065814
<FREQ_RX:5>3.816<FREQ:5>3.816<BAND:3>80M<BAND_RX:3>80M<MODE:3>SSB<SUBMODE:3>LSB
<TX_PWR:3>100<ANT_AZ:3>124<RST_SENT:2>59<RST_RCVD:2>59<QSL_VIA:10>eQSL, LoTW
<DXCC:3>291<COUNTRY:13>United States<GRIDSQUARE:6>DM44ik<LAT:11>N034 25.359<LON:11>W111 19.869
<MY_GRIDSQUARE:6>DM09ei<OPERATOR:4>K5XI<MY_RIG:11>Elecraft K3<COMMENT:6>UT WSH<EMAIL:17>
<EOR>

Re^4: processing file content as string vs array
by haukex (Bishop) on May 18, 2019 at 18:58 UTC
    I have no issue with that for a short (<1 page) piece of code.

    For a short script, I don't see the advantage of a sub over just inlining the code. But since TMTOWTDI, it's fine.

    I don't see any issue here at all. ... I don't see any case for "more flexible".

    Just to be clear, I was talking about the general case, and especially for a longer script, where I disagree with this pattern. Personally, I think it's best to just read from the file in one place in the code, because as I said, I think it's more flexible across different input file formats. In a long script it would also become difficult to keep track of all the places that read the file, and what state they expect the filehandle to be in, and what state they leave it in.

    You said "You are correct in that there is no 'unget' or 'un-read' for a line that has already been read." - that's what I was referring to. I still think a state machine approach is better, but if you disagree, perhaps you could show how you'd use the pattern you showed (a <DATA> in the main loop and a <DATA> in a sub) to read a file like the below __DATA__ section.

    #!/usr/bin/env perl
    use warnings;
    use strict;

    my @output;
    use constant { STATE_IDLE=>0, STATE_IN_SECTION=>1 };
    my $state = STATE_IDLE;
    my @buf;
    my $end_section = sub {
        if ( $state == STATE_IN_SECTION )
            { push @output, [@buf]; @buf = () }
        $state = STATE_IDLE;
    };
    while (<DATA>) {
        chomp;
        if ( my ($x,$y) = /^ (?: (.+) \s+ )? START (?: \s+ (.+) )? $/x ) {
            if ( defined $x ) {
                die "unexpected: $_\n" unless $state == STATE_IN_SECTION;
                push @buf, $x;
            }
            $end_section->();
            $state = STATE_IN_SECTION;
            push @buf, $y if defined $y;
        }
        elsif ( my ($z) = /^ (?: (.+) \s+ )? END $/x ) {
            die "unexpected: $_\n" unless $state == STATE_IN_SECTION;
            push @buf, $z if defined $z;
            $end_section->();
        }
        else {
            if ( $state == STATE_IN_SECTION ) { push @buf, $_ }
            else {} # ignore outside of section
        }
    }
    $end_section->();

    use Test::More tests=>1;
    is_deeply \@output, [["a", "b"], ["c" .. "g"], ["h", "i"], ["j", "k"]]
        or diag explain \@output;

    __DATA__
    START
    a
    b START c
    d
    e
    f
    g
    END
    ignoreme
    START
    h
    i START j
    k
      I like your code and have no problem with it!

      There are a number of techniques to deal with this kind of parsing. I know how to implement several of them and I'm ok with them all.

      Your example data format is unusual because it has more than one significant complicating factor.

      Just for fun, here is an alternate coding that demos some other techniques. I make no claim about "better"; there is seldom a coding pattern that works best in all situations. I used your regexes, as they looked fine to me. At the end of the day, all of the "states" have to be described and handled.

      #!/usr/bin/perl
      use strict;
      use warnings;
      use Data::Dumper;
      $| = 1;

      # Don't read in another line if we are still working
      # on a START line. This is caused by the "X START Y"
      # syntax in conjunction with the idea of END absent
      # a START in this example file format.
      # As a thought, redefining the input separator to
      # be 'START' could possibly be productive if the
      # format is not exactly like this?
      # This format has some of the nastiest things to deal
      # with. They normally do not occur all at once!

      my @record  = ();
      my $line_in = '';
      while ( $line_in =~ /START/ or $line_in = <DATA> ) {
          $line_in = construct_record($line_in) if $line_in =~ /START/;
      }

      sub construct_record {
          my $line = shift;
          if ( (my $x) = $line =~ /START\s+(\w+)\s*$/ ) {
              push @record, $x;
          }
          while ( defined($line = <DATA>) and $line !~ /(START|END)/ ) {
              $line =~ s/^\s*|\s*$//g;
              push @record, $line;
          }
          $line //= '';    # could be an EOF
          if ( my ($b4end) = $line =~ /^ (?: (.+) \s+ )? END $/x ) {
              push @record, $b4end if $b4end;
              output_record();
              return '';   # no continuation of this record
          }
          if ( my ($x,$y) = $line =~ /^ (?: (.+) \s+ )? START (?: \s+ (.+) )? $/x ) {
              if ($x) { push @record, $x; output_record(); }
              if ($y) {
                  output_record();    # might be: "^START 77"?
                  return "START $y";
              }
          }
          return '';
      }

      sub output_record    # or process it somehow...
      {
          print "Record: @record\n" if @record > 1;
          @record = ();
      }

      =Prints
      Record: a b
      Record: c d e f g
      Record: h i
      Record: j k
      =cut

      __DATA__
      START
      a
      b START c
      d
      e
      f
      g
      END
      ignoreme
      START
      h
      i START j
      k
      END

        Of course TMTOWTDI. I just don't see the advantage of this code over inlining the sub construct_record code directly in the while loop. Plus, you've increased the number of global variables you're using. There are a couple of other things I could nitpick, like the fact that you've got five different regexes all checking for the string START.

        This format has some of the nastiest things to deal with. They normally do not occur all at once!

        I disagree - I don't find this format nasty and there are plenty of data formats this complicated. Which was exactly my point - a state machine type approach can handle them all. Anyway, as I said, as long as it works you're free to write code like this - I personally still disagree with it ;-)