Re^5: Module for parsing tables from plain text document

by cavac (Parson)
on Jan 13, 2023 at 14:03 UTC [id://11149564]


in reply to Re^4: Module for parsing tables from plain text document
in thread Module for parsing tables from plain text document

Let me throw in some assumptions here (having to deal with quite a few other text-based formats at work):

  • You can skip the headers, because they are standardized across files
  • Everything before the first number (or minus sign) is the location name
  • Data columns always contain a value
  • The only column that can contain spaces is the location name

That means we can just collapse spaces. We have to handle the location name specially, but after that we can use split to recover the columns:

#!/usr/bin/env perl
use strict;
use warnings;

use Data::Dumper;
use Carp;

my @sites;

open(my $ifh, '<', 'eclipse.txt') or croak($!);

# Skip header
for(1..5) {
    my $tmp = <$ifh>;
}

while((my $line = <$ifh>)) {
    chomp $line;
    next if($line eq ''); # Ignore empty lines

    my %entry;
    $line =~ s/\ +/ /g; # Collapse spaces

    if($line =~ /^(.*?)\s[-\d]/) {
        $entry{location} = $1;

        # Remove location name
        $line =~ s/^.*?\s([-\d])/$1/;

        # Split along spaces
        my @parts = split /\ /, $line;
        foreach my $name (qw[long1 long2 lat1 lat2 elevation h m s PA Alt]) {
            $entry{$name} = shift @parts;
        }
        push @sites, \%entry;
    }
}
close $ifh;

print Dumper(\@sites);

That results in an array of hashes:

$VAR1 = [
          {
            's' => '59',
            'elevation' => '0',
            'long2' => '45.',
            'lat2' => '55.',
            'lat1' => '-36',
            'location' => 'Auckland',
            'm' => '33',
            'h' => '4',
            'long1' => '174',
            'PA' => '313',
            'Alt' => '13'
          },
          {
            'h' => '4',
            'm' => '40',
            'PA' => '326',
            'Alt' => '11',
            'long1' => '173',
            'lat2' => '35.',
            'long2' => '55.',
            's' => '34',
            'elevation' => '30',
            'location' => 'Blenheim',
            'lat1' => '-41'
          },
          {
            'h' => '4',
            'm' => '42',
            'PA' => '327',
            'Alt' => '9',
            'long1' => '175',
            'lat2' => '35.',
            'long2' => '25.',
            's' => '28',
            'elevation' => '0',
            'location' => 'Cape Palliser',
            'lat1' => '-41'
          },
          ...
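
Once you have that, it is just an ordinary array of hashrefs. A minimal usage sketch, assuming it is appended after the print Dumper(\@sites) line and reusing the field names from the script above:

# Walk the array of hashes built above and print one line per site.
foreach my $site (@sites) {
    printf("%-20s  %s:%s:%s  PA %s  Alt %s\n",
        $site->{location}, $site->{h}, $site->{m}, $site->{s},
        $site->{PA}, $site->{Alt});
}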

PerlMonks XP is useless? Not anymore: XPD - Do more with your PerlMonks XP

Replies are listed 'Best First'.
Re^6: Module for parsing tables from plain text document
by LanX (Saint) on Jan 13, 2023 at 14:15 UTC

      I stand corrected.

      As for those advanced heuristics, my first instinct would be to look into the "Open/Import" functionality of all those open source Spreadsheet tools like LibreOffice. Those developers have spent the last few decades writing software that can make sense of user-provided, badly formatted data files.

      As far as I'm concerned, those self-"learning" AI/heuristics/statistics tools might be somewhat interesting for occasional hobby use. But I wouldn't consider them for production use. If something goes wrong (e.g. "a bug happens"), it's easy enough to debug (and verify/certify) a handcrafted parser. If an AI goes wrong, all you can do is tweak the training data, retrain the model and pray to a $DEITY of your choice that

      1. this has fixed the current problem
      2. the change in your training data hasn't introduced new problems

      Advanced statistics (including what we commonly refer to as AI) is an amazing tool by itself. But when it goes wrong, you basically have to find an error (or omission) in what boils down to a formula with possibly tens of millions of variables. I mean, winning a Nobel prize is nothing to sneer at, but I'm not sure how one would do it on a typical IT department budget ;-)

      PerlMonks XP is useless? Not anymore: XPD - Do more with your PerlMonks XP
        > my first instinct would be to look into the "Open/Import" functionality of all those open source Spreadsheet tools like LibreOffice.

        I already mentioned Excel's import wizard, but it's mainly meant for CSV, not free-form tables.

        > If an AI goes wrong, all you can do is tweak the training data, retrain the model and pray to a $DEITY of your choice that

        As I said, that's also not the way I would go.

        The AI should

        • show a Tk window with a preview of the interpretation(s) and options to improve it
        • create "wizarded" Perl code

        This code should do validation too, and hence be more fault tolerant than most hand-crafted code (like detecting unusual data, e.g. text in a field which always used to be numerical); a rough sketch of such a check follows below.

        Ideally the generated Perl code could also include the data needed to restart the Tk-Wizard and refine it in case of errors.

        I think that would easily address all your concerns.
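
        To make the validation point a bit more concrete, here is a rough, hypothetical sketch of what such wizard-generated, validating Perl code could look like for the eclipse table; the column list, the numeric check and the helper names are assumptions on my part, not the output of any real tool:

        #!/usr/bin/env perl
        use strict;
        use warnings;
        use Scalar::Util qw(looks_like_number);

        # Hypothetical wizard output: column names and the types the wizard
        # might have inferred from a preview of the data file.
        my @columns = (
            [ 'location'  => 'text'    ],
            [ 'long1'     => 'numeric' ],
            [ 'long2'     => 'numeric' ],
            [ 'lat1'      => 'numeric' ],
            [ 'lat2'      => 'numeric' ],
            [ 'elevation' => 'numeric' ],
            [ 'h'         => 'numeric' ],
            [ 'm'         => 'numeric' ],
            [ 's'         => 'numeric' ],
            [ 'PA'        => 'numeric' ],
            [ 'Alt'       => 'numeric' ],
        );

        # Check one already-parsed row (hashref) and report unusual data,
        # e.g. text turning up in a field that is expected to be numerical.
        sub validate_row {
            my ($row, $lineno) = @_;
            my @problems;
            foreach my $col (@columns) {
                my ($name, $type) = @$col;
                my $value = $row->{$name};
                if(!defined($value) || $value eq '') {
                    push @problems, "line $lineno: column '$name' is empty";
                } elsif($type eq 'numeric' && !looks_like_number($value)) {
                    push @problems, "line $lineno: column '$name' is not numeric ('$value')";
                }
            }
            return @problems;
        }

        # Tiny demo: one plausible row and one deliberately broken one
        my @rows = (
            { 'location' => 'Auckland', 'long1' => '174', 'long2' => '45.',
              'lat1' => '-36', 'lat2' => '55.', 'elevation' => '0',
              'h' => '4', 'm' => '33', 's' => '59', 'PA' => '313', 'Alt' => '13' },
            { 'location' => 'Somewhere', 'long1' => 'n/a', 'long2' => '45.',
              'lat1' => '-36', 'lat2' => '55.', 'elevation' => '0',
              'h' => '4', 'm' => '33', 's' => '59', 'PA' => '313', 'Alt' => '13' },
        );

        my $lineno = 1;
        foreach my $row (@rows) {
            warn "$_\n" for validate_row($row, $lineno);
            $lineno++;
        }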

        PS: Of course this could also be realized as a web service, skipping the Tk part.

        Cheers Rolf
        (addicted to the 𐍀𐌴𐍂𐌻 Programming Language :)
        Wikisyntax for the Monastery

      The point of the OP is that he wanted an "AI/heuristic/statistical tool" to make those assumptions for him.

      Maybe someone should ask Github Copilot.

        > Maybe someone should ask Github Copilot.

        Please go ahead!

        Cheers Rolf
        (addicted to the 𐍀𐌴𐍂𐌻 Programming Language :)
        Wikisyntax for the Monastery
