PerlMonks

Comparing a list to a tab delimited text file

by Azaghal (Novice)
on Jan 11, 2018 at 17:18 UTC

Azaghal has asked for the wisdom of the Perl Monks concerning the following question:

Hi Monks,
I want to check if a word from a very large list of words exists in the first column of a (very large again) tab-delimited text file, and if so, to use some of the other columns from the latter to perform some actions on the former.
I got it to work, and then optimized my code as best I could to make it quicker. Still, I'm pretty sure it could be done more efficiently in a way I can't think of, as there is a bottleneck when looping through the tab-delimited text file.

Here is the function I use to go through it and check if the word exists in it:
sub lexique {
    foreach (@lexiques) {
        my @lexiks = split /\t/, $_;
        chomp @lexiks;
        my $motlexique = $lexiks[0];
        my $genre      = $lexiks[4];
        my $nombre     = $lexiks[5];
        if ($motlexique eq $thewordimsearching) {
            # Do some things here
            last;
        }
    }
}
Could you guide me a bit here? Please keep in mind that I'm self-taught and not so experienced with Perl.

Replies are listed 'Best First'.
Re: Comparing a list to a tab delimited text file
by kcott (Archbishop) on Jan 12, 2018 at 00:11 UTC

    G'day Azaghal,

    Welcome to the Monastery.

    In this sort of situation, it's best to advise us of both the size of the data and the size of available memory. You'll get very different answers for a 100MB file and 8GB of memory vs. an 8GB file and 100MB of memory.

    It looks like you've slurped your entire "tab delimited text file" (TSV) into what appears to be a global array, @lexiques. You are then processing every TSV record every time you do a search (for another global variable, $thewordimsearching). Using global variables is fraught with problems and should be avoided wherever possible. Processing the entire TSV repeatedly for every search is a very poor choice.

    When working with tab- (or comma-, or pipe-, or whatever-) separated data, reach for Text::CSV in the first instance. This module is very easy to use and has been written specifically to handle this type of data. Except for maybe an academic exercise, this is not a wheel you should consider reinventing. If you also have Text::CSV_XS installed, it will run faster.

    In the example code below, I show how to capture the TSV data once and then use it as many times as necessary. If the script is to be run multiple times, you might want to consider serialising the hash data using something like the core Storable module. If the TSV data exceeds your memory capacity, you could store the processed data in a database. These are just a couple of suggestions: you haven't supplied sufficient information about the data, your environment, or the intended usage, to do anything beyond making tentative guesses as to how you should best proceed.
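
    If you went the Storable route, the sketch (untested) would be something like this, using the %tsv_data hash from the script below; the cache file name is made up:

    use Storable qw(store retrieve);

    my $cache = 'pm_1207122_data.storable';   # made-up cache file name

    # First run: after initialise() has populated %tsv_data, save it.
    store \%tsv_data, $cache;

    # Subsequent runs: read the hash back and skip the TSV parsing entirely.
    my $tsv_data_ref = retrieve $cache;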

    Here's the dummy TSV file I used for my testing:

    $ cat pm_1207122_data.tsv
    A A1 A2 A3 A4 A5 A6
    B B1 B2 B3 B4 B5 B6
    C C1 C2 C3 C4 C5 C6
    D D1 D2 D3 D4 D5 D6
    E E1 E2 E3 E4 E5 E6
    F F1 F2 F3 F4 F5 F6

    In the following example script, &initialise is run once to capture the TSV data, and &search is run as many times as you want. Note how the arguments are passed to the subroutines, including the use of references (\%tsv_data) so that only a single scalar is passed instead of a huge data structure. Also note the limited scope of @words, $tsv_file, and %tsv_data: they cannot be accessed directly outside of the anonymous block in which they are declared.

    #!/usr/bin/env perl -l

    use strict;
    use warnings;
    use autodie;

    use Text::CSV;

    {
        my @words = 'A' .. 'I';
        my $tsv_file = 'pm_1207122_data.tsv';
        my %tsv_data;

        initialise($tsv_file, \%tsv_data);
        search($words[rand @words], \%tsv_data) for 1 .. 5;
    }

    sub initialise {
        my ($file, $data) = @_;

        open my $fh, '<', $file;
        my $csv = Text::CSV::->new({sep_char => "\t"});

        while (my $row = $csv->getline($fh)) {
            $data->{$row->[0]} = [@$row[1 .. $#$row]];
        }
    }

    sub search {
        my ($find, $data) = @_;

        print "$find: ", exists $data->{$find}
            ? "@{$data->{$find}}[3,4]"
            : '<not found>';
    }

    Here are the results of a couple of sample runs:

    C: C4 C5
    G: <not found>
    E: E4 E5
    D: D4 D5
    D: D4 D5

    A: A4 A5
    I: <not found>
    F: F4 F5
    H: <not found>
    E: E4 E5

    — Ken

Re: Comparing a list to a tab delimited text file
by poj (Abbot) on Jan 11, 2018 at 18:01 UTC

    As Laurent_R said, store the words in a hash for a fast look-up:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $t0 = time;

    # build dictionary
    my $wordfile = 'words1.txt';
    my %dict  = ();
    my $count = 0;
    open my $in, '<', $wordfile or die "Could not open $wordfile : $!";
    while (<$in>) {
        chomp;
        my ($motlexique, @cols) = split /\t/, $_;
        $dict{$motlexique} = \@cols;
        ++$count;
    }
    close $in;
    print "$count lines read from $wordfile\n";

    # scan text file
    $count = 0;
    my $textfile = 'text1.txt';
    open $in, '<', $textfile or die "Could not open $textfile : $!";
    while (<$in>) {
        chomp;
        my ($searchword, @cols) = split /\t/, $_;
        if (exists $dict{$searchword}) {
            my $genre  = $dict{$searchword}[3];
            my $nombre = $dict{$searchword}[4];
            # Do some things here
            print "Matched '$searchword' to $genre $nombre\n";
        }
        ++$count;
    }
    close $in;

    my $dur = time - $t0;
    print "$count lines read from $textfile in $dur seconds\n";
    poj
      Azaghal,

      A hash is a good solution.

      An optimisation you can do is NOT to create or update the genre and nombre variables UNLESS a match was found. In the solution above, these two variables are only created when the if (exists ...) test succeeds.
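
      Untested, but with poj's %dict above the pattern looks like this:

      if (exists $dict{$searchword}) {
          # only dereference the entry once we know it matched
          my ($genre, $nombre) = @{ $dict{$searchword} }[3, 4];
          # Do some things here
      }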

Re: Comparing a list to a tab delimited text file
by Laurent_R (Canon) on Jan 11, 2018 at 17:54 UTC
    It depends on exactly how large a "very large file" is.

    The typical way to do this type of work is to read your tab-delimited text file, store the words you're looking for in a hash, and then look each word up in the hash. The reason this is the right way to do it is that hash lookup is extremely fast (and does not depend on the size of the hash). Of course, this works only if the hash does not grow too large to fit in memory. So please provide more information about the size of your files and what you need to store in addition to the key.

    It would be good if you could provide a small extract of both files, showing cases matching your searches.
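
    To illustrate the speed difference, here is a small sketch (untested, with a made-up word list) using the core Benchmark module to compare a linear scan against a hash lookup:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my @list   = map { "word$_" } 1 .. 100_000;   # linear-scan data
    my %hash   = map { $_ => 1 } @list;           # same data as a hash
    my $target = 'word99999';                     # near the end of the list

    # Run each for at least one CPU second and compare rates
    cmpthese(-1, {
        scan => sub { for my $w (@list) { last if $w eq $target } },
        hash => sub { my $found = exists $hash{$target} },
    });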

      I apologise for the delay, and hope you're still willing to help me.

      I understand you need more details about my script and resources; here they are:

      - 8GB of RAM (that is a problem for me now: I tend to write quite memory-greedy code, and I'm getting out-of-memory errors)

      - a 70KB XML file, almost 2M lines long, each line corresponding to one word/node, like this:

      <DocumentSet>
        <document>
          <w lemma="appeler" type="VER:pres">appelle</w>
          <w lemma="quand" type="KON">quand</w>
          <w lemma="gronder" type="VER:infi">gronder</w>
        </document>
      </DocumentSet>
      - a 10KB tab-separated text file, 150k lines long, looking like this:
      tunisiennes tynizjEn tunisien ADJ f p 0,3 3,51 0 0,2 0,2 undef
      remplît R@pli remplir VER undef undef 61,21 81,42 0 0,2 0,2 "sub:imp:3s;"
      remuons R°my§ remuer VER undef undef 24,42 62,84 0,2 0 0,2 "imp:pre:1p;ind:pre:1p;"
      remuât R°m8a remuer VER undef undef 24,42 62,84 0 0,2 0,2 "sub:imp:3s;"
      renaudant R°nod@ renauder VER undef undef 0 2,64 0 0,2 0,2 "par:pre;"
      ébouriffées ebuRife ébouriffé ADJ f p 0,22 3,45 0 0,2 0,2 undef
      rendissent R@dis rendre VER undef undef 508,81 468,11 0 0,2 0,2 "sub:imp:3p;"
      I'm using the XML::Twig module to go through the XML tree and modify nodes. I use a foreach loop over each <w> node and then check whether its content matches a word from the first column of the tab-delimited file. If so, I want to add some attributes from the other columns to the XML node, for a result like this (a sketch of my pass follows the sample):
      <w conjugaison="imp:pre:2s;ind:pre:1s;ind:pre:3s;sub:pre:1s;sub:pre:3s;" genre="" lemma="appeler" nombre="" type="VER:pres">appelle</w>
      <w genre="" lemma="quand" nombre="" type="KON">quand</w>
      <w conjugaison="inf;" genre="" lemma="gronder" nombre="" type="VER:infi">gronder</w>
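
      The skeleton of that pass looks roughly like this (simplified and untested; %dict stands for the lookup table I build from the tab file, and the column indices are placeholders):

      use strict;
      use warnings;
      use XML::Twig;

      my %dict;   # word => array ref of the remaining tab-file columns

      my $twig = XML::Twig->new(
          pretty_print  => 'indented',
          twig_handlers => {
              w => sub {
                  my ($t, $w) = @_;
                  my $entry = $dict{$w->text};
                  if ($entry) {
                      # placeholder indices: adjust to the real column layout
                      $w->set_att(genre       => $entry->[3]  // '');
                      $w->set_att(nombre      => $entry->[4]  // '');
                      $w->set_att(conjugaison => $entry->[10] // '');
                  }
              },
          },
      );
      $twig->parsefile('corpus.xml');   # made-up file name
      $twig->print;

      Since the XML file is almost 2M lines long, I also wonder whether I should call $t->flush from inside the handler so that already-processed parts of the tree are printed and freed as I go.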
      Ask me for more info if needed.
        OK, if I understand correctly, what you need to do is store the content of your tab-separated file in a hash where the key is the first column of each line and the value is the rest of the line (or the full line, or an array of the fields, whatever best suits your needs). Your tab-separated file is relatively small and should fit into memory.

        After having done that, you read your XML file and, for each node of interest, you check the hash to see if the word exists. If it does, you retrieve the information from the corresponding value and update your XML content.

        Pseudo-code for building the hash:

        my %hash;
        open my $TAB, "<", $tab_file or die "Cannot open $tab_file: $!";
        while (my $line = <$TAB>) {
            my $key = (split /\t/, $line)[0];
            $hash{$key} = $line;
        }
        close $TAB;
        Then, when reading the XML file, you look up the word of interest in the hash:
        if (exists $hash{$xml_word}) {
            my $line = $hash{$xml_word};
            # now do what you want with $line
        }
        You could also preprocess the $line when reading the tab-separated file and store in the hash not the full line, but only the part of the data that you need, in a format that makes it easier to use (perhaps as an array ref, for example). Whether this is a good idea depends on your data (are you likely to use almost all entries of the TSV file, or only a few of them? Are you likely to use the TSV entries several times? And so on).
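
        For instance (untested; the field indices here are just placeholders), storing only a few fields as an array ref:

        my ($key, @fields) = split /\t/, $line;
        chomp @fields;
        $hash{$key} = [ @fields[3, 4, 10] ];   # e.g. genre, nombre, conjugation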
