Using indexing for faster lookup in large file

by anli_ (Novice)
on Feb 27, 2015 at 17:55 UTC

anli_ has asked for the wisdom of the Perl Monks concerning the following question:

Hi everybody.

I'm trying to get a grip on indexing and how it works. My current problem is that I would like to make a simple text lookup in a very large file.
My query would be something like '3005698' and the database I want to search has the following structure:

3005696;Homininae;Homo;Homo sapiens;
3005698;9606;Homininae;Homo;Homo sapiens;
3005690;90371;Enterobacteriaceae;Salmonella;Salmonella enterica
3005700;9606;Homininae;Homo;Homo sapiens;

The output I would like would be something like: Homininae,Homo,Homo sapiens

One way would be to use bash grep and do a search like:

grep "^3005698;" database.txt
Then I could parse the output to make it pretty.

Using perl, the way I would normally do it would be to generate a hash of the database and then do my lookups from that, like so:

open IN, '<', "/path/to/database.txt"; my %hash; while (<IN>) { my ($first,@array) = split(/;/, $_); @{ $hash{$first} } = @array; } close IN; print $hash{'3005698'}[1] . " "; print $hash{'3005698'}[2] . " "; print $hash{'3005698'}[3] . "\n";
The problem I have with this is that the database is around 30 GB, so it would be a very slow and memory-consuming process. So my question is: can I somehow index the database, so that I know where in the file the query '3005698' resides, to speed up this process?
Thanks

Re: Using indexing for faster lookup in large file
by BrowserUk (Patriarch) on Feb 27, 2015 at 20:15 UTC

    You may find the subthread starting at Re: Index a file with pack for fast access of interest.

    The indexing mechanism discussed there isn't directly applicable to your requirements -- it indexes by line number rather than content -- but it should be adaptable to them.

    Assuming your sample data is representative -- i.e. an average record length of 47 bytes -- then 30GB represents ~700 million records. And assuming the key numbers are representative, you'd need a 32-bit int to represent those and a 64-bit int to represent the file offset. Hence your index could be built using:

    open IN,  '<',     '/path/to/the/Datafile.txt'  or die $!;
    open OUT, '>:raw', '/path/to/the/indexfile.idx' or die $!;

    my $pos = 0;
    print( OUT pack 'VQ', m[^(\d+);], $pos ), $pos = tell( IN ) while <IN>;

    close OUT;
    close IN;

    The output file, at 12 bytes per record, would be ~7.6GB.

    As the keys in your file appear to be out of order, you would then need to (binary) sort that file.
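
    A minimal sketch of that sort step (assuming the whole index fits in memory; the output file name is just a placeholder):

    use strict;
    use warnings;

    # Slurp the fixed-width 12-byte 'VQ' records written by the indexer above.
    open my $idx, '<:raw', '/path/to/the/indexfile.idx' or die $!;
    my @records = do { local $/ = \12; <$idx> };
    close $idx;

    # Sort by the 32-bit key held in the first 4 bytes of each record.
    @records = sort { unpack( 'V', $a ) <=> unpack( 'V', $b ) } @records;

    open my $out, '>:raw', '/path/to/the/indexfile.sorted.idx' or die $!;
    print {$out} @records;
    close $out;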

    Once sorted, a binary search would take an average of 30 seeks&reads to locate the appropriate 12-byte index record and another seek&read to get the data record.

    If you have a sufficiently well-spec'd machine with (say) 8GB or more of ram, you could load the entire index into memory -- as a single big string and access as a ramfile -- which would probably reduce your lookup time by much more than an order of magnitude.
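
    A rough sketch of that lookup (assuming the index has been sorted by key as described; the key 3005698 is just the example from the OP):

    use strict;
    use warnings;

    # Load the sorted index as one big string ("ramfile").
    open my $idx, '<:raw', '/path/to/the/indexfile.sorted.idx' or die $!;
    my $ram = do { local $/; <$idx> };
    close $idx;

    open my $dat, '<', '/path/to/the/Datafile.txt' or die $!;

    my $want = 3005698;
    my ( $lo, $hi ) = ( 0, int( length($ram) / 12 ) - 1 );
    while ( $lo <= $hi ) {
        my $mid = int( ( $lo + $hi ) / 2 );
        my ( $key, $offset ) = unpack 'VQ', substr( $ram, $mid * 12, 12 );
        if    ( $key < $want ) { $lo = $mid + 1 }
        elsif ( $key > $want ) { $hi = $mid - 1 }
        else {
            seek $dat, $offset, 0;      # jump straight to the record
            print scalar <$dat>;
            last;
        }
    }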


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
    In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked
      Hi, and thanks for your reply.

      The data isn't quite representative, and that was perhaps a bit stupid of me. A more proper representation is below (3 random lines).
      The data is sorted lexicographically on the first number.
      106896752;384407;root;cellular organisms;Eukaryota;Viridiplantae;Streptophyta;Streptophytina;Embryophyta;Tracheophyta;Euphyllophyta;Spermatophyta;Magnoliophyta;Mesangiospermae;eudicotyledons;Gunneridae;Pentapetalae;rosids;fabids;Fabales;Fabaceae;Papilionoideae;Genisteae;Lupinus;Lupinus magnistipulatus;
      124405058;5888;root;cellular organisms;Eukaryota;Alveolata;Ciliophora;Intramacronucleata;Oligohymenophorea;Peniculida;Parameciidae;Paramecium;Paramecium tetraurelia;
      134053560;349161;root;cellular organisms;Bacteria;Firmicutes;Clostridia;Clostridiales;Peptococcaceae;Desulfotomaculum;Desulfotomaculum reducens;Desulfotomaculum reducens MI-1;
      In total there are about 160 million records.
        The data is sorted lexicographically on the first number.

        You mean like this?

        C:\Users\HomeAdmin>perl -E"@s = 1..30; say for sort @s"
        1 10 11 12 13 14 15 16 17 18 19 2 20 21 22 23 24 25 26 27 28 29 3 30 4 5 6 7 8 9

        Also, what are the smallest and largest keys ("first numbers") in the file?


Re: Using indexing for faster lookup in large file
by atcroft (Abbot) on Feb 27, 2015 at 18:15 UTC

    My first thought would be to put the data into an actual database (even something like DBD::SQLite), then query it (but that does not appear to be a route you are looking into).
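
    A minimal sketch of that route, assuming DBD::SQLite; the database file name and table layout here are made up for illustration:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( "dbi:SQLite:dbname=taxa.sqlite", "", "",
                            { RaiseError => 1, AutoCommit => 0 } );

    $dbh->do("CREATE TABLE IF NOT EXISTS taxa (id INTEGER PRIMARY KEY, lineage TEXT)");

    # One-time load of the big file; the primary key gives indexed lookups afterwards.
    my $ins = $dbh->prepare("INSERT OR REPLACE INTO taxa (id, lineage) VALUES (?, ?)");
    open my $in, '<', '/path/to/database.txt' or die $!;
    while (<$in>) {
        chomp;
        my ( $id, $lineage ) = split /;/, $_, 2;
        $ins->execute( $id, $lineage );
    }
    close $in;
    $dbh->commit;

    # Each lookup is then a single indexed query.
    my ($lineage) = $dbh->selectrow_array(
        "SELECT lineage FROM taxa WHERE id = ?", undef, 3005698 );
    print "$lineage\n" if defined $lineage;

    The load is slow, but it only has to happen once; after that each lookup is an index probe rather than a scan of the whole file.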

    Is the data in the file sorted on the first number in the row? If so, then one method you might consider is to take a portion of that number (for example, the first 5 digits), and use the tell() function to record the first and last line matching that portion of the number. Then, you would enter the number you are seeking, the "index" lookup would tell your script where to start/stop looking, and it would seek() to the starting location of the main file, then examine the records only to the stop location. (This might work with unsorted records as well, although the degenerate case would be that the first and last entry in the main file matched the search criteria.)
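
    A hypothetical sketch of that idea (the prefix length, file name, and example key are arbitrary); it works best when the file is sorted so each prefix forms one contiguous block:

    use strict;
    use warnings;

    my ( %start, %end );
    open my $db, '<', '/path/to/database.txt' or die $!;

    # Pass 1: record where each 5-digit prefix first appears and where it ends.
    my $pos = 0;
    while ( my $line = <$db> ) {
        if ( my ($prefix) = $line =~ /^(\d{5})/ ) {
            $start{$prefix} = $pos unless exists $start{$prefix};
            $end{$prefix}   = tell $db;        # offset just past this line
        }
        $pos = tell $db;
    }

    # Lookup: seek() to the start of the window and scan only within it.
    my $query  = '3005698';
    my $prefix = substr $query, 0, 5;
    if ( exists $start{$prefix} ) {
        seek $db, $start{$prefix}, 0;
        while ( tell($db) < $end{$prefix} and defined( my $line = <$db> ) ) {
            if ( $line =~ /^\Q$query\E;/ ) { print $line; last }
        }
    }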

    Hope that helps.

      Hi, thanks for your reply. Turning it into an actual database would be an option, and perhaps the best route to go here. I'm basically looking for something that will both solve this problem and also work as a general method for these kinds of issues, as I run into them quite frequently.

      The data is sorted lexicographically on the first number, but this could of course be changed to numerical sorting if that would help. I will look into the tell() and seek() functions, as I'm not familiar with them.

Re: Using indexing for faster lookup in large file
by Your Mother (Archbishop) on Feb 27, 2015 at 23:38 UTC
    #!/usr/bin/env perl
    use 5.014;
    use strictures;
    use Lucy;
    use Time::HiRes "gettimeofday", "tv_interval";

    my $index  = "./lucy.index";
    my $schema = Lucy::Plan::Schema->new;
    my $easyanalyzer = Lucy::Analysis::EasyAnalyzer->new( language => 'en' );
    my $text_type    = Lucy::Plan::FullTextType->new( analyzer => $easyanalyzer );
    my $string_type  = Lucy::Plan::StringType->new();

    $schema->spec_field( name => 'id',      type => $string_type );
    $schema->spec_field( name => 'content', type => $text_type );

    my $indexer = Lucy::Index::Indexer->new(
        schema   => $schema,
        index    => $index,
        create   => 1,
        truncate => 1,
    );

    while (<DATA>) {
        my ( $id1, $id2maybe, $text ) = /\A([0-9]+);(?:([0-9]+);)?(.+)/;
        for my $id ( grep defined, $id1, $id2maybe ) {
            $indexer->add_doc({ id => $id, content => $text });
        }
    }
    $indexer->commit;

    my $searcher = Lucy::Search::IndexSearcher->new( index => $index );

    print "Query (q to quit): ";
    while ( my $q = <STDIN> ) {
        chomp $q;
        exit if $q =~ /\Aq(uit)?\z/i;

        my $t0   = [gettimeofday()];
        my $hits = $searcher->hits( query => $q );
        while ( my $hit = $hits->next ) {
            printf "%12d -> %s\n", $hit->{id}, $hit->{content};
        }
        printf "\nMatched %s record%s in %1.1f milliseconds\n",
            $hits->total_hits,
            $hits->total_hits == 1 ? "" : "s",
            1_000 * tv_interval( $t0, [gettimeofday()] );
        print "\nQuery: ";
    }

    __DATA__
    Your 200 lines of test data…
    moo@cow[51]~>perl pm-1118102
    Query (q to quit): archaea
       259697659 -> root;cellular organisms;Archaea;Euryarchaeota;Thermococci;Thermococcales;Thermococcaceae;Pyrococcus;Pyrococcus abyssi;Pyrococcus abyssi GE5;
          272844 -> root;cellular organisms;Archaea;Euryarchaeota;Thermococci;Thermococcales;Thermococcaceae;Pyrococcus;Pyrococcus abyssi;Pyrococcus abyssi GE5;
       289191770 -> root;cellular organisms;Archaea;Euryarchaeota;Methanococci;Methanococcales;Methanocaldococcaceae;Methanocaldococcus;Methanocaldococcus sp. FS406-22;
          644281 -> root;cellular organisms;Archaea;Euryarchaeota;Methanococci;Methanococcales;Methanocaldococcaceae;Methanocaldococcus;Methanocaldococcus sp. FS406-22;
       490653205 -> root;cellular organisms;Archaea;Euryarchaeota;Halobacteria;Halobacteriales;Halobacteriaceae;Haloarcula;Haloarcula vallismortis;
           28442 -> root;cellular organisms;Archaea;Euryarchaeota;Halobacteria;Halobacteriales;Halobacteriaceae;Haloarcula;Haloarcula vallismortis;
       493010542 -> root;cellular organisms;Archaea;Euryarchaeota;Halobacteria;Halobacteriales;Halobacteriaceae;Natronorubrum;Natronorubrum tibetense;
           63128 -> root;cellular organisms;Archaea;Euryarchaeota;Halobacteria;Halobacteriales;Halobacteriaceae;Natronorubrum;Natronorubrum tibetense;
       500681908 -> root;cellular organisms;Archaea;Euryarchaeota;Methanococci;Methanococcales;Methanococcaceae;Methanococcus;Methanococcus aeolicus;
           42879 -> root;cellular organisms;Archaea;Euryarchaeota;Methanococci;Methanococcales;Methanococcaceae;Methanococcus;Methanococcus aeolicus;

    Matched 12 records in 0.4 milliseconds

    Query: 283552125
       283552125 -> root;Viruses;ssRNA viruses;ssRNA negative-strand viruses;Orthomyxoviridae;Influenzavirus A;Influenza A virus;H5N1 subtype;Influenza A virus (A/chicken/Nigeria/08RS848-4/2006(H5N1));

    Matched 1 record in 0.2 milliseconds

    Now… what are you getting me for my birthday? :P

    Reading: Lucy (lots of reading to do). I expect this will maintain search speeds of a few milliseconds with your full data set. It’s designed to handle millions of much larger and more complex documents. Initial indexing will take a while but you only have to do it once (the script does it every time to keep the example short/simple). Presentation/splitting of the data content is up to you.

      Thanks for a great reply. This was exactly what I was looking for.

      It is quite time consuming to do the indexing; it has currently taken > 2 hours (still running), but the lookups seem much faster. Testing it out on a smaller dataset, there is a 3x time reduction vs grep, and I'll have to see how that scales with the full dataset.

      I did some modifications to the code, adapting it to the Lucy::Simple module, so right now it looks like this:

      indexer.pl
      #!/usr/bin/perl
      use 5.014;
      use strictures;
      use Lucy::Simple;

      my $index = $ARGV[0];
      system("mkdir -p $index");

      my $lucy = Lucy::Simple->new(
          path     => $index,
          language => 'en',
      );

      open DATA, '<', $ARGV[1];
      while (my $line = <DATA>) {
          my ($id, $taxid, $text) = split(/;/, $line, 3);
          $lucy->add_doc( { id => $id, content => $text } );
      }

      query.pl:
      #!/usr/bin/perl
      use 5.014;
      use strictures;
      use Lucy::Simple;

      my $index = Lucy::Simple->new(
          path     => $ARGV[0],
          language => 'en',
      );

      my $query_string = $ARGV[1];
      my $total_hits   = $index->search( query => $query_string );
      #print "Total hits: $total_hits\n";

      while ( my $hit = $index->next ) {
          print "$hit->{id}\t";
          print "$hit->{content}";
      }

      All the perlmonks XP is your birthday reward :)

      This Lucy-code is really nice and fast, thanks.

      However, it doesn't work as easily as-is for large files: I let it run for 3 days on a 25 GB file (just the OP-provided 200 lines, repeated) (on an admittedly slowish AMD 8120, 8 GB memory). I started it last Sunday; today I had enough and broke it off.

      2015.03.01 09:35:49 aardvark@xxxx:~
      $ time ./lucy_big.pl
      ^C

      real    4264m3.903s
      user    4205m27.322s
      sys     8m5.160s
      2015.03.04 08:39:58 aardvark@xxxx:

      There is probably a way to do this with better settings...

      A postgres variant, loading the same full 25 GB file, was rock solid and searched reasonably well (~20 ms per search, IIRC; I had to delete it for disk space: size in db was 29 GB).

      Having said that, a pointer-file solution similar to one of the things BrowserUK posted would be my first choice (although I'd likely just use grep -b).

      But undoubtedly I'll be able to use your Lucy code usefully (albeit on smaller files), so thanks.

      I'd like to hear from the OP how he fared with Lucy and his large file...

        Hmmm… Indexing is expensive but it shouldn’t be so long for such a simple data set. Perhaps duplicate data in your autogeneration is harder to segment/index…? Sorry to plunk so much more down without <readmore/> but I hate clicking on them and this is likely the dying ember of the thread. Anyway, here’s Wonderwall.

        Fake Data Maker

        Took 8 minutes to generate a 30G “db” that felt like a decent facsimile for the purposes here.

        use 5.014;
        use strictures;
        use List::Util "shuffle";

        open my $words, "<", "/usr/share/dict/words" or die $!;
        chomp( my @words = <$words> );
        my $top = @words - 40;
        @words = shuffle @words;

        open my $db, ">", "/tmp/PM.db" or die $!;
        for my $id ( 999_999 .. 999_999_999 ) {
            use integer;
            my $end   = rand($top);
            my $range = rand(35) + 5;
            my $start = $end - $range;
            $start = 0 if $start < 0;
            say {$db} join ";", $id, shuffle @words[ $start .. $end ];
            last if -s $db > 32_000_000_000;
        }

        Indexer

        Took 5h:32m to index 30G of 126,871,745 records. This is a relatively powerful Mac. I suspect doing commits less frequently or only at the end would speed it up a bit but you can only search “live” what’s been committed during indexing.

        use 5.014;
        use strictures;
        use Lucy;

        my $index  = "./lucy.index";
        my $schema = Lucy::Plan::Schema->new;
        my $easyanalyzer = Lucy::Analysis::EasyAnalyzer->new( language => 'en' );
        my $text_type    = Lucy::Plan::FullTextType->new( analyzer => $easyanalyzer );
        my $string_type  = Lucy::Plan::StringType->new();

        $schema->spec_field( name => 'id',      type => $string_type );
        $schema->spec_field( name => 'content', type => $text_type );

        open my $db, "<", "/tmp/PM.db" or die $!;

        my $indexer = get_indexer();
        my $counter = 1;
        while (<$db>) {
            chomp;
            my ( $id, $text ) = split /;/, $_, 2;
            $indexer->add_doc({ id => $id, content => $text });
            unless ( $counter++ % 100_000 ) {
                print "committing a batch...\n";
                $indexer->commit;
                $indexer = get_indexer();
            }
        }
        print "optimizing and committing...\n";
        $indexer->optimize;
        $indexer->commit;

        sub get_indexer {
            Lucy::Index::Indexer->new( schema => $schema, index => $index, create => 1 );
        }

        Searcher

        Note, it can be used while indexing progresses. Only writes require a lock on the index.

        use 5.014;
        use strictures;
        use Lucy;
        use Time::HiRes "gettimeofday", "tv_interval";
        use Number::Format "format_number";

        my $index    = "./lucy.index";
        my $searcher = Lucy::Search::IndexSearcher->new( index => $index );
        my $all      = $searcher->hits( query => Lucy::Search::MatchAllQuery->new );
        print "Searching ", format_number( $all->total_hits ), " records.\n";

        print "Query (q to quit): ";
        while ( my $q = <STDIN> ) {
            chomp $q;
            exit if $q =~ /\Aq(uit)?\z/i;

            my $t0   = [gettimeofday()];
            my $hits = $searcher->hits( query => $q, num_wanted => 3 );
            printf "\nMatched %s record%s in %1.2f milliseconds\n",
                format_number( $hits->total_hits ),
                $hits->total_hits == 1 ? "" : "s",
                1_000 * tv_interval( $t0, [gettimeofday()] );
            while ( my $hit = $hits->next ) {
                printf "%12d -> %s\n", $hit->{id}, $hit->{content};
            }
            print "\nQuery: ";
        }

        Some Sample Output

        Some things that this does out of the box and can easily adapt to any preferred style: stemming, non-stemming, logical OR/AND. Compound queries are generally very cheap. Update: I do no compound queries here. That would involve multiple query objects being connected in the searcher.

        Searching 126,871,745 records.
        Query (q to quit): ohai

        Matched 0 records in 1.33 milliseconds

        Query: taco

        Matched 0 records in 0.30 milliseconds

        Query: dingo

        Matched 12,498 records in 17.69 milliseconds
            79136688 -> incandescency;scratchiness;ungnarred;dingo;desmachymatous;verderer
            78453332 -> dingo;verderer;incandescency;ungnarred;coinsurance;scratchiness;desmachymatous
            78367042 -> verderer;ungnarred;incandescency;dingo;desmachymatous;scratchiness

        Query: 78311109

        Matched 1 record in 80.07 milliseconds
            78311109 -> revealing;sulfocarbimide;Darwinize;reproclamation;intermedial;Cinclidae

        Query: perl

        Matched 12,511 records in 34.92 milliseconds
            78437383 -> unnoticeableness;radiectomy;brogger;rumorer;oreillet;befan;perle
            59450674 -> perle;Avery;autoxidizability;tidewaiter;radiectomy;filthily
            59125043 -> oreillet;perle;Avery;autoxidizability;filthily;tidewaiter;radiectomy

        Query: pollen OR bee

        Matched 61,997 records in 27.14 milliseconds
           127851379 -> sley;Phalaris;pollen;brasque;snuffle;excalate;operculigenous
            79011524 -> rave;uliginose;gibel;pollened;uncomprised;salve;topognosia
            78853424 -> topognosia;gibel;rave;uncomprised;pollened;uliginose;salve

        Query: pollen

        Matched 24,674 records in 1.58 milliseconds
           127851379 -> sley;Phalaris;pollen;brasque;snuffle;excalate;operculigenous
            79011524 -> rave;uliginose;gibel;pollened;uncomprised;salve;topognosia
            78853424 -> topognosia;gibel;rave;uncomprised;pollened;uliginose;salve

        Query: pollen AND bee

        Matched 0 records in 21.61 milliseconds
        (on an admittedly slowish AMD 8120, 8 GB memory)

        I'll trade you your 3.4/4.0 GHz 8-thread processor for my 2.4Ghz 4 core if you like :)

        If you haven't thrown away that 25GB file and can spare your processor for an hour, I'd love to see a like-for-like comparison of my code in Re: Using indexing for faster lookup in large file (PP < 0.0005s/record).


Re: Using indexing for faster lookup in large file (PP < 0.0005s/record)
by BrowserUk (Patriarch) on Feb 28, 2015 at 11:33 UTC

    A pure perl solution that results in an average lookup time of < 1/2 a millisecond per record.

    This indexes the 30GB/160e6 record file in around 45 minutes (10 lines).

    And this loads the 2GB index into memory, uses a binary search to find the index entry, seek to locate and readline to read the record. (50 lines)



      This looks very interesting, but I cannot get the code to work for me.
      It seems as if the indexing might not catch what I would like, since it won't find anything I'm looking for. Also, running the code you wrote as-is only produced "Not found" for me.

      I'm not familiar enough with this to work out what's wrong. But I'm thinking that the first snippet of code walks through the data and gets the position, in bytes, for each starting number?

        But I'm thinking that the first snippet of code walks through the data and gets the position, in bytes, for each starting number?

        Correct. (But that "snippet" is a complete working program to create the index file. You did run that first didn't you?)

        1. It gets the position (in bytes) of the start of the record (using tell),
        2. and the first number on the line using the regex (m[^(\d+);]),
        3. and then packs the two into a 12-byte binary record and writes it to the index file.
        Also running the code you wrote as-is only produced "Not found" for me.

        The first thing that comes to mind is that you never answered my question above about the sort order of your data files.

        If they are not sorted numerically, then you will need to sort the file, or the index file, before the binary search will work.

        It seems as if the indexing might not catch what I would like since it won't find anything I'm looking for.

        I'll need a little more information to go on.

        Could you run the following steps in your console and copy&paste the output (in <code></code> tags).

        Substitute whatever names you gave to the two programs above for 1118102-indexer.pl & 1118102-searcher.pl below.

        The test file need only be a few dozen lines, but the lines must start with numbers, and it must be sorted numerically. The 200 sample lines you posted above would be ideal:

        >perl -V
        >1118102-indexer smallFile.txt smallFile.idx
        >1118102-searcher -N=10 smallFile.txt smallfile.idx

        If you post the output from all 3 commands, it might give some clues as to what is going on.


Re: Using indexing for faster lookup in large file
by erix (Prior) on Feb 28, 2015 at 02:25 UTC

    I think only the first two numbers actually matter because it seems clear from your example data that the second number is always the NCBI taxonomy ID (tax_id) [1]. The whole string behind that tax_id number follows from it.

    So you'd first have to compile a list/table of unique taxonomy lines (IIRC there are fewer than 2 million in the NCBI Taxonomy database; of course I don't know how many there will be in your file) with tax_id as a primary key. Then make a second list/table with just the first and second number of each line. (I'd try it out, but with only the 200-line file that doesn't make much sense.)

    The two tables (always assuming you store them in a RDBMS) can then be joined on tax_id.

    Of course, if you don't expect memory problems the same thing can be done in hashes as well.

    Alternatively, you could make a table with your query numbers (what the hell are these numbers anyway?) together with line offsets (i.e., a variant of BrowserUK's solution). As always, storing the values and offsets in a dbms/table will be slower to search than searching them in a hash but it will be less dependent on having enough memory.

    [1] NCBI Taxonomy page: http://www.ncbi.nlm.nih.gov/taxonomy (there is also a ftp link there but the files provided are not in the form of your nice human-readable taxonomy hierarchy-enumerating lines, so you'd have to compile such lines from that data; it seems easier to get them from your own database file.)

      I thought it might be from MeSH at first, but after seeing more data I suspect you’re right. The code I provided, I think, is better (with whatever tweaks the user/dev needs) for search than RDBMS code, and certainly faster.

Re: Using indexing for faster lookup in large file
by fishmonger (Chaplain) on Feb 27, 2015 at 19:09 UTC

    I agree with atcroft about importing the data into a database, but if that's not an option you want to entertain, then another closely related option would be to use the DBD::CSV module to access the data file with sql statements.

Re: Using indexing for faster lookup in large file
by roboticus (Chancellor) on Feb 27, 2015 at 21:04 UTC

    anli_:

    If the data sample is representative (in that there's a lot of repetition in the data), then you may be able to condense it into a much smaller data structure. If you post a larger sample (200-ish lines in readmore tags) I'll take a quick look and see if I can come up with something.

    ...roboticus

    When your only tool is a hammer, all problems look like your thumb.

      Hi, and thanks for taking the time.
      I've included 200 random lines below

        anli_:

        Considering that the bulk of your data appears to be the text representation of various paths through a taxonomy tree, I thought that you might be able to fit it all into memory (taking advantage of all the redundancy) if you built a pair of trees and connected them together at the leaves. For example, if your data looked like this:

        1;2;8;root;xyzzy;cat
        1;2;5;root;xyzzy;dog
        1;9;root;bird

        Then we could build an index tree (on top) and the taxonomy tree (below), tying them together (shown by vertical lines) like this:

                   1
                  / \
                 /   \
                2     \
              --^-     \
             /    \     \
            8      5     9
            |      |     |
           cat    dog   bird
             \    /     /
              xyzzy    /
                 \    /
                  root

        If your tree is relatively shallow but broad, you should be able to save a considerable amount of space.

        Here's some code that builds the two trees and looks up some of the numeric keys to display the data. Let me know if it does the trick for you; I'd be interested in knowing how much memory it takes to hold your database.

        The taxonomy tree has parent links in it to let you get from the leaf to the root of the tree, and the traceback function will walk the tree back to the root for you.

        Update: I added a couple comments to the code to clarify it a little.
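
        A rough sketch of the idea (not the code referred to above; the numeric index tree is flattened to a plain hash keyed on the last number of each record for brevity, and taxonomy names are assumed to occur at a single place in the tree):

        use strict;
        use warnings;

        my %index;      # numeric leaf key -> leaf node in the taxonomy tree
        my %taxonomy;   # taxonomy name   -> shared node with a link to its parent

        while (<DATA>) {
            chomp;
            my @fields  = split /;/;
            my @numbers = grep {  /^\d+$/ } @fields;
            my @names   = grep { !/^\d+$/ } @fields;

            my $parent;                           # build/reuse nodes root -> leaf
            for my $name (@names) {
                $taxonomy{$name} //= { name => $name, parent => $parent };
                $parent = $taxonomy{$name};
            }
            $index{ $numbers[-1] } = $parent;     # the key points at the leaf node
        }

        sub traceback {                           # walk leaf -> root, return the path
            my ($node) = @_;
            my @path;
            while ($node) {
                unshift @path, $node->{name};
                $node = $node->{parent};
            }
            return @path;
        }

        print join( ";", traceback( $index{8} ) ), "\n";   # prints: root;xyzzy;cat

        __DATA__
        1;2;8;root;xyzzy;cat
        1;2;5;root;xyzzy;dog
        1;9;root;bird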

        ...roboticus

        When your only tool is a hammer, all problems look like your thumb.
