Working with Hashes

by LovePerling (Initiate)
on Jun 12, 2016 at 11:28 UTC ( [id://1165414] )

LovePerling has asked for the wisdom of the Perl Monks concerning the following question:

Hi Monks,

I have a file containing multiple hashes, say the file name is my_hash_list.pl.

I want to write a script, say fetch_hash.pl, that fetches one of those hashes based on the parameter passed to it. For example, my_hash_list.pl may contain:

my %first_hash  = ("fruit" => "banana", "Vegetable" => "tomato");
my %second_hash = ("work"  => "office", "family"    => "home");
my %third_hash  = ("TV"    => "sony",   "Phone"     => "apple");

Now I want to get the hash from my second file, fetch_hash.pl, by executing:

perl fetch_hash.pl first_hash

perl fetch_hash.pl second_hash

perl fetch_hash.pl third_hash

Can anyone help me with this? Thanks in advance.

Replies are listed 'Best First'.
Re: Working with Hashes (stored in other files)
by stevieb (Canon) on Jun 12, 2016 at 12:57 UTC

    This sounds like data sharing, for which I'd use JSON (it's cross-language).

    I've included an example script, write.pl, that shows how to write the JSON to a file.

    I then show the resulting JSON data in the file, then a script fetch_hash.pl that reads in the data using a command line argument.

    write.pl

    use warnings;
    use strict;
    use JSON;

    my %first_hash  = (fruit => 'banana', Vegetable => 'tomato');
    my %second_hash = (work  => 'office', family    => 'home');

    my %hashes = (
        first_hash  => \%first_hash,
        second_hash => \%second_hash,
    );

    my $json = encode_json \%hashes;

    open my $fh, '>', 'data.json' or die $!;
    print $fh $json;

    data.json (JSON file)

    {"second_hash":{"work":"office","family":"home"},"first_hash":{"Vegeta +ble":"tomato","fruit":"banana"}}

    fetch_hash.pl

    use warnings;
    use strict;
    use Data::Dumper;
    use JSON;

    # require exactly one of the known hash names as the argument
    if (!@ARGV || ($ARGV[0] ne 'first_hash' && $ARGV[0] ne 'second_hash')) {
        print "need first_hash or second_hash as arg\n";
        exit;
    }

    my $want = $ARGV[0];
    my $file = 'data.json';

    my $json;
    {
        local $/;
        open my $fh, '<', $file or die $!;
        $json = <$fh>;
        close $fh;
    }

    my $data = decode_json $json;

    print Dumper $data->{$want};

    Output:

    $ perl fetch_hash.pl first_hash
    $VAR1 = {
              'fruit' => 'banana',
              'Vegetable' => 'tomato'
            };

    $ perl fetch_hash.pl second_hash
    $VAR1 = {
              'work' => 'office',
              'family' => 'home'
            };

      Thanks for the code.

      Unfortunately, in my environment I do not have the JSON package installed.

      Can you help me with the steps for getting the JSON package installed?

      Thanks again for your help.

        Above is the reply from my side, I replied without logging in :) ..
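      One possible workaround, assuming a reasonably recent Perl: JSON::PP has shipped with core Perl since 5.14 and provides the same encode_json/decode_json functions, so the scripts above can use it without installing anything. A minimal sketch:

      use warnings;
      use strict;
      use JSON::PP qw(encode_json decode_json);   # core module on 5.14+, no install needed

      my %hashes = ( first_hash => { fruit => 'banana', Vegetable => 'tomato' } );

      my $json = encode_json \%hashes;   # Perl data -> JSON text
      my $data = decode_json $json;      # JSON text -> Perl data

      print $data->{first_hash}{fruit}, "\n";   # prints "banana"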
Re: Working with Hashes
by graff (Chancellor) on Jun 12, 2016 at 13:45 UTC
    If you tell us a little more about the kind of application you are trying to build, rather than a particular technique you are thinking of using, you will probably get more helpful answers.

    I gather you want a command-line script whose behavior will depend on the command-line arguments, and the different behaviors will depend on information that you want to store in distinct files that are separate from the command-line script. That's all fine and easy.

    But you seem to want these extra files to be structured as perl code to define hashes. Is there some reason for not wanting to use some other common data file format, such as yaml or json or even xml? (There are good perl modules for these formats, making it easy to store and load any perl data structure you want via a data file.) So long as it's just a matter of selecting one set of data vs. another (including, say, one hash structure vs. another hash structure), reading different data files is the normal way (rather than selecting one vs. another block of perl code to be loaded and parsed after the main script has started running).
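    For illustration, here is a minimal sketch of that data-file approach, assuming the YAML module is installed; the file name my_hashes.yml and its contents are made up for the example:

    use warnings;
    use strict;
    use YAML qw(LoadFile);   # assumes the YAML module is available

    # my_hashes.yml might contain:
    #   first_hash:
    #     fruit: banana
    #     Vegetable: tomato
    #   second_hash:
    #     work: office
    #     family: home

    my $want = shift @ARGV or die "usage: $0 <hash_name>\n";

    my $data = LoadFile('my_hashes.yml');   # returns a hash reference

    die "no hash named '$want'\n" unless exists $data->{$want};

    my %hash = %{ $data->{$want} };
    print "$_ => $hash{$_}\n" for sort keys %hash;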

    For any application where you might want to use one vs. another type of data structure (e.g. hash vs. array), and/or one vs. another set of subroutines, the normal way to do that is to create one or more object-oriented modules, such that there's always a consistent interface between the command-line script and each of the possible (objectified) things that it is supposed to know or do based on command-line args.

    Normally, it's fine to load all the alternative modules (via use MyModuleX; use MyModuleY; ...), so that all possible behaviors are accessible on any given run, no matter what the command-line args may be. But if you have a compelling reason to want only the selected data and/or code to be loaded for a given run, just decide how your command-line args will relate to module loading, and then do this: require MyModuleX; (or whatever module), instead of use.
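    A small sketch of that last point, with placeholder module names (MyModuleX and MyModuleY stand in for whatever modules you actually write); the command-line argument decides which module gets loaded at run time via require:

    use warnings;
    use strict;

    # Map command-line choices to module names (placeholders for illustration).
    my %module_for = (
        x => 'MyModuleX',
        y => 'MyModuleY',
    );

    my $choice = shift @ARGV || '';
    my $module = $module_for{$choice}
        or die "usage: $0 x|y\n";

    # Load only the selected module at run time (use would load it at compile time).
    (my $path = $module) =~ s{::}{/}g;
    require "$path.pm";
    $module->import if $module->can('import');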

      Hi,

      Thanks for the reply.

      Basically, I want to write a utility that will cater to multiple scripts and their threshold values.

      So my utility will have add/update/fetch switches: add takes user input for the various threshold values and pushes it into a separate file (say my_hashes.pl) in the form of a hash (that's what I was planning to use until now).

      Fetch and update will take an argument, look for the same-named hash in the previously created file (my_hashes.pl), and display/use or update it.
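      For what it's worth, here is a rough sketch of how such a utility could look if the thresholds were kept in a JSON file instead of a .pl file; the file name thresholds.json and the switch names below are just assumptions for the example:

      use warnings;
      use strict;
      use JSON;
      use Data::Dumper;

      my $file = 'thresholds.json';

      # e.g.  perl utility.pl add cpu_limits warn 80 crit 95
      #       perl utility.pl fetch cpu_limits
      my ($action, $name, %values) = @ARGV;
      die "usage: $0 add|update|fetch <name> [key value ...]\n"
          unless $action && $name;

      # Load the existing data file if it is there, otherwise start empty.
      my $data = {};
      if (-e $file) {
          open my $in, '<', $file or die $!;
          local $/;
          $data = decode_json(<$in>);
          close $in;
      }

      if ($action eq 'add' or $action eq 'update') {
          # merge the new key/value pairs into the named entry and save
          $data->{$name} = { %{ $data->{$name} || {} }, %values };
          open my $out, '>', $file or die $!;
          print $out encode_json($data);
          close $out;
      }
      elsif ($action eq 'fetch') {
          die "no entry named '$name'\n" unless exists $data->{$name};
          print Dumper $data->{$name};
      }
      else {
          die "unknown action '$action'\n";
      }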

        Above is the reply from my side, I replied without logging in :) ..
Re: Working with Hashes
by QuillMeantTen (Friar) on Jun 12, 2016 at 11:52 UTC

    Update:

    • I thought you wanted the hashes stored in another file; if that's not the case, then have a look at Getopt::Long for argument passing.

      else you can just test $ARGV[0].
    • Disregard above, I got it right the first time... I think?
    • Updated the part about a separate conf language for clarification


    • It seems like it boils down to HOW you are storing your hashes.
      If you are only interested in pure data storage, then you might want to have a look at the YAML format. If you want to be able to extend your application and don't care about the computation cost, create a package, load it, and export whatever you need.

      On the other end of the spectrum (I put this possibility last for a good reason), if you want to use said hashes for configuration purposes but still need some logic to be applied to them (from within, so you can have a consistent API, thus blurring the line between configuration files and "proto plugins"), you might want to have a look at a specialised language (Lua) and its just-in-time compiler. As it should give you C-like performance, it's something to explore as well. You can even get it to play nice with Perl.

      That's it for the documentation part; what have you tried? You could easily use do after filtering the file content to select what you want to execute (a sketch of that approach follows at the end of this reply), or (cleaner, IMO) go the package way.

      Hope that helps; keep in mind it's only my brand of fishing, so you might want to keep asking around.
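      To make the do suggestion above concrete, here is a minimal sketch. It assumes my_hash_list.pl is rewritten so that its last expression returns the data, since lexical my variables declared inside the file would not be visible to the caller after do:

      my_hash_list.pl (rewritten)

      # The last expression in the file is what do() returns.
      my %hashes = (
          first_hash  => { fruit => 'banana', Vegetable => 'tomato' },
          second_hash => { work  => 'office', family    => 'home'   },
          third_hash  => { TV    => 'sony',   Phone     => 'apple'  },
      );
      \%hashes;

      fetch_hash.pl

      use warnings;
      use strict;
      use Data::Dumper;

      my $want = shift @ARGV or die "usage: $0 <hash_name>\n";

      # do runs the file and hands back the value of its last expression.
      my $hashes = do './my_hash_list.pl'
          or die "could not load my_hash_list.pl: ", $@ || $!;

      die "no hash named '$want'\n" unless exists $hashes->{$want};
      print Dumper $hashes->{$want};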
