Re^2: System call doesn't work when there is a large amount of data in a hash

by Nicolasd (Acolyte)
on Apr 28, 2020 at 21:26 UTC


in reply to Re: System call doesn't work when there is a large amount of data in a hash
in thread System call doesn't work when there is a large amount of data in a hash

Thanks for the reply. The Perl version is v5.26.2 and the OS is CentOS 7. I need these large hashes to store genetic data; it's for a genome assembly tool. I want to add a new module, but I need a system call for that, and I can't get it to work when I run it on large datasets. I am not an informatician, so I have limited knowledge. Any help would be greatly appreciated. https://github.com/ndierckx/NOVOPlasty

Replies are listed 'Best First'.
Re^3: System call doesn't work when there is a large amount of data in a hash
by 1nickt (Canon) on Apr 29, 2020 at 01:53 UTC

    Hi,

    " I need these large hashes to store genetic data in a hash"

    That's a bit like saying " I need these hashes because I need these hashes."

    Does your "genome assembly tool" accept Perl data hashes as input? Of course it does not. Therefore you must be somehow serializing your massive input to the program in your system call. Perhaps you need to write a file, or provide a data stream to a server? As noted by my learned colleague swampyankee, it's hard to conceive of why you need to store 250Gb of data in an in-memory hash. There are myriad techniques to avoid doing so, depending on your context; why don't you explain a bit more about that, and show some code?
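    For instance, here is a minimal sketch (not your code) of feeding records to an external tool through a pipe, one at a time, rather than accumulating them in a giant hash; the tool name assembler_step and the input file reads.fastq are only placeholders:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Hypothetical external tool and input file; adjust to your pipeline.
        my $tool  = 'assembler_step';
        my $reads = 'reads.fastq';

        open my $in,  '<',  $reads or die "Can't read '$reads': $!";
        open my $out, '|-', $tool  or die "Can't start '$tool': $!";

        while ( my $line = <$in> ) {
            # transform or filter each record here instead of storing it
            print {$out} $line;
        }

        close $in;
        close $out or die "'$tool' failed: $!/$?";

    Whether that fits depends on whether the external step can consume a stream; if it needs a file, writing a temporary file and passing its name works the same way.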

    Hope this helps!


    The way forward always starts with a minimal test.
Re^3: System call doesn't work when there is a large amount of data in a hash
by marto (Cardinal) on Apr 29, 2020 at 07:41 UTC

    I'm not a bioinformatician either, but that repo has some problems: filenames using the : character, and a single Perl file > 1MB with over 23K lines, a quick glance at which shows room for improvement. I'm not sure if part of the relatively popular BioPerl suite of tools can address your requirements. Regardless, all of this is good advice. You don't need to store everything in memory, even if you are just planning to call some external command line tool. Consider an alternative such as a database.
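    As a rough sketch of the database/tied-hash route (assuming the DB_File module and a Berkeley DB library are available; kmers.db is a made-up file name), the %hash interface stays the same but the data lives on disk rather than in RAM:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Fcntl;      # for O_RDWR / O_CREAT
        use DB_File;    # ties a hash to an on-disk Berkeley DB file

        # The tied hash behaves like a normal Perl hash, but entries are
        # stored in kmers.db instead of the process's memory.
        my %kmers;
        tie %kmers, 'DB_File', 'kmers.db', O_RDWR|O_CREAT, 0644, $DB_HASH
            or die "Cannot open kmers.db: $!";

        $kmers{'ACGTACGT'} = 42;          # store
        print $kmers{'ACGTACGT'}, "\n";   # retrieve

        untie %kmers;

    Lookups go to disk (with Berkeley DB's own caching), so it is slower than a pure in-memory hash, but it copes with datasets that do not fit in RAM.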

      I know I could have written it better; it's a bit of a mess, but it works great, so that's the most important thing. And I really need that hash, because I need to access that data all the time; a database would be too slow. Which file is using the : character?

      Could it be that the system call duplicates everything that is in virtual memory to start the child process? If that is the case, I guess I just can't do system calls. Any idea if there is another way?

        Without seeing your code, it will be very hard to suggest how to make it do what you want.

        You have discarded all the obvious things that would make it easier, because you say that you really need this.

        Ideally, you show us some minimal code that reproduces the problem so that we can run it ourselves. For example, the following could be a start:

        #!perl
        use strict;
        use warnings;

        my $memory_eaten = 8 * 1024 * 1024 * 1024; # 8GB, adjust to fit
        my %memory_eater = (
            foo => scalar( " " x $memory_eaten ),
        );

        my $cmd = "foo bar";
        system($cmd) == 0
            or die "Couldn't launch '$cmd': $!/$?";

        Updated: Actually make the hash eat memory by creating a long string

        "Which file is using the : character?"

        Download the repo as a zip file and try to extract it under Windows; it'll report a bunch of problems caused by 'invalid' characters in filenames.

        Could it be that the system call duplicates everything that is in the virtual memory to start the sister process?

        In theory, fork (used to implement system) does exactly that. Modern kernels with virtual memory will set up copy-on-write (COW) mappings instead of actually copying the entire address space, but this still (usually) requires duplicating the page tables, which for 256 GiB of data with 4 KiB pages themselves occupy 512 MiB or so. Could you be bumping up against a resource limit? (Look up ulimit for more information.)
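        A quick way to inspect those limits from inside the already-running Perl process, without spawning anything, is a sketch like the following (it assumes the CPAN module BSD::Resource is installed):

            #!/usr/bin/perl
            use strict;
            use warnings;
            use BSD::Resource qw(getrlimit RLIMIT_AS RLIMIT_DATA);

            # Report the soft/hard limits that most often cap the address
            # space a fork()ed child is allowed to set up.
            for my $res ( [ RLIMIT_AS,   'RLIMIT_AS (virtual memory)' ],
                          [ RLIMIT_DATA, 'RLIMIT_DATA (data segment)' ] ) {
                my ( $which, $name ) = @$res;
                my ( $soft, $hard )  = getrlimit($which);
                printf "%-30s soft=%s hard=%s\n", $name, $soft, $hard;
            }

        If the soft limit is well below the parent's virtual size at the moment of the system call, the fork behind it can fail even though copy-on-write means little real memory is actually needed.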
