file handle limitation of 255

by radnus (Novice)
on Jul 14, 2010 at 18:54 UTC ( [id://849608]=perlquestion )

radnus has asked for the wisdom of the Perl Monks concerning the following question:

How do I overcome the limitation of opening no more than 255 concurrent file handles? I have tried IO::File->new(), open, and sysopen; all of them fail to open more than 255. Note that the ulimit values (hard and soft) are set to 65536, so that is not the issue.
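
For reference, here is a minimal probe (file names are purely illustrative, not from my actual program) that shows exactly where open() starts failing and with what error:

    use strict;
    use warnings;

    # Open handles until something fails, then report how far we got and why.
    my @handles;
    for my $i (1 .. 1000) {
        open my $fh, '>', "/tmp/fh_probe_$i"
            or die "open number $i failed: $!\n";
        push @handles, $fh;
    }
    print "managed to open ", scalar(@handles), " files\n";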

Re: file handle limitation of 255
by Corion (Patriarch) on Jul 14, 2010 at 18:59 UTC

    See FileCache. Your OS has a hard limit on open file handles, and maybe it is 255 no matter what ulimit tells you.
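
    A minimal sketch of how FileCache could be used (the file names and the maxopen value are illustrative); it transparently closes a least-recently-used handle and reopens it later, so interleaved writes to many files still work:

        use strict;
        use warnings;
        use FileCache maxopen => 200;   # keep at most ~200 real handles open at once

        # Note: the first cacheout() of a path truncates the file; later calls
        # reopen it for append, so repeated writes accumulate as expected.
        for my $i (1 .. 1000) {
            my $path = "out_$i.txt";        # illustrative names
            my $fh   = cacheout $path;      # may silently close/reopen other handles
            print {$fh} "record for file $i\n";
        }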

      Hard to believe, considering that I am using Solaris 10. In any case, this restriction is not coming from PERL, but the underlying OS, right?

        Yes. Perl (not PERL) has no limit on the number of open file handles.

        When I logged in as a non-root user on a Solaris 10 system, my default file limit was 256 (somewhat to my surprise), but I was able to change it easily enough, as shown in the shell session below:

        # ulimit -n
        256
        # ulimit -n 1024
        # ulimit -n
        1024

Re: file handle limitation of 255
by johngg (Canon) on Jul 14, 2010 at 22:04 UTC

    This web page may have some helpful information.

    Cheers,

    JohnGG

Re: file handle limitation of 255
by graff (Chancellor) on Jul 15, 2010 at 00:45 UTC
    The limit on the number of open file handles is the kind of thing that would traditionally force programmers to be creative in finding algorithms that scaled nicely despite the limit.

    For example, if you store your output in an accumulating data structure instead of keeping hundreds of files open, and you modularize the process of adding content to that structure, the only extra planning needed is a threshold that decides when a given file's data gets dumped to disk so that its chunk of memory can be reused. It's a matter of striking a balance between the amount of memory consumed and the number of times files must be opened and closed.
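
    A minimal sketch of that buffer-and-flush idea (the threshold and file names are illustrative, not working code from my own programs):

        use strict;
        use warnings;

        my %buffer;                    # file name => accumulated output
        my $threshold = 64 * 1024;     # flush a file's buffer once it reaches ~64 KB

        sub add_record {
            my ($file, $line) = @_;
            $buffer{$file} .= $line;
            flush_file($file) if length($buffer{$file}) >= $threshold;
        }

        sub flush_file {
            my ($file) = @_;
            return unless defined $buffer{$file} && length $buffer{$file};
            open my $fh, '>>', $file or die "open $file: $!";
            print {$fh} $buffer{$file};
            close $fh or die "close $file: $!";
            $buffer{$file} = '';
        }

        sub flush_all { flush_file($_) for keys %buffer }

        # Hypothetical usage: a thousand output files, but never more than one
        # handle open at a time.
        add_record( "out_$_.txt", "line for file $_\n" ) for 1 .. 1000;
        flush_all();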

      THANKS AGAIN!!! Yes, there are ways to overcome this by working around the disadvantage. But first, you have to realize that this is an issue :)
Re: file handle limitation of 255
by cdarke (Prior) on Jul 15, 2010 at 08:20 UTC
    I had this problem a few years ago, though it is rare. My limit was more like 20 file descriptors rather than 255. I was writing supporting code for an app that had been ported from a mainframe, and a redesign was not an option.

    The limit is associated with the process, so what I did was spawn 'worker' processes to do the IO for me, along with some of the processing. Essentially I split the task, with a management process that coordinated everything. It was a redesign for my part of the app, but it worked very well and was scalable. Eventually I extended it to allow local processing and IO on different machines (communication used INET sockets).

    So, to beat a limitation of a single process, just create more!
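
    A rough sketch of that shape (the details here are illustrative, not my actual code): a manager process forks a few workers, each worker owns a slice of the output files, and the manager sends it records over a pipe, so no single process ever holds all the handles.

        use strict;
        use warnings;
        use IO::Handle;                     # for autoflush on the pipe handles

        my $workers = 4;                    # illustrative worker count
        my @to_worker;                      # write ends of the pipes, indexed by worker

        for my $w (0 .. $workers - 1) {
            pipe(my $read, my $write) or die "pipe: $!";
            defined(my $pid = fork) or die "fork: $!";
            if ($pid == 0) {                # child: do the actual file IO
                close $write;
                close $_ for grep { defined } @to_worker;   # drop earlier write ends
                my %fh;
                while (my $msg = <$read>) {
                    chomp $msg;
                    my ($file, $data) = split /\t/, $msg, 2;
                    unless ($fh{$file}) {
                        open $fh{$file}, '>>', $file or die "open $file: $!";
                    }
                    print { $fh{$file} } "$data\n";
                }
                close $_ for values %fh;
                exit 0;
            }
            close $read;                    # parent keeps only the write end
            $write->autoflush(1);
            $to_worker[$w] = $write;
        }

        # Manager: route each record to the worker responsible for its file.
        for my $i (1 .. 1000) {
            my $w = $i % $workers;          # 1000 files / 4 workers = 250 handles each
            print { $to_worker[$w] } "out_$i.txt\trecord $i\n";
        }

        close $_ for @to_worker;            # workers see EOF and exit
        wait for 1 .. $workers;             # reap the children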

      This problem of the 255 limit is independent of ulimit. I set this in /etc/system:

          set rlim_fd_max=65536
          set rlim_fd_cur=65536

      so that I get:

          host# ulimit -n
          65536

      But I still have the 255 limitation.

        A workaround on Solaris 10 (update 4?) and later is to run the application with this environment variable set:
        LD_PRELOAD_32=/usr/lib/extendedFILE.so.1
        The 255 limit typically comes from the 32-bit Solaris stdio implementation, which stores the file descriptor in an 8-bit field; extendedFILE.so.1 lifts that restriction for stdio streams.
        For details, see http://blogs.sun.com/mandalika/entry/solaris_workaround_to_stdio_s
Re: file handle limitation of 255
by TedPride (Priest) on Jul 15, 2010 at 05:13 UTC
    Most of these sorts of problems can best be solved by telling us WHY - why, in this case, you want to open more than 255 file handles. If we assume for the sake of argument that you need to read one line at a time from each of thousands of files, you could write a function that keeps track of where you are in each file and closes/opens files on a priority basis - something like the following hack:
    use strict;

    my $limit = 250;    ### Leaving a few for main program
    my $line;

    $line = readFile('inp1.txt');
    $line = readFile('inp2.txt');
    $line = readFile('inp3.txt');
    ### etc
    flushFiles();

    {
        my %files;      ### Structure containing handle, position, etc.
        my $fcount;     ### Number of open files
        my $opened;     ### Sequential number to tell order opened

        sub readFile {
            my $file = $_[0];
            my ($closer, $handle, $line);

            ### Need to open file, but too many open
            if ((!$files{$file} || !$files{$file}{'handle'}) && $fcount == $limit) {

                ### Find best file to close
                $closer = (sort {
                    $b->{'open'}     <=> $a->{'open'}     ||  ### Must be open
                    $a->{'accessed'} <=> $b->{'accessed'} ||  ### Accessed least
                    $a->{'opened'}   <=> $b->{'opened'}       ### Opened earliest
                } values %files)[0];

                close $closer->{'handle'};
                $closer->{'handle'} = undef;
                $closer->{'open'}   = undef;
                $fcount--;
            }

            ### Need to open file, maybe seek old position
            if (!$files{$file}{'handle'}) {
                return if !open($handle, $file);
                $files{$file}{'handle'} = $handle;
                $files{$file}{'open'}++;
                seek($handle, $files{$file}{'position'}, 0)
                    if $files{$file}{'position'};
                $files{$file}{'opened'} = ++$opened
                    if !$files{$file}{'opened'};
                $fcount++;
            }
            else {
                $handle = $files{$file}{'handle'};
            }
            $files{$file}{'accessed'}++;

            return if !($line = <$handle>);

            ### If we successfully read data from file
            $files{$file}{'position'} += length $line;
            return $line;
        }

        sub flushFiles {
            for (values %files) {
                close $_->{'handle'} if $_->{'handle'};
            }
            %files  = ();
            $fcount = undef;
        }
    }
