The scalar was a typo; this particular field required an @array just like you said, but it still only takes in 18 lines of data. That's not really an issue, I was just curious whether there is a maximum amount of data STDIN can handle, in general. It's not an important script that I was writing, just something I was playing with that sparked my curiosity. Thanks for the ideas, though, and ++ to you all!
--
Yes, I am a criminal.
My crime is that of defyance.
But if it's to be a 'small' script, then using @var will occupy a lot more memory than the loop would... Everything depends on what you mean by 'quite a bit of data' :-)
Greetz, Tom.
See, I'm not even worried about the script, I'll make it work, that's not a prob. I was just wondering if anyone knew of any limitations on the amount of data that STDIN can handle. I probably should have made that clearer.
And tmiklas, you're right about it occupying more memory, but as you know, an @var holds multiple scalars, which is what I want in order to split the input data.
STDIN shouldn't have a per-line limit.
However, there may be a per-process limit to the amount of memory that you can use. Check out ulimit(1).
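For example, on most Unix shells you can inspect those limits directly (a sketch; the exact flags available vary between shells and systems):

```shell
# Show all per-process resource limits for the current shell.
ulimit -a

# Just the virtual memory limit, in kilobytes ("unlimited" if unset).
ulimit -v
```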
The little snippet below will print a string of about 80MB, then read it back into a single scalar in perl. I just tried this out on my workstation, and it seems to work just fine.
perl -e 'print "a"x85000000' | \
perl -e 'my $line = <STDIN>; print length($line), "\n";'