This works much better :)
This splits the file into 2 GB chunks. I have tested it on the 25-30 ISOs I have stored on my PC and it works great, though sometimes writing performance is a little slow. You can also change the chunk size by changing the value the iterator is compared against.
use strict;
use warnings;

files();

sub files {
    foreach (@ARGV) {
        print "processing $_\n";
        # 'or die' rather than '|| die': '||' binds more tightly than
        # the list comma, so 'open ... || die' would never trigger
        open my $fh, '<', $_ or die "cannot open $_: $!";
        binmode($fh);
        my $num      = '000';
        my $iterator = 0;
        split_file( $fh, $num, $_, $iterator );
    }
}

sub split_file {
    my ( $fh, $num, $name, $iterator ) = @_;
    my $split_fh = "$name" . '.split';
    open( my $out_file, '>', $split_fh . $num )
        or die "cannot open $split_fh$num: $!";
    binmode($out_file);
    while (1) {
        $iterator++;
        my $buf;
        my $len = read( $fh, $buf, 32 );    # read() returns bytes read
        print( $out_file $buf );
        if ( $iterator == 67108864 ) {      # 67108864 * 32 bytes = 2 GiB
            $iterator = 0;
            $num++;
            # pass $iterator along; omitting it leaves it undef
            split_file( $fh, $num, $name, $iterator );
        }
        elsif ( $len != 32 ) {              # short or zero read: EOF
            last;
        }
    }
    close $out_file;
}
Works pretty quickly! It split almost 5 GB in 4.4333 minutes. I do see a decrease in performance sometimes, though other times it writes very quickly. Go ahead and test it on one of your ISOs. What would be the most efficient read/write buffer size?
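On the buffer question: reading only 32 bytes per read() call means tens of millions of system calls per chunk, which is a likely cause of the slowdowns. A sketch of an alternative, counting bytes written instead of iterations and using a much larger buffer (the 1 MiB buffer size and the helper name split_with_buffer are my own choices for illustration, not measured optima):

```perl
use strict;
use warnings;

# Split $in_name into chunks of at most $chunk_size bytes,
# reading $buf_size bytes per read() call. A new chunk file is
# started whenever the next write would overflow the current one,
# so chunks may come in slightly under $chunk_size.
# Assumes $buf_size <= $chunk_size.
sub split_with_buffer {
    my ( $in_name, $chunk_size, $buf_size ) = @_;
    open my $fh, '<', $in_name or die "cannot open $in_name: $!";
    binmode $fh;
    my $num     = '000';
    my $written = 0;
    open my $out, '>', "$in_name.split$num"
        or die "cannot open $in_name.split$num: $!";
    binmode $out;
    while ( my $len = read $fh, my $buf, $buf_size ) {
        if ( $written + $len > $chunk_size ) {
            close $out;
            $num++;       # string increment: '000' -> '001' -> ...
            $written = 0;
            open $out, '>', "$in_name.split$num"
                or die "cannot open $in_name.split$num: $!";
            binmode $out;
        }
        print {$out} $buf;
        $written += $len;
    }
    close $out;
}

# e.g. 2 GiB chunks, 1 MiB reads:
# split_with_buffer( $ARGV[0], 2 * 1024**3, 1024**2 );
```

This keeps the byte-count logic exact while cutting the number of read()/print() calls by a factor of 32768 compared to 32-byte reads; it also drops the recursion, so very large inputs cannot run up the call stack.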