perl -Mstrict -Mwarnings -Mutf8 -CSD -e '
  my $c = 0;
  my $i = 1;
  my $fn = sprintf("%010d", $i);
  open(FH, ">", $fn) or die "open $fn: $!";
  while (<>) {
    while (/(\X)/g) {             # \X matches one grapheme cluster ("character")
      print FH $1;
      if (++$c % 3000 == 0) {     # start a new output file every 3000 characters
        close(FH);
        $fn = sprintf("%010d", ++$i);
        open(FH, ">", $fn) or die "open $fn: $!";
      }
    }
  }
  close(FH);
' < input.txt
The important switch is -CSD: the 'D' tells perl to use UTF-8 by default for all file handles you open (for reading or writing), and the 'S' tells it to use UTF-8 on the standard handles STDIN, STDOUT and STDERR (e.g. when printing diagnostics). More or less (see perlrun for the details). -Mutf8 (i.e. use utf8) matters when your own source code contains UTF-8 string literals: with it, checking the length of a Unicode string counts characters; without it, length() counts bytes.
Update: btw, "%010d" in sprintf() pads the file index with zeros to a width of 10 digits, which means you get filenames 0000000001, 0000000002, etc. You said long files, and perl imposes no limits there. btw2, it reads the input character by character (grapheme by grapheme, to be exact) and counts them up to 3000 per output file. Although Perl's I/O is usually buffered, you may get better performance by slurping the file all at once, if you can afford the RAM, and processing it as before.
Update: Tux's solution in Re: how to split a file.txt in multiple text files is way better than mine.
bw, bliako