ig effectively addressed your split issue.
I have a question about pushing 1552 randomly obtained strings from @X_info onto @tmp and then splicing the first 26 from @tmp into @PAR1. (jethro correctly pointed out that @tmp is never reset, so strings keep accumulating on it; still, that doesn't change the randomness.) Why not just generate those first 26 randomly obtained strings, one at a time, and split each as you go? In both cases, 1552 strings with only the first 26 used versus 26 generated directly, the 26 strings are drawn at random from the same superset. Is there a protocol you're following that requires generating all 1552 before grabbing the first 26 for processing? If not, consider the following refactoring:
use warnings;
use strict;

my $runs = 1;    # for testing code

# Program vars
my $chr_X_input = "bootstrap_data.txt";
my $range       = 1552;    # total number of array elements

my $pi_sum;
my $L_sum;
my $differences_sum;
my $coverage_sum;

open my $CHR_X_INPUT, '<', $chr_X_input
    or die "Can't open chromosome X input: $!";
chomp( my @X_info = <$CHR_X_INPUT> );
close $CHR_X_INPUT;

for ( 1 .. $runs ) {
    for ( 1 .. 26 ) {
        my ( $pi, $L, $differences, $coverage ) =
            split /\t/, $X_info[ int( rand($range) ) ];
        $pi_sum          += $pi;
        $L_sum           += $L;
        $differences_sum += $differences;
        $coverage_sum    += $coverage;
    }

    # Compute the ratio once per run, after the sums are complete,
    # instead of recomputing (and discarding) it on every iteration.
    my $PAR1_diversity =
        ( ( $pi_sum / $L_sum ) / ( $differences_sum / $coverage_sum ) );
    print "PAR1 diversity: $PAR1_diversity\n";
}
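One caveat with a refactoring along these lines: if the sum variables are declared outside the run loop, they accumulate across runs with $runs > 1, much like @tmp was never reset in your original. A minimal sketch of scoping the sums inside the run loop so each run is an independent replicate (the three tab-separated rows here are made-up test data standing in for bootstrap_data.txt, not your real input):

```perl
use strict;
use warnings;

# Hypothetical stand-in for bootstrap_data.txt:
# each row is "pi\tL\tdifferences\tcoverage".
my @X_info = ( "2\t100\t5\t50", "4\t200\t10\t100", "6\t300\t15\t150" );
my $range  = scalar @X_info;
my $runs   = 3;

my @results;
for my $run ( 1 .. $runs ) {

    # Declaring the sums here resets them every run,
    # so runs don't contaminate one another.
    my ( $pi_sum, $L_sum, $differences_sum, $coverage_sum ) = (0) x 4;

    for ( 1 .. 26 ) {
        my ( $pi, $L, $differences, $coverage ) =
            split /\t/, $X_info[ int rand $range ];
        $pi_sum          += $pi;
        $L_sum           += $L;
        $differences_sum += $differences;
        $coverage_sum    += $coverage;
    }

    my $PAR1_diversity =
        ( $pi_sum / $L_sum ) / ( $differences_sum / $coverage_sum );
    push @results, $PAR1_diversity;
    print "run $run: PAR1 diversity = $PAR1_diversity\n";
}
```

Because every made-up row has pi/L = 0.02 and differences/coverage = 0.1, each run works out to about 0.2 no matter which rows rand() picks, which makes the reset easy to sanity-check.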
Hope this helps!