I see the following algorithm, assuming a ~10G file of records of 50-150 bytes, whose ordering may somehow depend on record length:
1. Choose the count of records in the result sample: $n = 1000;
2. Choose the count of records in the "primary" (oversampled) sample: $p = $n * 100;
3. Generate an array of byte positions @pos, uniformly distributed in the interval [1, length of file], satisfying $p == @pos;
4. If it pays off for seeking in the file (forward-only sweep), sort @pos.
5. my @res; my %res;
6. foreach (@pos)
6.1 read the record starting after the $_-th byte and remember $start, which is the position of the first byte of that record in the file
6.2 next if exists $res{$start};
6.3 $res{$start} = 1;
6.4 my $rand = some_uniform_rand()/($start - $_). This is the crucial step: IF the ordering of records somehow depends on record length, it normalizes for that. Records following longer records are hit by more of the random positions, so dividing by the gap lowers their probability of ending up in @res.
6.5 if $n > @res
        insert {weight => $rand, rec => $record} into @res, keeping ascending order by weight
6.6 elsif $rand > $res[0]{weight}
        shift @res and insert {weight => $rand, rec => $record} into @res, keeping ascending order by weight
7. you have @res; a runnable Perl sketch of the whole procedure follows.
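For concreteness, here is a minimal Perl sketch of the steps above. It assumes newline-terminated records; the file name records.dat and the helper insert_sorted are my own inventions, and some_uniform_rand() is taken to be plain rand(). Treat it as a sketch under those assumptions, not a tuned implementation.

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $file = 'records.dat';   # hypothetical input file
    my $n    = 1000;            # step 1: size of the result sample
    my $p    = $n * 100;        # step 2: size of the "primary" sample

    open my $fh, '<', $file or die "open $file: $!";
    my $size = -s $fh;

    # Steps 3 and 4: $p random byte offsets (0-based here; the post's
    # [1, length] is equivalent), sorted so the seeks sweep forward only.
    my @pos = sort { $a <=> $b } map { int rand $size } 1 .. $p;

    my @res;    # step 5: reservoir, kept ascending by weight
    my %seen;   # step 5: offsets of records already considered

    for my $hit (@pos) {    # step 6
        # Step 6.1: seek to the random offset, skip the rest of the
        # record it falls into, and read the record starting after it.
        seek $fh, $hit, 0 or die "seek: $!";
        <$fh>;                      # discard the partial record
        my $start = tell $fh;
        last if $start >= $size;    # probe fell inside the last record
        my $record = <$fh>;

        # Steps 6.2 and 6.3: consider each record at most once.
        next if $seen{$start}++;

        # Step 6.4: weight divided by the gap between probe and record
        # start; records after long records are hit by more probes, so
        # this pushes their weights down to compensate.
        my $rand = rand() / ($start - $hit);

        if (@res < $n) {
            # Step 6.5 (as updated): reservoir not full, insert always.
            insert_sorted(\@res, { weight => $rand, rec => $record });
        }
        elsif ($rand > $res[0]{weight}) {
            # Step 6.6: evict the lowest-weighted record, insert new one.
            shift @res;
            insert_sorted(\@res, { weight => $rand, rec => $record });
        }
    }
    close $fh;

    # Step 7: @res holds up to $n sampled records.
    print $_->{rec} for @res;

    # Hypothetical helper: binary-search insertion keeping the array
    # ascending by weight, so $res[0] is always the minimum.
    sub insert_sorted {
        my ($array, $item) = @_;
        my ($lo, $hi) = (0, scalar @$array);
        while ($lo < $hi) {
            my $mid = int(($lo + $hi) / 2);
            if ($array->[$mid]{weight} < $item->{weight}) { $lo = $mid + 1 }
            else                                          { $hi = $mid }
        }
        splice @$array, $lo, 0, $item;
    }

Two side effects worth knowing: the very first record of the file can never be sampled (every probe skips forward to the next record boundary), and because @pos is sorted, a probe that lands inside the last record simply ends the sweep.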