http://qs321.pair.com?node_id=31600

petemar1 has asked for the wisdom of the Perl Monks concerning the following question:

As shown here, I'm pulling individual HTML files into a single page using CGI. The files are pulled in alphanumeric order, but I want to pull them in random order. I know I need srand and some properly written function, but what am I missing?
#!/usr/local/bin/perl
srand;
print "Content-type: text/html\n\n";
$n = 1;
foreach $file (<random/*.html>) {
    open(FILE, "<$file") || &ErrorMessage;
    @file = <FILE>;
    print @file;
    close(FILE);
}
- m peters - www gwangwa com -

Replies are listed 'Best First'.
RE: random sort of list
by BlaisePascal (Monk) on Sep 08, 2000 at 19:22 UTC
    I'd suggest using glob() to grab the file listing, randomizing that with the permutation code that floats around here a lot, and then doing the foreach on that...
    print "Content-type: text/html\n\n";
    $n = 1;
    @files = glob("random/*.html");
    fisher_yates_shuffle(\@files); # see the Perl Cookbook for this function
    foreach $file (@files) {
        ...
    }
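A minimal sketch of that approach, with the Perl Cookbook's fisher_yates_shuffle written out (the directory name random/ is taken from the original question):

```perl
#!/usr/local/bin/perl
use strict;

# Fisher-Yates shuffle, as given in the Perl Cookbook: walk the
# array from the end, swapping each element with a randomly chosen
# element at or before it. Takes an array reference and shuffles
# in place.
sub fisher_yates_shuffle {
    my $array = shift;
    for (my $i = @$array - 1; $i > 0; $i--) {
        my $j = int rand($i + 1);
        @$array[$i, $j] = @$array[$j, $i];
    }
}

my @files = glob("random/*.html");
fisher_yates_shuffle(\@files);
foreach my $file (@files) {
    print "$file\n";
}
```

Every permutation is equally likely because at each step the swap target is drawn uniformly from the not-yet-fixed portion of the array.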
RE (tilly) 1: a random sort of list
by tilly (Archbishop) on Sep 08, 2000 at 19:47 UTC
    You don't need a call to srand in Perl since 5.004.

    As for your question, to scramble an array just use some of the code discussed at Randomize an array.

      As I understand it, you don't have to call srand because Perl seeds it to time for you, but if you are writing a script where you want to use rand for some attempt* at security then you should re-seed it yourself with something more obscure. I usually use some combination of time and place, er, pid. Although obviously you don't want to just xor the two.

      *I say attempt because in the end nothing is truly secure and I don't want to start a debate on it.

        Yes. But getting a good random seed can be hard.

        On systems that have it, sample /dev/random. Frequent CGI scripts might run out of entropy though. CPAN has some cryptography modules that do a similar thing with similar limits.
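A sketch of sampling the device from Perl, assuming a Unix-like system where /dev/urandom exists (and falling through to Perl's default seeding where it doesn't):

```perl
#!/usr/local/bin/perl
use strict;

# Read 4 bytes from /dev/urandom and turn them into an unsigned
# 32-bit integer suitable for srand. Unlike /dev/random, urandom
# never blocks waiting for entropy, which matters for frequently
# run CGI scripts.
sub random_seed {
    my $seed;
    if (open my $fh, '<', '/dev/urandom') {
        my $bytes;
        read($fh, $bytes, 4) == 4 or die "short read: $!";
        close $fh;
        $seed = unpack 'L', $bytes;
    }
    return $seed;    # undef means "let Perl pick its own seed"
}

my $seed = random_seed();
srand($seed) if defined $seed;
print int rand(100), "\n";
```

The fallback matters: returning undef rather than dying keeps the script portable to systems without the device.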

        If you need a *lot* of random data, make a large file from something volatile (/dev/kmem is a good source), compress it, throw away the start and encrypt *that*. Compress again and throw away the start if you are paranoid. Then sample.

        The reason why this works is that perfectly random data is mathematically identical to data with an information rate of 1 (one bit of info per bit of data). So you start with data people cannot easily determine. Compression tries to increase your information rate so it becomes closer to random. (Modulo necessary signatures.) However, given the type of information there will be recognizable artifacts. Encryption tries to scramble your information unrecognizably. The result is unpredictable data that should be very close to looking like white noise.

        ObTrivia: Virtually any form of encryption, even very weak ones (eg the pathetic standard Unix crypt) will be much harder to break if you first compress the data stream.

        Actually, modern Perl (since 5.004) does much better than srand(time()). The probable seed looks something like:

        srand 1000003*time() + 3*$usec + 269*$$ + 73819*${undef} + 26107*\$x
        where ${undef} is whatever integer is left on the stack and \$x is a pointer into the stack. Note that the above Perl code doesn't actually work; it is just an approximation of what the C code inside Perl is doing.

        On systems with /dev/urandom, that is just used instead, which is pretty good. Use /dev/random if you have it, though you may have to wait for enough entropy to gather. But back to the case of systems without /dev/*random...

        Although the code is described as a "quick hack" (because it doesn't do some fancy summing but just multiplies and adds), it would be hard to do much better portably from within a Perl script.

        But this still isn't enough for cryptographic uses. Repeated runs of the same script might well yield the same values for the "what is left on the stack" and the "address into the stack" while the other values can be predicted to a certain extent.

        So if you come up with something that seems really hard to predict, just add it into Perl's seed rather than replacing it. In other words:

        srand( fancyseed() );
        is probably not nearly as good of an idea as, for example:
        srand( rand(~0) ^ fancyseed() );
        Suggestions for better ways to add randomness in are welcome.
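For instance, here is a runnable sketch of that mixing idea. The body of fancyseed() below is purely illustrative (time XOR shifted pid, names invented here); the point is the combining step, not the entropy source:

```perl
#!/usr/local/bin/perl
use strict;

# Hypothetical extra entropy source -- illustrative only, not
# actually hard to predict.
sub fancyseed {
    return time() ^ ($$ << 15);
}

# Mix the extra entropy into Perl's own seed instead of replacing
# it: rand(~0) draws from the already-seeded generator, so XOR-ing
# fancyseed() into it can add unpredictability but cannot remove
# what Perl's default seed already provided.
srand( rand(~0) ^ fancyseed() );
print int rand(1000), "\n";
```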

        The documentation on srand() in perlfunc.pod is also worth reading.

                - tye (but my friends call me "Tye")
Re: a random sort of list
by little (Curate) on Sep 09, 2000 at 03:15 UTC
    #!/usr/local/bin/perl
    use strict; # always use strict
    use Carp;
    my @file;
    my $file;
    # instead of srand, simply choose randomly from the number of your sort methods
    my $sorttype = int(rand(2));
    my @methods = ("sort", "reverse sort");
    # better to leave output to CGI.pm, I'd suggest
    print "Content-type: text/html\n\n";
    my $n = 1;
    # your dir, including the path to it relative to this script's path
    my $dir = "../random/";
    # read the dir and put all the files into a list
    # even if the sample above using glob also looks interesting :-)
    opendir(DIR, $dir) || die "blah!";
    my @allfiles = readdir DIR;
    closedir DIR;
    # remove those files from the list that don't seem to be html
    my @randomlist = grep /\.(s|p)?html?$/i, @allfiles;
    # here specify your sort method for the remaining files
    my $do = $methods[$sorttype] . ' @randomlist;';
    @randomlist = eval($do);
    # start output
    foreach $file (@randomlist) {
        my $tmp = $dir . $file;
        open(FILE, "<$tmp") || die "failed";
        @file = <FILE>;
        print @file;
        close(FILE);
    }
    # that's it
    so far just for now