PerlMonks  

Re: Confirming what we already knew

by Anonymous Monk
on Mar 06, 2003 at 12:33 UTC


in reply to Confirming what we already knew

it needs to finish all of its work in under 20 hours or so

You definitely chose right in the end. I have a 1 hour rule for Perl: if it takes longer than an hour to run, I rewrite it in another language. How someone can wait 20+ hours for their code to run is beyond me. Mind you, hardware's pretty cheap these days, so if that can solve the problem, I'm all for it.

Replies are listed 'Best First'.
Re: Re: Confirming what we already knew
by Elian (Parson) on Mar 06, 2003 at 15:06 UTC
    I prefer time limit rules to be flexible based on the input data. I had a program that had to walk a directory structure, un-gzip each of the 50M+ files, parse the headers for information, and look each one up in a database. All remotely. Over NFS.

    I'm pretty sure nothing could've managed that one in only an hour... (I, for one, was happy with the 5 day run time it had)

      Situations like that are why I'm glad Sun still exists ;-)
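A minimal Perl sketch of that kind of job, for the curious: walk a tree, un-gzip each file, and pull out the first line as its header (the directory layout, `.gz` naming, and one-line header format are invented here for illustration; the database lookup is left as a comment).

```perl
use strict;
use warnings;
use File::Find;
use IO::Uncompress::Gunzip qw($GunzipError);

# Decompress one file and return its first line as the "header".
sub header_of_gz {
    my ($path) = @_;
    my $z = IO::Uncompress::Gunzip->new($path)
        or die "gunzip failed on $path: $GunzipError";
    my $header = <$z>;    # first line carries the metadata in this sketch
    close $z;
    chomp $header if defined $header;
    return $header;
}

# Walk the tree (e.g. an NFS mount) and collect every .gz file's header.
sub walk_and_collect {
    my ($root) = @_;
    my @headers;
    find(sub {
        return unless -f && /\.gz\z/;
        push @headers, header_of_gz($File::Find::name);
    }, $root);
    return \@headers;    # each of these would then be looked up in the database
}
```

Over NFS the wall-clock time is dominated by the remote reads, not the Perl, which is why no language choice would have made that run fast.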

Re: Re: Confirming what we already knew
by AssFace (Pilgrim) on Mar 06, 2003 at 13:47 UTC
    This does analysis on the closing data of stocks and needs to finish before the next close to be useful - 20 hours was the extreme cutoff; 10 hours is much more favorable, and anything below that is great. (The number of stocks we look at also changes - it is currently below 2000, closer to 1000, but over time it will grow past 2000 as more data is collected. Faster hardware will help with that, but I still wanted to plan around doing about 2000 of them.)
    But like I said, once it gets down to the difference between 30 mins and an hour, it doesn't matter much to me.

    I have a cluster of nodes that currently price out at about $350 each - I could build them even cheaper, but I use silent components to try to reduce noise levels when working near them - and those also tend to use less power in order to be quieter.
    With the cluster it makes it feasible to take "slow" code and spread it out (as long as the task at hand lends itself to that) over several machines and get it done much faster.
    But it is certainly nice to have it run quickly on a single machine and not tie the cluster up for that long - a single node can cruise through it while the other nodes work on other things.
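    Spreading the work out is mostly a matter of dealing the stock list into one chunk per node. A minimal Perl sketch (the function name and round-robin scheme are just one way to do it; shipping each chunk to its node is left out):

```perl
use strict;
use warnings;

# Deal a list of stock symbols round-robin into one chunk per cluster node,
# so each machine can process its share independently.
sub split_among_nodes {
    my ($symbols, $nodes) = @_;
    my @chunks = map { [] } 1 .. $nodes;
    my $i = 0;
    push @{ $chunks[ $i++ % $nodes ] }, $_ for @$symbols;
    return \@chunks;
}
```

    Round-robin keeps the chunks within one symbol of each other in size, which matters when the per-symbol cost is roughly uniform, as it is for closing-price analysis.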
