PerlMonks  

Re: (OT) Programming languages for multicore computers

by Jenda (Abbot)
on May 04, 2009 at 14:58 UTC


in reply to (OT) Programming languages for multicore computers

How often do you think it can be expected that your program will have the whole of the computer to (ab)use? How often do you think the computer doesn't have several other tasks to run?

IMnsHO, for most applications one core will be plenty, and the multiple cores will be used by multiple processes. For most of the rest, splitting the task into a few parts and running several instances of the program will be both good enough and easiest. So no, I do not think "multicore" programming is something most developers will need to care about in the near-to-mid-term future.

There is one big problem with (even partially) automatic concurrency. If you split the task into too many pieces that are too small, you waste more than you gain. A perfect example would be the parallelization of something like @result = map {$_*2} @source;. The work done per item is simply too simple and inexpensive. Even if you split the (huge) @source into just a few parts and parallelized those, the overhead would swamp any gain. But once the chunks get sufficiently big, they usually also get too complex to parallelize automatically. So I'm a little skeptical.
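To make the chunking trade-off concrete, here is a minimal sketch (my own, not from the post) of the "split into a few big parts" approach: a fork-based parallel map that hands each worker one large slice of @source rather than parallelizing per element. The name parallel_map and the worker count are hypothetical; only core modules (List::Util, Storable) are used.

```perl
use strict;
use warnings;
use List::Util qw(min);
use Storable qw(freeze thaw);

# Map $code over @source using up to $workers forked children,
# one big contiguous slice per child. Results come back through pipes.
sub parallel_map {
    my ($code, $workers, @source) = @_;
    # Too little work to be worth forking: fall back to a plain map.
    return map { $code->($_) } @source if $workers <= 1 or @source < $workers;

    my $chunk = int((@source + $workers - 1) / $workers);  # ceil division
    my (@pids, @readers);
    for my $w (0 .. $workers - 1) {
        my $lo = $w * $chunk;
        last if $lo > $#source;
        my $hi = min($#source, $lo + $chunk - 1);
        pipe(my $r, my $wfh) or die "pipe: $!";
        binmode $_ for $r, $wfh;    # Storable output is binary
        my $pid = fork();
        die "fork: $!" unless defined $pid;
        if ($pid == 0) {            # child: process one big slice, send it back
            close $r;
            print {$wfh} freeze([ map { $code->($_) } @source[$lo .. $hi] ]);
            close $wfh;
            exit 0;
        }
        close $wfh;                 # parent keeps only the read end
        push @pids,    $pid;
        push @readers, $r;
    }
    my @result;
    for my $r (@readers) {          # read slices back in order, so order is kept
        local $/;                   # slurp the whole frozen chunk
        push @result, @{ thaw(scalar <$r>) };
        close $r;
    }
    waitpid $_, 0 for @pids;
    return @result;
}
```

Even here the fork, serialize, and pipe overhead is paid once per chunk, not per element, which is exactly why a few big slices can pay off where per-element parallelism cannot.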

P.S.: I believe the concurrent stuff has been taken out of Clean. It was originally called "Concurrent Clean", but the concurrent evaluation annotations were removed in version 2.0. I'm not sure what the current status is.

Jenda
Enoch was right!
Enjoy the last years of Rome.

Replies are listed 'Best First'.
Re^2: (OT) Programming languages for multicore computers
by salva (Canon) on May 04, 2009 at 16:28 UTC
    How often do you think it can be expected that your program will have the whole of the computer to (ab)use? How often do you think the computer doesn't have several other tasks to run?

    Programs should not bother limiting their CPU usage. Actually, it is quite the opposite: they should use as many (concurrent) CPU resources as possible in order to finish sooner.

    It is the operating system that has the responsibility to ensure that no program degrades the performance of other applications running concurrently, whether by creating too many threads or by any other means.

      I'm not talking about programs limiting themselves artificially. I'm talking about bending over backwards to escape the single-core limitation. I believe that most often it's not needed: most often there are plenty of other processes to use the other cores, and if all those processes attempt to use several cores, they will all finish later than if they had not bothered.

      Jenda
      Enoch was right!
      Enjoy the last years of Rome.

Re^2: (OT) Programming languages for multicore computers
by Zen (Deacon) on May 04, 2009 at 15:57 UTC
    There are already optimizations made with respect to pipelining and within the instruction sets themselves. To say the world will remain single-core is to assume that every instruction is the direct ancestral result of what came before, every time. I don't think this is a reality of computing so much as a perception of what we're used to. You need multiple cores for so many reasons these days: anything with Java, Firefox (a firehog, IMHO), Windows, daemons, indexing, databases, and so on. Within each of these lies the opportunity to parallelize.
