Concurrency in Perl

by hardburn (Abbot)
on Jan 03, 2005 at 18:55 UTC [id://419015]

Herb Sutter argues that current clock speeds have hit a practical limit, and that the only way to keep Moore's "Law" going is with multi-core CPUs:

http://www.gotw.ca/publications/concurrency-ddj.htm

Critique of the Article

I think some of his points are flawed. First, there are too few data points at the end of his chart to establish a trend. That said, the claim that clock speeds are leveling off is probably correct, considering the recent announcements of future processors from Intel and AMD.

What Sutter doesn't do is make the First Most Common CPU Mistake, which is assuming that clock speed is the only factor. Instead, he makes the Second Most Common CPU Mistake, which is acknowledging that clock speed isn't the only factor, but then ignoring that fact for the rest of the article.

Notice in his chart that although clock speeds may be leveling off, transistor counts keep going up. Further, note that other types of processors have much higher transistor counts than Intel or AMD CPUs. An nVidia 6800 Ultra graphics card has a processor with 222 million transistors; the latest Pentium 4 processors have 149 million, and much of that is cache. (Modern graphics cards are fully programmable and could be used as general-purpose CPUs if you're willing to put in the effort.)

He seems to assume throughout the article that because clock speeds are falling off, the only way to get future performance gains is through multi-threaded programs running on multi-core CPUs. However, he misreads the reasons why vendors aren't aiming for higher clock speeds:

We’ll probably see 4GHz CPUs in our mainstream desktop machines someday, but it won’t be in 2005. Sure, Intel has samples of their chips running at even higher speeds in the lab—but only by heroic efforts, such as attaching hideously impractical quantities of cooling equipment. You won’t have that kind of cooling hardware in your office any day soon, let alone on your lap while computing on the plane.

According to the recent CPU guide on Tom's Hardware, current Pentium 4 processors max out at 115 watts of heat dissipation (for comparison, the Athlon Thunderbirds were considered very hot at around 70-80 watts). This would seem to support his point.

However, according to the same guide, the latest Athlon64 processors max out at an impressive 33 watts. So heat isn't an inherent barrier (at least for the foreseeable future); it's an inefficiency in Intel's processor lineup.

In short, his claim that the old ways of increasing CPU speed are hitting practical limits is flawed. Transistor counts are still rising, and designs can still be improved. Eventually we will hit that limit, but we're not close yet. Just when is a matter of some debate, but we can safely leave that alone for now.

But He Still Has a Point

With multi-core CPUs, we'll see a good reason for enhancing concurrency in software, even software targeted at low-end desktops and servers.

He compares this to the way OO finally caught on in the '90s. "In the 1990s, we learned to grok objects", he says. Many would say that in the '90s we learned to grok objects poorly. We can all list projects whose flawed OO design nonetheless met user requirements. The only people directly affected by bad OO design are the ones coding the project.

Concurrency is different: bad concurrency produces obvious (though often transient) bugs that can slip through testing into production. Bad concurrency will be noticed by users at some point. As such, I think programmers will have more incentive (compared to OO) to grok concurrency well.
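To make the "transient bug" point concrete, here is a minimal sketch using Perl 5's threads and threads::shared modules (requires a threads-enabled perl). Two threads increment a shared counter; with lock() the answer is deterministic, and removing the lock() exposes a lost-update race that only shows up some of the time, which is exactly the kind of bug that slips past testing:

```perl
#!/usr/bin/perl
# Two threads bump a shared counter. With lock() held around each
# increment the final count is always 20000; comment out lock() and
# lost updates appear intermittently -- a classic transient bug.
use strict;
use warnings;
use threads;
use threads::shared;

my $count : shared = 0;

sub bump {
    for (1 .. 10_000) {
        lock($count);   # remove this line to see the race
        $count++;
    }
}

my @workers = map { threads->create(\&bump) } 1 .. 2;
$_->join for @workers;
print "count = $count\n";   # always 20000 with the lock held
```

The lock is scoped to the enclosing block, so it is released at the end of each loop iteration; the race window without it is tiny, which is precisely why the bug is so hard to reproduce on demand.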

I'm employed as a web programmer. Many are quick to say that web applications usually don't need threads, but this isn't entirely true. Most web servers for serious web sites use a threading model for handling requests that can take advantage of multi-processor/multi-core/hyperthreaded systems. Your own code may not use threads or fork, but it is one thread/process in a larger program tree. Even so, this has been the situation for a long time, and experienced web programmers know how to deal with it.
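The "one process in a larger program tree" model can be sketched in a few lines of Perl. This is an illustrative toy, not real server code: each "request" is handled in its own forked child (Apache-prefork style, greatly simplified), and the parent only dispatches and reaps:

```perl
#!/usr/bin/perl
# Toy fork-per-request dispatcher. Each request is handled by a
# child process; your request-handling code runs single-threaded
# inside the child, while the parent manages the process tree.
use strict;
use warnings;

my @requests = ('GET /', 'GET /about', 'GET /contact');
my @pids;

for my $req (@requests) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: handle one request, then exit.
        print "child $$ handling '$req'\n";
        exit 0;
    }
    push @pids, $pid;   # Parent: remember the child
}

# Parent reaps every child so no zombies are left behind.
waitpid($_, 0) for @pids;
print "parent $$ done\n";
```

From the handler's point of view nothing is concurrent, which is why web programmers can mostly ignore the concurrency going on around them.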

However, web programming isn't the only place Perl lives, and the language has been known to move into new areas whenever it wants to (first system administration, then the web, and more recently genetics). Wherever Perl is found, and wherever it may one day go, concurrency experience may not be as prevalent as it is among web programmers, and Perl will have to deal with that.

So I leave this brain-dump with a question: what can we do to improve Perl's concurrency? The Lambda the Ultimate forum has a posting that discusses how many other languages are dealing with it, and it may provide inspiration for continuing the grand Perl tradition of stealing good ideas from other languages.

Update: Title correction per erix's suggestion.

"There is no shame in being self-taught, only in not trying to learn in the first place." -- Atrus, Myst: The Book of D'ni.

Replies are listed 'Best First'.
Re: Concurrency in Perl
by sleepingsquirrel (Chaplain) on Jan 03, 2005 at 19:57 UTC
    What about Perl 6 junctions as a concurrency operator? I don't know if it was the intention, but junctions like all() seem like they could be put to good use as parallel data constructors (see also Connection Machine Lisp). For example...
    $ans = 7 + all(1,3,16)
    ...could spawn off three threads, one for each addition calculation.
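That fan-out could be emulated today with Perl 5's threads module. The parallel_map() below is a hypothetical helper of my own, not real junction semantics (a junction is about quantified comparison, not parallelism), but it shows the "one thread per member" shape of the idea:

```perl
#!/usr/bin/perl
# Sketch: spawn one thread per junction member, collect the results.
# parallel_map() is an illustrative stand-in for "7 + all(1,3,16)".
use strict;
use warnings;
use threads;

sub parallel_map {
    my ($code, @values) = @_;
    # One thread per value, like one thread per junction member.
    my @workers = map { threads->create($code, $_) } @values;
    return map { $_->join } @workers;   # join preserves order
}

my @ans = parallel_map(sub { 7 + $_[0] }, 1, 3, 16);
print "@ans\n";   # 8 10 23
```

Whether spawning an OS thread per member would ever pay off for an addition is another question; the per-thread overhead dwarfs the work here, so a real implementation would need much coarser grains.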


    -- All code is 100% tested and functional unless otherwise noted.

      An excellent point, though I'm not sure we can take it as far as Lisp can. There has been some research lately on transparently doing parallel processing in functional languages, which is a really big deal. They can do this because statements in functional languages tend to be highly expressive and self-contained.

      For that to be usable in Perl, the rest of the language has to play nicely, too. To do that, the language probably wouldn't look much like Perl anymore. I suspect that a supporting library will still have to be used.

      "There is no shame in being self-taught, only in not trying to learn in the first place." -- Atrus, Myst: The Book of D'ni.

      That would be a very cool idea. It would cause hard-to-debug errors. It would be very heavyweight. It would make basic things like signal handling very difficult. But it would sure sound cool.

      Cool isn't always worth it. :-P

Re: Concurrency in Perl
by dragonchild (Archbishop) on Jan 03, 2005 at 19:14 UTC
    The best line in the article:

    (Despite this, I will speculate that today’s single-threaded applications as actually used in the field could actually see a performance boost for most users by going to a dual-core chip, not because the extra core is actually doing anything useful, but because it is running the adware and spyware that infest many users’ systems and are otherwise slowing down the single CPU that user has today. I leave it up to you to decide whether adding a CPU to run your spyware is the best solution to that problem.)

    Being right, does not endow the right to be rude; politeness costs nothing.
    Being unknowing, is not the same as being stupid.
    Expressing a contrary opinion, whether to the individual or the group, is more often a sign of deeper thought than of cantankerous belligerence.
    Do not mistake your goals as the only goals; your opinion as the only opinion; your confidence as correctness. Saying you know better is not the same as explaining you know better.

      I liked that too, though I think it's a little oversimplified. A multi-core CPU means I could throw in some extra RAM and do video encoding while playing Half-Life 2, or leave Eve running in the background while I play Generals. I've wanted to do both of these things within the last week.

      "There is no shame in being self-taught, only in not trying to learn in the first place." -- Atrus, Myst: The Book of D'ni.


      Adware? Spyware? What are these things of which you speak, asked the OS X user.

      I like the idea of multiple CPUs. It lets Windows worms and viruses spread through your machine much more quickly. (Insert smilie of rolling eyes here.)

      --
      tbone1, YAPS (Yet Another Perl Schlub)
      And remember, if he succeeds, so what.
      - Chick McGee

Re: Concurrency in Perl
by zentara (Archbishop) on Jan 04, 2005 at 14:34 UTC
    I'm an old electronics technician who has seen the progress from tubes, to transistors, to ICs, to the ultra-miniaturized CMOS large-scale integrated circuitry we see today. The limits mentioned are just the limits of the writer's "limited technological view". New technologies are always emerging. Some of the things I foresee: whole SMP systems (including the motherboard) in a single water-cooled chip, with optical laser ports for I/O.

    For years I have believed that our computing will ultimately switch from controlling electrons to controlling photons: yes, computers that run on light. See "All-optical control of light on a silicon chip". But even that is probably a limited view; how about controlling shifting quarks, or string vibrations? It will come, if the human race doesn't self-destruct.

    There have been all sorts of articles on laying down diamond-film substrates for IC chips, which would be almost immune to heat, and on its proposed use in all sorts of things. Do a Google search for "diamond substrate cpu"; there is a link to an article about an 8,000 GHz CPU!

    As far as concurrency in software goes, it will happen. Most of the popular languages now have a goal of running other languages inline; Perl 6 is striving for that.

    It will take a generation of fresh minds to step back from the language wars and say, "Gee, we could make one uber-language that can run anything else inline", and the processors will be so fast that speed will not be a concern except in some research/military applications.


    I'm not really a human, but I play one on earth. flash japh

Node Type: perlmeditation [id://419015]
Approved by BrowserUk