PerlMonks  

Re^2: Worst blog post ever on teaching programming

by Anonymous Monk
on Apr 03, 2006 at 16:15 UTC [id://540968]


in reply to Re: Worst blog post ever on teaching programming
in thread Problematic post on teaching programming

They all assume you'll be doing weird, theoretical work, preferably in CS grad school.

You see, you might need calculus for non-linear optimization work, and you might need linear algebra and field theory if you decide to go into cryptography research. So everyone has to take it.

And if you decide to just do a job doing boring, practical things like writing programs that work, instead of clever, abstract things like proving neat, theoretical boundaries on toy problems for systems that can never actually exist in the real world, you're derided for not being clever and abstract and academic enough.

I don't know how many young kids I've seen wander out of a CS degree thinking a Turing Machine is Something Important(TM), as applied in the real world. They think that constants are irrelevant, because they learned order notation, but didn't learn quite enough.

In reality, all those boring little constants in front of your abstract little order notation symbols mean the difference between "highly profitable" and "completely worthless". In the real world, people need constant-time speedups: the difference between running in twenty minutes and running in an hour can make or break a program. In the real world, you worry about the scalability of your algorithm only after it meets the initial performance requirements to begin with. If the hardware needed to make the problem fast is too expensive for what the business requires, it's a no-go, no matter how much nicer your algorithm scales "towards infinity".
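
A toy cost model makes the point (the numbers below are invented purely for illustration): an algorithm from the "better" complexity class can lose to the naive one at every size you'll actually run, simply because of its constant factor.

    #!/usr/bin/perl
    # Hypothetical cost model: a "fancy" O(n log n) algorithm with a large
    # constant factor versus a "naive" O(n^2) algorithm with a constant of 1.
    use strict;
    use warnings;

    sub fancy_cost { my $n = shift; return 50 * $n * log($n) / log(2) }   # O(n log n)
    sub naive_cost { my $n = shift; return $n * $n }                      # O(n^2)

    for my $n (10, 100, 300, 1_000, 10_000) {
        my ($fancy, $naive) = (fancy_cost($n), naive_cost($n));
        printf "n=%6d  n log n: %12.0f  n^2: %12.0f  winner: %s\n",
            $n, $fancy, $naive, ($naive < $fancy ? 'naive' : 'fancy');
    }

With these made-up constants the "worse" quadratic algorithm wins until somewhere between n = 300 and n = 1,000; below that point, the order notation alone tells you nothing useful about which one to ship.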

I don't know how many people have tried to use "Turing completeness" as a way to explain what a computer can or can't do, and gibber on and on about halting problems and so forth. It's far simpler than that: any computer you find in the real world will have finite memory, finite run time, and a finite amount of cash available to construct and run it. No computer can do more than a finite state machine can, if only for reasons of economics. But finite state machines are boring, and we don't have a neat paradox for them, so I'm left listening to boring undergrads drone on and on about "undecidability" as if it's a real-world problem...

When they get to the real world, they'll learn that no one else cares about CS theory. No one else cares about whether P=NP. They just want the billing system to run, the accounting ledgers to add up, and the reports to look pretty, with colourful graphs that show the wiggly line going upward. If you do what the rest of the world needs, you get paid; if you don't, you don't.

Universities are largely in the business of training grad students to become professors; any other education they provide is mostly just incidental.
--
Ytrew


Replies are listed 'Best First'.
Re^3: Worst blog post ever on teaching programming
by dragonchild (Archbishop) on Apr 03, 2006 at 18:19 UTC
    Understanding the concept of a Turing machine is actually quite important in real-world programming. A little digression -

    Nearly everyone on this site will be able to point to an experience in their careers where they wrote something in Perl and it took them a couple of days. It turned out to be really, really useful, and the PHB had it rewritten in Java. That took 15 people six months, and it still doesn't work right.

    Why is that? Perl isn't inherently a better language than Java. In fact, there are many things Java has better support for than Perl. However, Java projects generally take longer than the equivalent Perl projects and generally require more people.

    My feeling is that Perl programmers tend to be more capable than Java programmers, precisely because we tend to have a stronger grasp of the fundamentals. Things like a Turing machine. In fact, I once implemented a Turing machine in production code because it was the correct and cost-effective solution to the requirements. It's easy to deride the theoreticals, but they're extremely useful, just not as presented. You actually have to think about how to apply them. :-p


    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
      I once implemented a Turing machine in production code because it was the correct and cost-effective solution

      I think it is all too easy to forget how much practical experience it requires to be able to reach that determination.

      Theory without practice--real-world practical application--is just so much hot air. Equally, you can practice as much as you like, but without the theory to back you up and allow you to choose the right starting point, the likely outcome is that you will become very good at doing the wrong thing.

      From previous discussion, I think that you are likely in tune with the theory and practice of 'balance in all things'.

      In programming as in life, balance is everything, and imbalance--the over-concentration on one aspect to the exclusion of others--is the source of most woes.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
      In fact, I once implemented a Turing machine in production code because it was the correct and cost-effective solution to the requirements.

      Really? Please explain where you got the infinitely long tape, and how your software made markings on it. If you didn't do that, then you didn't make Turing's machine; and any computing device with an infinite datastore that we can conceive of is computationally equivalent to a Turing machine.

      Turing machines are just a theoretical device for discussions of computational equivalence; you can't "implement" one in any sense of the word. You can create a state machine with an associated finite datastore, but we tend to call those devices "computers"; the hardware already does that for us.

      There's no sense of the word in which I can find it meaningful to claim one has "implemented" a Turing machine; it's a thought experiment, not a device you can actually build.
      --
      Ytrew

        There's no sense of the word in which I can find it meaningful to claim one has "implemented" a Turing machine; it's a thought experiment, not a device you can actually build.

        A thought experiment? You mean like Schrödinger's half-dead cat in a box?

        Please explain where you got the infinitely long tape...

        You are mistaken. There is no requirement in the definition of a Turing machine that the tape be actually infinite in size. It merely needs to be unbounded. As long as an implementation, in its execution, doesn't exceed the limits of its "tape", there's no reason it can't be a Turing machine.
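
        A minimal sketch of what such an "unbounded but not infinite" tape can look like in practice (a toy unary incrementer in Perl, not anyone's production code): cells only come into existence when the head visits them, so the tape is exactly as long as the run needs and no longer.

            #!/usr/bin/perl
            # Toy Turing machine: appends one '1' to a unary string.
            # The tape is a sparse hash, so it grows on demand instead of
            # being allocated "infinitely" up front.
            use strict;
            use warnings;

            my %delta = (
                # "state,symbol" => [ symbol to write, head move, next state ]
                'scan,1' => [ '1', +1, 'scan' ],
                'scan,_' => [ '1', +1, 'halt' ],
            );

            sub run_tm {
                my ($input) = @_;
                my %tape;
                $tape{$_} = substr($input, $_, 1) for 0 .. length($input) - 1;
                my ($state, $head) = ('scan', 0);

                while ($state ne 'halt') {
                    my $sym  = $tape{$head} // '_';   # unvisited cells read as blank
                    my $rule = $delta{"$state,$sym"} or die "no rule for ($state,$sym)";
                    my ($write, $move, $next) = @$rule;
                    $tape{$head} = $write;
                    $head       += $move;
                    $state       = $next;
                }
                return join '', map { $tape{$_} } sort { $a <=> $b } keys %tape;
            }

            print run_tm('111'), "\n";   # prints "1111"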

        We're building the house of the future together.
Re^3: Worst blog post ever on teaching programming
by adrianh (Chancellor) on Apr 04, 2006 at 09:20 UTC
    When they get to the real world, they'll learn that no one else cares about CS theory. No one else cares about whether P=NP. They just want the billing system to run, the accounting ledgers to add up, and the reports to look pretty, with colourful graphs that show the wiggly line going upward. If you do what the rest of the world needs, you get paid; if you don't, you don't.

    True.

    Of course sometimes knowing about P=NP, big O notation, etc. is exactly what you need to get the job done.

    I've encountered my fair share of fresh CS graduates who think they know everything and are terrible at their job. The thing is I've also encountered my fair share of non-graduates who've been working in the industry for years and think they know everything and are terrible at their job. I think a lot of this has to do with the person - rather than whether they come from an academic or industry background.

    Universities are largely in the business of training grad students to become professors; any other education they provide is mostly just incidental.

    Back when I was at uni I remember being taught tons of purely "academic" content. People I knew who were working in industry told me I'd never use it in the real world. Silly things like object orientation, virtual machines and garbage collection.

    I certainly don't think a university education provides you with all of the skills needed to do the job. I'm actually glad that they don't, since I don't think universities should be in the job of just vocational education. They do provide a bunch of useful skills though. IMHO as ever ;-)

      Of course sometimes knowing about P=NP, big O notation, etc. is exactly what you need to get the job done.

      When? Academic learning isn't at all bad, but those two examples are a very poor choice. Measuring computational complexity at all well requires far more than order notation (and perhaps more than graduate level computational complexity theory). Performance modeling remains a poorly understood and active field of research, and the gains are being made quite slowly.

      So in practice, the concept of "P vs NP" and order notation are largely useless. "Polynomial space/time" explodes quadratically for a degree as low as two! The choice of P vs NP boils down to "too slow to be workable" versus "really too slow to be workable". The exception, of course, is if the constants are nice and N is small enough to be workable: but that's exactly what order notation and most computational complexity theory ignore in the first place!

      Personally, I found that while academic learning is interesting, it's rarely useful. It's nice that you can write your own compiler, but your job will involve producing graphs and reports, not writing compilers. And when and if some of that deep, complex academic learning is required, your company will just hire a PhD: so unless you're willing to give your life to CS theory, there's no great benefit to a mere undergrad degree. Perhaps that's why there are so many open source languages: people desperate to find an excuse to write their own compiler, now that they've wasted thousands of dollars learning how!

      One guy I worked with was so desperate to do something "academic" with his job that he wrote his own recursive descent parser ... for a configuration language that he invented himself ... for an EBCDIC to ASCII translator ... which only needed a very limited set of options ... and which never actually changed. But hey, he got to be all "academic"; and now I've got a tonne of painfully useless code to untangle if I ever have to maintain his over-engineered monstrosity.

      Back in school, I took a lot of courses in things like multidimensional calculus, vector algebra, and group theory. None of it is terribly useful for producing billing reports and the other assorted drudge work that actually pays the bills. In some sense, I understand why my co-worker decided to waste company funds on his weird design; but I certainly can't condone it.

      In any case, I've been left with a distaste for breathless undergrads, and people who think that "more complicated is better", or people who think "new is better": most of the time, the boring, obvious encoding is the most maintainable encoding, and when it's not, you can at least understand what was done, and slot in your clever little algorithm where it's needed.

      --
      Ytrew

        So in practice, the concept of "P vs NP" and order notation are largely useless.

        I've not found this to be so. For example I can remember one instance of a problem that, in a blinding flash of the obvious, could be mapped onto an NP problem - which immediately told us that we wanted to be doing something like simulated annealing to give us a reasonable answer in a reasonable time.
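
        For the curious, the shape of that kind of solution is roughly the skeleton below (a generic sketch with a made-up toy objective, not the actual problem from that project): start from a random state, repeatedly try a small random change, always keep improvements, and sometimes keep regressions while the "temperature" is still high.

            #!/usr/bin/perl
            # Generic simulated-annealing skeleton.
            # Toy objective: order the numbers 1..30 so that adjacent
            # differences are as small as possible.
            use strict;
            use warnings;
            use List::Util qw(shuffle);

            sub cost {
                my ($s) = @_;
                my $c = 0;
                $c += abs($s->[$_] - $s->[$_ - 1]) for 1 .. $#$s;
                return $c;
            }

            my @state = shuffle(1 .. 30);
            my $cost  = cost(\@state);
            my $temp  = 10.0;

            for my $step (1 .. 20_000) {
                my @candidate = @state;
                my ($i, $j) = (int rand @candidate, int rand @candidate);
                @candidate[$i, $j] = @candidate[$j, $i];   # random neighbouring move
                my $new_cost = cost(\@candidate);

                # Always accept improvements; accept regressions with a
                # probability that shrinks as the temperature cools.
                if ($new_cost < $cost or rand() < exp(($cost - $new_cost) / $temp)) {
                    @state = @candidate;
                    $cost  = $new_cost;
                }
                $temp *= 0.9995;                           # cooling schedule
            }

            print "final cost: $cost\n";                   # optimum for this toy is 29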

        I used big O notation all of the time when I did work in the financial sector. Very handy in giving us clues on how things will scale up on very large data sets.

        Personally, I found that while academic learning is interesting, it's rarely useful. It's nice that you can write your own compiler, but your job will involve producing graphs and reports, not writing compilers.

        Depends on your job. I've been paid to write compilers several times during my career. The ones I did after my undergrad compiler course were a lot better than the ones I wrote before it ;-)

        In any case, I've been left with a distaste for breathless undergrads, and people who think that "more complicated is better", or people who think "new is better": most of the time, the boring, obvious encoding is the most maintainable encoding, and when it's not, you can at least understand what was done, and slot in your clever little algorithm where it's needed.

        s/undergrads/youngsters/ and I'll agree with you 100%. For me it's a symptom of age and experience rather than an academic or industry background.

Re^3: Worst blog post ever on teaching programming
by BrowserUk (Patriarch) on Apr 03, 2006 at 16:24 UTC

    Exactly.

    The most succinct encapsulation I've seen of your oh-so-eloquent dissertation above is something I read in someone's tag line somewhere. From memory it went something like:

    In theory, theory is enough. In practice, it isn't!

    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      The quote I heard was:
      In theory, theory is the same as practice. In practice, it's not.
      --
      Ytrew

        attributed to Yogi Berra.

        merlyn likes to give an alternative phrasing:

        The difference between theory and practice in theory is much less
        than the difference between theory and practice in practice.
         — e.g. in this clpm post
        We're building the house of the future together.
