"... so by-golly ... (Heck, maybe we're just crazy)."
Watch a lot of Scrubs by any chance? :)
First: I completely agree. To each their own, and no harm, no foul. That said, there is a lot of Haskell I like, and despite billing myself upfront as a failed Haskell programmer, I still hold out hope of making progress in using it. Though I may need to transition to it through one or two other languages first; Clean is at the top of my list at the moment. The biggest problems I have with Clean are the IDE and the library documentation/tutorials, which concentrate too much on the graphical IO capabilities.
If you are the same anonymonk who wrote this, then the paper you cite is far and away the best example of the type of criticism I was levelling against many of the Haskell tutorials. Of course, it isn't a tutorial and isn't aimed at a non-math audience, so it can be forgiven for its use of notation and abstract theory without explanation. Nonetheless, it serves as an example.
To quote, selectively from that document:
The above functional program is thus both a mathematical definition of fib and at the same time an algorithm for computing it. One of the enduring myths about functional programming languages is that they are somehow non-algorithmic. On the contrary, the idea of functional programming is to present algorithms in a more transparent form, uncluttered by housekeeping details.
The seemingly close fit between program text and mathematical reasoning accounts for a large part of the appeal of functional languages (together with their conciseness and expressive power) especially in a pedagogical context.
One of the things we say about functional programming is that it’s easy to prove things, because there are no side effects.
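The "functional program" the first passage refers to is the Fibonacci definition. A minimal Haskell sketch of it (the paper's exact text may differ) shows why it reads as both a mathematical definition and an algorithm:

```haskell
-- Naive Fibonacci: a direct transcription of the mathematical recurrence.
-- Perfectly clear as a definition; exponential-time as an algorithm.
fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (map fib [0 .. 10])  -- prints [0,1,1,2,3,5,8,13,21,34,55]
```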
Much of which is summed up by another quote:
"The seemingly close fit between program text and mathematical reasoning accounts for a large part of the appeal of functional languages...."
This is my primary angst with Haskell, and the whole FP movement in general. The theory goes that by making a language resemble existing math notations, and have the compiler use reduction to produce the code, one can go directly from mathematical proof to bug-free code and achieve programming utopia. (Yes. I've exaggerated for effect.)
But let's think about mathematical proofs for a few moments.
In the summer of 1993, after seven years of dedicated work, and more than 30 years of increasingly serious interest before that, Andrew Wiles announced that he had climbed the Mount Everest of mathematics and solved the problem that had eluded all of the best minds in his field for the preceding 200 or more years. He had proved Fermat's Last Theorem.
It was hailed by the press, and within the mathematics community as a triumph. But, over the next few months, it was Wiles himself that discovered the flaw in the proof. Of course, he then went on to correct the proof, and get it verified by his peers, and will go down in history as the man that climbed that mountain.
But how many peers does he have? How many people can actually say they understand his proof well enough to verify it? Another way of asking that question: how many people, if presented with the original and the final proofs, could work out which was which, unless they were already familiar with them? 1000? 100? 10?
Again, that is an extreme example, but the point is that 'proofs' can be wrong. And it is much harder to verify a proof than a program. You can run a program and subject its results to tests--something you cannot easily do with formal mathematical notation. Of course, a big part of the desire for FP is the ability to have a compiler that takes standard mathematical notation and converts it to a running program. But then you not only have to verify the notation, you also have to verify the compiler that does the conversion, the code it produces, and the results that code produces.
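To make "subject its results to tests" concrete: even within Haskell, a claimed property can only be checked against samples, which is statistical evidence rather than proof. A hand-rolled sketch (a real project would use a library such as QuickCheck; the property and sample set here are illustrative):

```haskell
-- A property we believe holds for all lists.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main =
  -- Checking 101 sample inputs is evidence, not a proof -- which is the point.
  let samples = [ take n [1 .. 100] | n <- [0 .. 100] ]
  in putStrLn (if all prop_reverseTwice samples
                 then "all samples passed"
                 else "property falsified")
```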
And that's where I was coming from when I wrote in my post above:
Chicken & egg
So, it's a chicken and egg situation. If you had a provable implementation of a compiler that was built from provable descriptions of its algorithms, then you could use it to build (implement) programs from provable descriptions of provable algorithms.
Until then, you will need to test programs--statistically. And as long as that is true, there will never be a 100% guaranteed, bug-free program.
But it is stated more formally at the end of the Total FP paper you cited:
Theorem: For any language in which all programs terminate, there are always-terminating programs which cannot be written in it - among these are the interpreter for the language itself.
Going on to conclude:
We can draw an analogy with the (closely related) issue of compile-time type systems. If we consider a complete computing system written in a typed high level language, including operating system, compilers, editors, loaders and so on, it seems that there will always be at least one place – in the loader for example – where we are obliged to escape from the type discipline. Nevertheless many of us are happy to do almost all of our programming in languages with compile time type systems. On the rare occasions when we need to, we can open an escape hatch, such as Haskell's unsafePerformIO.
There is a dichotomy in language design, because of the halting problem. For our programming discipline we are forced to choose between
A) Security - a language in which all programs are known to terminate.
B) Universality - a language in which we can write
(i) all terminating programs
(ii) silly programs which fail to terminate
and, given an arbitrary program we cannot in general say if it is (i) or (ii).
Five decades ago, at the beginning of electronic computing, we chose (B). If it is the case, as seems likely, that we can have languages of type (A) which accommodate all the programs we need to write, bar a few special situations, it may be time to reconsider this decision.
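Case (ii) in that dichotomy is easy to exhibit in Haskell: a one-line definition that the type checker happily accepts but that never terminates if evaluated, and which no general analysis can flag as such (this example is mine, not the paper's):

```haskell
-- Well-typed and accepted by the compiler, but evaluating `loop`
-- would recurse forever; the halting problem says no general
-- procedure can detect this for arbitrary programs.
loop :: Integer
loop = loop

main :: IO ()
main = putStrLn "loop compiles; evaluating it would never return"
```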
And that's my problem with much of the hyperbole that surrounds and infuses FP. This paper is saying that "we don't need to deal with errors, exceptions, dirty data etc.", or "need a language that is Turing complete" (elsewhere in the paper) except on "rare occasions", but that just doesn't make sense to me.
E.g. there is that corecursive proof that a number is odd (or even), and it converts nicely into a pure Haskell program. But even if we assume that the compiler will convert that program into correct code, as soon as we need to fetch that number into the program, rather than embed it, all bets are off. The program may not terminate, because the user never enters a number and hits enter; or the file may contain text, or be empty; or the socket connection may have dropped; or...
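That split can be shown in a few lines: the pure odd/even part is total and trivially correct, but the moment the number arrives from outside, the program has to confront empty input, non-numeric text, and the rest. A sketch using readMaybe (the input strings here are made up for illustration):

```haskell
import Text.Read (readMaybe)

-- The pure part: total, and as close to the mathematics as you like.
isEven :: Integer -> Bool
isEven n = n `mod` 2 == 0

-- The boundary with reality: the input may not be a number at all.
classify :: String -> String
classify s = case readMaybe s :: Maybe Integer of
  Nothing -> "not a number: " ++ show s
  Just n  -> if isEven n then "even" else "odd"

main :: IO ()
main = mapM_ (putStrLn . classify) ["42", "7", "banana", ""]
```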
In a practical programming language, as opposed to an academic research language, you cannot ignore the inconvenient truth of reality.
I remember Pascal in its early days, when 'files' were just internal data structures and you didn't have to deal with the outside world. Great for teaching, but totally impractical for anything commercial. Inevitably, along came Borland with Turbo Pascal, and some amazing and important programs were written using Pascal. Later (I think) that became Delphi, and (I think) that is still being used.
Haskell is already a powerful, elegant, practical programming language. It doesn't need to sell itself on the basis of lofty, theoretical(*) goals. It is already "condemned to be successful". Like Quantum::Superpositions, once the hype has faded and gone, what you are left with are some very important, very practical, useful ideas and code that handle the real world with aplomb and stand up alongside other real-world languages on that basis.
(* And I would say, unobtainable--but there are a lot of very bright minds, much brighter than me, that are pursuing them, so I'm probably going to be proved wrong!)
What is missing, to my mind and in my experience of looking for on-line Haskell tutorials, is a description of a moderately complex, real-world, warts-and-all problem, and a worked solution. Forget all the cutesy, convenient (even if theoretically interesting and important) examples of Fibonacci series, and quick-sorts, and NDA language parsers, and tree structures. These are just as bad as the corporate/genetic hierarchy examples in the OO world. Work through something useful.
In one or two of SPJ's papers there is a reference to an HTTP server written in Haskell: 1500 lines long, with near-Apache performance but fewer features. Now that would make a fine basis for a tutorial with a fully worked example. The source code is probably around somewhere for download, but that's much less than half of what's required.
What is needed is insight into the mind of the expert Haskell programmer on how they approach tackling such a project. Given the HTTP/1.1 spec as a starting point and the standard library, where do they start? How do they proceed? What mistakes did they make; how did the compiler inform them of those mistakes; how did they interpret those error messages and how did they resolve them? Now that would be a tutorial that might allow me to make the transition to the mode of thinking required.
F*** all the theory behind monads, or the type system, or strict -v- lazy -v- pure. Show me the code. But more importantly, show me how you arrived at it.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.