PerlMonks
Re^3: Is it worth using Monads in Perl ? and what the Monads are ?

by BrowserUk (Patriarch)
on Jun 11, 2007 at 23:12 UTC [id://620594]


in reply to Re^2: Is it worth using Monads in Perl ? and what the Monads are ?
in thread Is it worth using Monads in Perl ? and what the Monads are ?

Unfortunately, I/O is probably the worst example of a monad there is. The next worst example is the state monad. The "Maybe" monad is a much better example, especially for beginners.
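To ground that, here is a minimal sketch of the Maybe monad in use (the helper names are made up for illustration): each step may fail, and (>>=) short-circuits the rest of the chain on the first Nothing, so no explicit error checks are needed between steps.

```haskell
import Text.Read (readMaybe)

-- Halve a number, but only if it is even; otherwise fail.
halveEven :: Int -> Maybe Int
halveEven n = if even n then Just (n `div` 2) else Nothing

-- Parse, then halve twice.  Each (>>=) passes the result on only
-- if the previous step produced a Just; a Nothing anywhere
-- short-circuits the whole chain.
parseAndHalve :: String -> Maybe Int
parseAndHalve s = readMaybe s >>= halveEven >>= halveEven
```

Here parseAndHalve "12" yields Just 3, while an unparseable "x" yields Nothing, as does "6" (the second halving meets the odd number 3 and fails).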

As I hope I demonstrated above, I understand what monads, and the IO Monad, are and do.

The problem I have with monads, is not the name, or their obscure math derivation, or using them (much, though I admit I probably couldn't derive my own any time soon).

The problem I have with them, is the need for, and the benefits of, having them at all.

They are, emotionally, just a way of concealing the fact that Haskell programs contain code that has state, has side effects and is procedural. Technically, they are a way of indicating to the compiler which functions have no side-effects and can therefore be safely re-written at compile time, and which aren't. And that makes them a really awkward way of setting a flag.

It would be so much simpler to have a keyword that could be placed at the top of a module, or even halfway down, that says: any definitions above this line are purely functional; anything below it is procedural code that has side effects. The 'monadic code' then just becomes normal procedural code that carries an implicit 'world state' from one step/statement to the next; it could be coded in a normal, procedural style with loops and state and side-effects, and without all that faffing around that monads introduce.

No need to (badly) re-invent all the standard procedural constructs--like try/catch et al.

At this point, I'm gonna stop, and ask you to email me (address on my home node) if you feel like continuing this, because this has become about me and Haskell (and Clean--which I do like more, though it lacks the community and available tutorial information that Haskell has. E.g. a search for lang:clean on Google Code Search finds zero hits!), and has ceased to have any relevance to Perl. I'd enjoy the rebuttal :)


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Replies are listed 'Best First'.
Re^4: Is it worth using Monads in Perl ? and what the Monads are ?
by gaal (Parson) on Jun 14, 2007 at 06:24 UTC
      They are, emotionally, just a way of concealing the fact that Haskell programs contain code that has state

    Rephrasing to mitigate the emotional stress: monads encapsulate effects. They aren't there to conceal in the sense of deception, but rather in the sense of information hiding.

    I agree with the other poster who said the IO monad's the poorest one to look at, because I agree with you when you said a Perl program is in a(n IO) monad. But that's trivially true; the value comes when you look at the richness of different monads, and when you can take your code and separate functions out of it into those that really are pure and those that can be made to fit in monads a, b, c.

    Here's a simple example.

    ruleSubName :: RuleParser String
    ruleSubName = verbatimRule "subroutine name" $ do
        twigil <- option "" (string "*")
        name   <- ruleOperatorName <|> ruleQualifiedIdentifier
        return $ ('&':twigil) ++ name

    This is part of the Pugs parser, the code to parse out the name of a sub. Inside the RuleParser monad, sequencing means "demand the following parse to work". But it also means taking care of bookkeeping (source code position, the name of what we're trying to do, for use in error messages in case we fail). If a parse fails, the associated further bookkeeping is automatic. Here we say "first of all, look for a '*' string, but it's okay if we don't find it". The function string is monadic; if it fails, then the function 'option ""' works much like a try/catch and provides the fallback. Anyway, now twigil is either "" or "*", and the position has advanced by 0 or 1 columns.

    Now we try to parse a "name". We try a first parser, ruleOperatorName, and if it fails, ruleQualifiedIdentifier. If that one fails too, ruleSubName will fail. The actual behavior of that failure depends on who called us; if they were ready for us to fail (for example, by modifying us with "option" or "<|>") then a different parse might result higher up. Rewinding the position happens automatically, for example if we had already consumed a "*". But if not -- if we're in part of the Perl 6 grammar where the only conceivable term is a SubName and one could not be obtained -- then the user will get an error message saying Pugs was looking for a "subroutine name".

    What I'm hoping I've shown is that all these bits and pieces of combining small monadic functions are common to the domain of parsing. Haskell isn't being deceptive in not making me put try/catch blocks over every line. It's encapsulating very specific and repetitive functionality.

    I think one great difficulty in understanding monads is they take "if you look at it that way" to the extreme; and Haskell programmers often look at problems in surprising ways. Since parsing is a very practical problem, and familiar to Perl programmers at that, it's a good monad to learn with.
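    The bookkeeping described above can be sketched with a minimal hand-rolled parser monad. This is an illustrative toy, not the real Pugs RuleParser: the state threaded along by (>>=) is the remaining input plus a column position, and failure "rewinds" automatically because the caller simply reuses its original state.

```haskell
-- A toy parser monad: a function from (input, column) to either
-- failure or a result paired with the updated (input, column).
newtype P a = P { runP :: (String, Int) -> Maybe (a, (String, Int)) }

instance Functor P where
  fmap f (P p) = P $ \s -> fmap (\(a, s') -> (f a, s')) (p s)

instance Applicative P where
  pure a = P $ \s -> Just (a, s)
  P pf <*> P pa = P $ \s -> case pf s of
    Nothing      -> Nothing
    Just (f, s') -> fmap (\(a, s'') -> (f a, s'')) (pa s')

instance Monad P where
  -- Sequencing means: run the first parser; only if it succeeds,
  -- feed its result and the updated state to the next one.
  P p >>= f = P $ \s -> case p s of
    Nothing      -> Nothing
    Just (a, s') -> runP (f a) s'

-- Match a literal string, advancing the column on success.
string :: String -> P String
string w = P $ \(inp, col) ->
  if take (length w) inp == w
    then Just (w, (drop (length w) inp, col + length w))
    else Nothing

-- Try p; on failure, return the default without consuming input --
-- the automatic "rewind", since the untouched state is reused.
option :: a -> P a -> P a
option d (P p) = P $ \s -> case p s of
  Nothing -> Just (d, s)
  r       -> r
```

    With these pieces, runP (option "" (string "*") >>= \t -> string "name" >>= \n -> return (t ++ n)) ("*name", 0) succeeds whether or not the leading "*" is present, mirroring the twigil handling in ruleSubName above.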

      Thanks Gaal++. Thanks for not saying: Oh, you just don't understand monads. And thank you for giving a non-standard-library example that made me really think.

      .... Inside the RuleParser monad, sequencing means "demand the following parse to work".

      .... But it also means take care of bookkeeping (source code position, name of what we're trying to do for use in error messages in case we fail).

      .... if they were ready for us to fail...But if not...the user will get an error message.

      .... It's encapsulating very specific and repetitive functionality.

      Okay. From your description (I glanced at the link but turned away because it would take me a long time to wrap my mind around the complexity in there), this is pretty much the sort of thing you are talking about--simplistically implemented in Perl:

      So, in this context, a monad is being used as a container to avoid the need to perpetually carry a state variable across sequenced function calls (per the file-position and world variables that you see passed in and out of IO code in Clean programs). And also--as the monad returned from a function is a different one from the one going in--if the function throws an exception, the return never happens, no new one is created, and the pattern matching can (optionally) go ahead 're-using' the original monad, effectively rewinding any partial state changes that may have occurred prior to the failure.

      In procedural terms, state can be retained in object instance variables, closures or global variables.

      The rewinding of object state after an exception can be achieved through try/catch (laborious), or through taking an (instance-local) copy and installing a class-global exception handler to restore the copy:

      local $SIG{__DIE__} = sub { @{ $self } = @{ $copy } };

      The OO purist would freak, but what's new :)

      Relatively inelegant in Perl 5, but with PRE and POST handlers and similar mechanisms, maybe less so in Perl 6. I've not got that far in my exploration of Perl 6.

      In the context of the OP, this shows Perl doesn't need monads, Haskell does. And, in the lower half of my first post, I was trying to show why the second part of that statement is so.

      The point is that underlying the Haskell/monadic hype (emotive term, but I'm talking to a Perl audience, not the Haskell community), are familiar and regular mechanisms. There is nothing magic, or deeply mathematical, or that requires a doctorate in category theory to understand enough to use these 'monad' thingies. And neither do you need to understand the ins and outs and intricacies of Hindley-Milner type systems to do so either.

      Monads can be likened to objects, and classes of objects, and classes of classes (meta classes), that are familiar enough (even if not totally accurate), that they would be accessible to many more people who are already familiar with these concepts.

      The elegance of monads in Haskell is in getting the compiler to take care of these mundanities on the programmer's behalf. Just as objects in Perl 5 are cumbersome when compared to languages that set out to be object oriented, so attempting to tack monads onto Perl 5 would be equally cumbersome. But more importantly, it would be introducing both the terminology and the difficult mathematical concepts behind it to a language that has no need for either.

      ...the value comes when you look at the richness of different monads, and when you can take your code and separate functions out of it into those that really are pure and those that can be made to fit in monads a, b, c.

      Haskell isn't being deceptive...

      I don't mean to imply that [the] Haskell [community] is either deceiving other people, or themselves. The deception (a term I wouldn't have used, but you have, so let's run with it) is of the HM type system, and the 'pure functional' ethic.

      The problem that functional programming struggled with for a long time is that there are situations in which retained state and side-effects are unavoidable. Pretty much anything to do with IO is a good example. (Because it is necessary for pretty much any useful program, but also familiar to pretty much every programmer.)

      But the ethics make it emotionally unsatisfactory, and the type system makes it syntactically difficult, to have and use retained state and side-effects--hence the need to 'invent' a way of getting the retained state past the type system, so as to avoid corrupting the elegance and provability of the type-inferencing system.

      Monads, as described by that obscure math, allow that to be done in an elegantly mathematical way. I don't understand the math, but like imaginary numbers (or even the number zero), I can see how they facilitate things that don't work well without them.

      But Perl doesn't have those ethics, nor that type system--so Perl doesn't need the concept, regardless of whether it could be implemented, elegantly or not.

      The source of much confusion for imperative programmers coming to Haskell, and of much of my disquiet about it, is that the Haskell community seems to be defensive about the need for monads. When asked questions like: why does Haskell need monads when other languages don't, they resort to baffling the guys with science about type systems and provability and category theory and so on.

      What is sadly lacking, at least as far as my investigations are concerned, are tutorials that set out to show you how to solve a problem using Haskell (oh, and by the way, that's a monad; and so's that; and that), rather than those that set out to explain what monads are and why they exist.

      If they would come straight out and say that it needs them because it is necessary to make retained state and side-effects compatible with the type system and functional purity; analogise them with familiar concepts; and then say, this is how they work and this is what you can do with them--rather than explaining why they are needed to avoid breaking the rules--then they would become much more accessible and raise much less fuss.

      They would stand on the merits of their own power and elegance, rather than needing the support of math that few understand.

      Whether those things are beneficial in the wider world, and whether a language should or shouldn't have those things, are different arguments that will probably rage on for the rest of my lifetime and possibly well beyond that. I've pretty much nailed my colors to the mast on that one :)


          Monads can be likened to objects, and classes of objects, and classes of classes (meta classes), that are familiar enough (even if not totally accurate), that they would be accessible to many more people who are already familiar with these concepts.

        Yes, or like Aspects of AOP: an organizational unit of behavior or functionality. If you approach them from the software engineering perspective and not from the theoretical CS/mathy perspective, one rough way to think of them is as overloading the meaning of the semicolons that separate statements.
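        A concrete way to see the "overloaded semicolon": in Haskell do-notation, each line break desugars to (>>=), and the monad chosen decides what that sequencing means. With Maybe it means "stop at the first failure" (a small sketch with made-up names):

```haskell
-- Division that fails instead of dying on a zero divisor.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

-- Each line of the do-block is a "statement"; the implicit
-- semicolon between them is Maybe's (>>=), which aborts the
-- whole computation at the first Nothing.
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  x <- safeDiv a b
  y <- safeDiv x c
  return (x + y)

-- The do-block desugars to explicit binds:
-- calc a b c = safeDiv a b >>= \x -> safeDiv x c >>= \y -> return (x + y)
```

        So calc 12 3 2 is Just 6, while calc 12 0 2 or calc 12 3 0 is Nothing, without a single explicit error check.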

        Regarding coming out straight, I'm really not sure what to say. Some people who talk enthusiastically about Haskell come from a different background than advocates of many other languages you're familiar with, and that's part of why they take some things for granted and stress others. My own experience hasn't been easy, but it certainly hasn't been that somebody's evasive or dishonest about the design goals. So if you read Wearing the Hairshirt or Tackling the Awkward Squad, for example, I don't think you get the sense that somebody's bluffing (well, I didn't!). Sure, uncontrolled effects are presented as a bad thing, but part of what's good about monads is that they let you tag the bits that are and those that aren't pure, and the typechecker doesn't let you mix them incorrectly.

        then they would become much more accessible and raise much less fuss.
        Would you mind creating a list of the worst Haskell tutorials that you found?
Re^4: Is it worth using Monads in Perl ? and what the Monads are ?
by Anonymous Monk on Jun 16, 2007 at 16:49 UTC
    They are, emotionally, just a way of concealing the fact that Haskell programs contain code that has state, has side effects and is procedural.
    I'd reword that a little differently. The type system loudly proclaims to everyone that these particular functions are contaminated by state and side effects, so by-golly you'd better don the bio-hazard suit and proceed with caution.
    it could be coded in a normal, procedural style with loops and state and side-effects; and without all that faffing around that monads introduce.

    No need to (badly) re-invent all the standard procedural constructs--like try/catch et al.

    I think there's probably a large subjective element in choosing a favorite programming language. Some people like cats, others like dogs. When I first encountered Haskell, my initial reaction was, "Yes! This is how programming was intended to be done". I no longer needed to faff (?) around with side effects willy-nilly everywhere.

    It sounds like you've had the opposite reaction. No harm, no foul. If you don't like broccoli, there is no sense in feeling bad just because others rave about it. And anyone who nags you about the health benefits of broccoli is a bore.

    Haskell programmers truly think it is a better way to program. No apologies. (Heck, maybe we're just crazy). And some of us think that Haskell doesn't even go far enough.

      "... so by-golly ... (Heck, maybe we're just crazy)."

      Watch a lot of Scrubs by any chance? :)

      First. I completely agree. Each to their own and no harm no foul. That said, there is a lot of Haskell I like, and despite billing myself upfront as a failed Haskell programmer, I still hold out hope of making progress in using it. Though I may need to transition to it through one or two other languages first. Clean is at the top of my list at the moment. The biggest problems I have with Clean are the IDE and the library documentation/tutorials, which concentrate too much on the graphical IO capabilities.

      If you are the same anonymonk who wrote this, then the paper you cite is far and away the best example of the type of criticism I was levelling against many of the Haskell tutorials. Of course, it isn't a tutorial and isn't aimed at a non-math audience, so it can be forgiven for its use of notation and abstract theory without explanation. None the less, it serves as an example.

      To quote, selectively from that document:

      The above functional program is thus both a mathematical definition of fib and at the same time an algorithm for computing it. One of the enduring myths about functional programming languages is that they are somehow non-algorithmic. On the contrary, the idea of functional programming is to present algorithms in a more transparent form, uncluttered by housekeeping details.

      The seemingly close fit between program text and mathematical reasoning accounts for a large part of the appeal of functional languages (together with their conciseness and expressive power) especially in a pedagogical context.

      One of the things we say about functional programming is that it’s easy to prove things, because there are no side effects.

      Much of which is summed up by another quote:

      "The seemingly close fit between program text and mathematical reasoning accounts for a large part of the appeal of functional languages...."

      This is my primary angst with Haskell, and the whole FP movement in general. The theory goes that by making a language resemble existing math notations, and have the compiler use reduction to produce the code, one can go directly from mathematical proof to bug-free code and achieve programming utopia. (Yes. I've exaggerated for effect.)

      But let's think about mathematical proofs for a few moments.

      In the summer of 1993, after seven years of dedicated work, and more than 30 years of increasingly serious and adept casual interest, Andrew Wiles announced that he had climbed the Mount Everest of mathematics and solved the problem that had eluded all of the best minds in his field for the preceding 200 or more years. He had proved Fermat's Last Theorem.

      It was hailed by the press, and within the mathematics community as a triumph. But, over the next few months, it was Wiles himself that discovered the flaw in the proof. Of course, he then went on to correct the proof, and get it verified by his peers, and will go down in history as the man that climbed that mountain.

      But, how many peers has he? How many people are there that can actually say they understand his proof well enough to verify it? Another way of asking that question: how many people, if presented with the original and the final proofs, could work out which was which, unless they were already familiar with them? 1000? 100? 10?

      Again, that is an extreme example, but the point is that 'proofs' can be wrong. And it is much harder to verify a proof than a program. You can run a program and subject its results to tests. Something you cannot easily do with formal mathematical notation. Of course, a big part of the desire for FP is the ability to have a compiler that takes standard mathematical notation and converts it to a running program. But then, you not only have to verify the notation, you also have to verify the compiler that does the conversion, and the results it produces, and the results that what it produces, produce.

      And that's where I was coming from when I wrote in my post above:

      Chicken & egg

      So, it's a chicken and egg situation. If you had a provable implementation of a compiler that was built from provable descriptions of its algorithms, then you could use it to build (implement) programs from provable descriptions of provable algorithms.

      Until then, you will need to test programs--statistically. And as long that is true, there will never be a 100% guaranteed, bug-free program.

      But it is stated more formally at the end of the Total FP paper you cited:

      Theorem: For any language in which all programs terminate, there are always-terminating programs which cannot be written in it - among these are the interpreter for the language itself.

      Going on to conclude:

      We can draw an analogy with the (closely related) issue of compile-time type systems. If we consider a complete computing system written in a typed high level language, including operating system, compilers, editors, loaders and so on, it seems that there will always be at least one place - in the loader for example - where we are obliged to escape from the type discipline. Nevertheless many of us are happy to do almost all of our programming in languages with compile time type systems. On the rare occasions when we need to we can open an escape hatch, such as Haskell's unsafePerformIO.

      There is a dichotomy in language design, because of the halting problem. For our programming discipline we are forced to choose between

      A) Security - a language in which all programs are known to terminate.

      B) Universality - a language in which we can write

      (i) all terminating programs

      (ii) silly programs which fail to terminate

      and, given an arbitrary program we cannot in general say if it is (i) or (ii).

      Five decades ago, at the beginning of electronic computing, we chose (B). If it is the case, as seems likely, that we can have languages of type (A) which accommodate all the programs we need to write, bar a few special situations, it may be time to reconsider this decision.

      And that's my problem with much of the hyperbole that surrounds and infuses FP. This paper is saying that "we don't need to deal with errors, exceptions, dirty data etc.", or "need a language that is Turing complete" (elsewhere in the paper) except on "rare occasions", but that just doesn't make sense to me.

      E.g. there is that co-recursive proof that a number is odd (or even). And that converts nicely into a pure Haskell program. But, even if we assume that the compiler will convert that program into correct code, as soon as we need to fetch that number into the program, rather than embed it, all bets are off. The program may not terminate, because the user never enters a number and hits enter; or the file may contain text, or be empty; or the socket connection may have dropped; or...
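      For reference, the mutually recursive even/odd definition alluded to can be written as a pair of pure Haskell functions (valid for non-negative inputs only); the purity claims hold right up until the number has to arrive from outside the program:

```haskell
-- Mutually recursive definitions of evenness and oddness.
-- Note: this only terminates for n >= 0 -- exactly the sort of
-- inconvenient precondition that outside input can violate.
isEven, isOdd :: Int -> Bool
isEven 0 = True
isEven n = isOdd  (n - 1)
isOdd  0 = False
isOdd  n = isEven (n - 1)
```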

      In a practical programming language, as opposed to an academic research language, you cannot ignore the inconvenient truth of reality.

      I remember Pascal in its early days, where 'files' were just internal data structures and you didn't have to deal with the outside world. Great for teaching, but totally impractical for anything commercial. Inevitably, along came Borland with Turbo Pascal, and there were some amazing and important programs written using Pascal. Later (I think), that became Delphi, and (I think) that is still being used.

      Haskell is already a powerful, elegant, practical programming language. It doesn't need to sell itself on the basis of lofty, theoretical(*) goals. It is already "condemned to be successful". Like Quantum::Superpositions, once the hype has faded and gone, what you are left with are some very important, and very practical, useful ideas and code that handle the real world with aplomb and stand up along side other real world languages on that basis.

      (* And I would say, unobtainable--but there are a lot of very bright minds, much brighter than me, that are pursuing them, so I'm probably going to be proved wrong!)

      What is missing, to my mind and in my experience of looking for on-line Haskell tutorials, is a description of a moderately complex, real-world, warts-and-all problem, and a worked solution. Forget all the cutesy, convenient (even if theoretically interesting and important) examples of Fibonacci series, and quick-sorts, and NDA language parsers, and tree structures. These are just as bad as the corporate/genetic-hierarchy examples in the OO world. Work through something useful.

      In one or two of SPJ's papers there is reference to a Haskell-programmed HTTP server. 1500 lines long, with near-Apache performance but fewer features. Now that would make a fine basis for a tutorial with a fully worked example. The source code is probably around somewhere for download, but that's much less than half of what's required.

      What is needed is insight into the mind of the expert Haskell programmer on how they approach tackling such a project. Given the HTTP/1.1 spec as a starting point and the standard library, where do they start? How do they proceed? What mistakes did they make; how did the compiler inform them of those mistakes; how did they interpret those error messages and how did they resolve them? Now that would be a tutorial that might allow me to make the transition to the mode of thinking required.

      F*** all the theory behind monads, or the type system, or strict -v- lazy -v- pure. Show me the code. But more importantly, show me how you arrived at it.


        That said, there is a lot of Haskell I like, and despite billing myself upfront as a failed Haskell programmer, I still hold out hope of making progress in using it. Though I may need to transition to it through one or two other languages first.
        Have you taken a look at Prolog? It requires a very different mindset, yet without static typing or monads.
        And it is much harder to verify a proof than a program. You can run a program and subject its results to tests. Something you cannot easily do with formal mathematical notation. Of course, a big part of the desire for FP is the ability to have a compiler that takes standard mathematical notation and converts it to a running program. But then, you not only have to verify the notation, you also have to verify the compiler that does the conversion, and the results it produces, and the results that what it produces, produce.
        Hmm. You don't have to prove anything. You can still run your functional program and test it like you normally would (also take a look at QuickCheck). But now you *get* the option to prove (informally) and reason about your programs if you so desire. It's an extra bonus feature that you don't get with an imperative program (New and Improved! Now with 20% more features!)
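        For instance, a QuickCheck property is just a plain Boolean function over arbitrary inputs; the Test.QuickCheck library then feeds it random data via quickCheck. A minimal sketch:

```haskell
-- A QuickCheck-style property: a pure Boolean function over
-- arbitrary inputs.  With the Test.QuickCheck library one would
-- run `quickCheck prop_reverse` to exercise it on random lists;
-- here it is simply a testable pure function.
prop_reverse :: [Int] -> Bool
prop_reverse xs = reverse (reverse xs) == xs
```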

        Although I'm probably getting *really* OT, here's an excerpt I like from "The Way of Z: Practical Programming with Formal Methods" by Jonathan Jacky:

        Many programmers believe that formal specifications are not useful. They believe that the program text -- the code itself -- can be the only really complete and unambiguous description of what a program does. This view holds that a formal specification is nothing more than the program written over again in another language. It misinterprets Z to be some kind of very high-level programming language.

        This example shows they are wrong. See for yourself; Here is the code in C.

        int f(int a)
        {
            int i, term, sum;
            term = 1;
            sum = 1;
            for (i = 0; sum <= a; i++) {
                term = term + 2;
                sum = sum + term;
            }
            return i;
        }
        The code couldn't be simpler. It is well structured and very brief -- in fact, it looks trivial. But what does it do? It seems to be adding up a series of numbers -- but why? And it returns the counter, rather than the sum -- is that a mistake? Try to answer before you turn the page.

        You can find the answer on page 34 by searching for the book on books.google.com.
        And that's my problem with much of the hyperbole that surrounds and infuses FP.
        I think our biases must be pretty different. Oh, sure, there are going to be some enthusiastic advocates of any language, but other than a few fly-by-night blog posts, I have a hard time seeing the hyperbole that surrounds and infuses FP.
        This paper is saying that "we don't need to deal with errors, exceptions, dirty data etc.", or "need a language that is Turing complete" (elsewhere in the paper) except on "rare occasions", but that just doesn't make sense to me.
        Hmm. Maybe experience comes into play here also. I 100% agree with the paper (incidentally, it is one of my favorite CS papers, my number one favorite probably being Can Programming be Liberated from the von Neumann Style?). Most of the programs I write (in any language, and I use plenty of Perl) are (guesstimating) about 90+% purely functional in nature (engineering analysis mostly). In fact, I don't think I've ever professionally written a program where I didn't have at least a rough idea of the complexity of the algorithm.
        Show me the code.
        Maybe something like xmonad is real-world, yet small enough to get your feet wet?
Re^4: Is it worth using Monads in Perl ? and what the Monads are ?
by Anonymous Monk on Jun 12, 2007 at 10:41 UTC
    I don't view monads as just a way of telling the compiler what effects I'm using (it doesn't really do that anyway). It's a way of telling *me* (or someone else reading the code) that this is effectful code, and that care must be taken.
