Re^7: Specializing Functions with Currying

by jryan (Vicar)
on Aug 06, 2004 at 22:21 UTC


in reply to Re^6: Specializing Functions with Currying
in thread Specializing Functions with Currying

Why be functional when you can be efficient? Recursion might be elegant, but computers don't really understand elegance. :) Functional programming has a lot of great and useful concepts in it, and luckily we use a language that doesn't *force* us to be purists about it.

P.S.: The variable assignment is very necessary in my second example, because otherwise you're going to get a circularly referenced subroutine, which will make perl hit an infinite recursion warning pretty damned quick.
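
Roughly, the pitfall looks like this (a made-up sketch for illustration, not the exact code from the earlier node):

    sub log_msg { print "[$_[0]] $_[1]\n" }

    my $orig = \&log_msg;                       # save the original coderef *before* redefining
    {
        no warnings 'redefine';
        *log_msg = sub { $orig->('ERROR', @_) };    # curried wrapper calls the saved copy
    }

    log_msg('disk is full');                    # prints "[ERROR] disk is full"

Without the $orig variable, say *log_msg = sub { log_msg('ERROR', @_) }, the wrapper just calls itself, and perl's deep recursion warning shows up almost immediately.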

P.P.S: And, you're right, your last version is even better!


Replies are listed 'Best First'.
Re^8: Specializing Functions with Currying
by tilly (Archbishop) on Aug 06, 2004 at 22:29 UTC
    Actually programming languages that understand tail-recursion would have made stvn's solution both elegant and efficient.

    That said, some algorithms just cannot be efficiently implemented in a "pure" functional manner. A good example is the Sieve of Eratosthenes.
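
    For the curious, a minimal imperative sieve sketch (the limit and variable names here are just for illustration); the speed comes from flagging composites in place in an array, which is exactly the kind of mutation a "pure" version has to give up:

        my $limit = 100;
        my @composite;
        for my $i (2 .. sqrt $limit) {
            next if $composite[$i];
            for (my $j = $i * $i; $j <= $limit; $j += $i) {
                $composite[$j] = 1;     # cross off every multiple of $i
            }
        }
        my @primes = grep { !$composite[$_] } 2 .. $limit;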

      Something I've noticed in some of my own (brief) studies into functional programming is that it tends to make one think in algorithms that are not always the most efficient, and sometimes tail-recursion won't save you.

      For instance, in a SoPW a while back, a poster wanted an efficient way to get the highest number in a list (or something along those lines). I posted this solution:

      my $highest = (sort { $a <=> $b } @list)[-1];

      For some reason, my brain was convinced that this was the most efficient solution. While it may be a wonderful bit of functional code, it is not nearly as efficient as the iterative solution I knew quite well when I was a new C programmer:

      my $highest = pop @list;
      foreach (@list) {
          $highest = $_ if $_ > $highest;
      }

      Just what was I thinking?

      I guess this is a warning not to rely on any one tool too much.

      "There is no shame in being self-taught, only in not trying to learn in the first place." -- Atrus, Myst: The Book of D'ni.

        Oh, that solution can be quite efficient. You just have to generalize the problem in the right way. :-)

        An example where the moral equivalent of your approach is efficient is if you want the top m elements out of a set of n. Make the sort be a lazy sort (eg heapsort) so that you only do the necessary comparisons. The efficiency is then O(n+m*log(n)), which is about as good as you will get.
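
        A rough sketch of that idea in plain Perl (illustrative only, not code from the original discussion; the sub names and @list are mine): heapify the whole list in O(n), then pop just the $m largest, paying O(log n) apiece for the pops.

            sub sift_down {
                my ($heap, $i, $size) = @_;
                while (1) {
                    my $child = 2 * $i + 1;
                    last if $child >= $size;
                    $child++ if $child + 1 < $size && $heap->[$child + 1] > $heap->[$child];
                    last if $heap->[$i] >= $heap->[$child];
                    @$heap[$i, $child] = @$heap[$child, $i];    # swap parent with larger child
                    $i = $child;
                }
            }

            sub top_m {
                my ($m, @heap) = @_;
                sift_down(\@heap, $_, scalar @heap) for reverse 0 .. $#heap / 2;   # heapify: O(n)
                my @top;
                while (@heap && @top < $m) {
                    push @top, $heap[0];                # current maximum
                    $heap[0] = $heap[-1];               # move the last leaf to the root...
                    pop @heap;
                    sift_down(\@heap, 0, scalar @heap); # ...and restore the heap: O(log n)
                }
                return @top;
            }

            my @top_ten = top_m(10, @list);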

        If you write bad functional code, that does not imply the failure of functional programming.

        What were you thinking, indeed?

        use List::Util qw(reduce);
        my $highest = reduce { $a > $b ? $a : $b } @list;

        Of course, in practice, you'd use the specialized max() function from the module — but the above is how you'd write this algorithm in any functional language.
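
        For reference, that specialized form is simply:

            use List::Util qw(max);
            my $highest = max @list;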

        The fact that you wrote the sort solution in functional style is a red herring — you can just as well write a sort solution in imperative style.

        Makeshifts last the longest.

        While it may be a wonderful bit of functional code, it is not nearly as efficient as the iterative solution

        While it is functional, I wouldn't describe it as wonderful :-) A functional developer would use reduce to do it efficiently.

        I sometimes fall foul of that too. My most recent example was:

        use List::Util qw(first);

        my $first = first { length > 3 } @array;
        print $first;

        Versus:

        my $first;
        for (@array) {
            next unless length > 3;
            $first = $_;
            last;
        }
        print $first;

        where, despite the compiled-loop nature of List::Util::first, the latter was considerably more efficient than the former if the required value was close to the front of the array.

        Nearly twice as efficient with an array of 100,000 items and the thing I was looking for about a 10th of the way in.

        Funnily enough, even if the item I was looking for was the last item in the array, the iterative approach was still quickest. I guess the penalty of entering and exiting the block scope is the cause?
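
        A rough re-creation of the comparison (not the original benchmark; the sizes and labels here are invented): 100,000 short strings, with the first long one about a tenth of the way in.

            use List::Util qw(first);
            use Benchmark qw(cmpthese);

            my @array = ('abc') x 100_000;
            $array[10_000] = 'abcd';            # the item being searched for

            cmpthese(-1, {
                first => sub { my $f = first { length > 3 } @array },
                loop  => sub {
                    my $f;
                    for (@array) {
                        next unless length > 3;
                        $f = $_;
                        last;
                    }
                },
            });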


        Examine what is said, not who speaks.
        "Efficiency is intelligent laziness." -David Dunham
        "Think for yourself!" - Abigail
        "Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon
Re^8: Specializing Functions with Currying
by stvn (Monsignor) on Aug 06, 2004 at 23:04 UTC
    Why be functional when you can be efficient

    Spoken like a true Electrical Engineering Major ;-)

    Functional programming has a lot of great and useful concepts in it, and luckily we use a language that doesn't *force* us to be purists about it.

    I agree with you on both points. However, as tilly points out, recursion (and other functional programming goodies) don't have to be inefficient. I would encourage you to check out Standard ML, and in particular its compilation model (which I can't find any good links to at the moment, but I will post them when I find them). ML is a fast functional language, and according to the (inherently flawed) computer language shootout, one of the fastest (faster than C++ in some cases). In addition, there are some really fast LISP implementations out there as well.

    Basically with a good compiler, you can be functional and efficient.

    -stvn

      Hey, don't take me wrong, I'm all about elegance. My argument isn't against functional languages (hell, I used reduce, which is the same thing in lisp! lisp, baby! how's that for functional!), it was that recursion in Perl is crazy inefficient, and to use it just because it's "functional" is kind of silly. :) Tilly's point was about tail-recursion, which is an optimization that Perl doesn't support (for some reason, I honestly don't have a clue why).

        Tail recursion has a seldom-noted cost. Sure, it speeds up some recursive code. But it also means that if you get into trouble, a stack backtrace will be missing a lot of potentially useful context.

        That said, you can manually do tail-recursion in Perl with goto. Unfortunately it is not as efficient as it should be, and I'm not entirely sure why.
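
        A bare-bones illustration of the goto trick (the sub here is made up, not from the thread): set @_ by hand, then goto &NAME replaces the current frame instead of pushing a new one, so the recursion runs in constant stack space.

            sub count_down {
                my ($n, $acc) = @_;
                return $acc if $n <= 0;
                @_ = ($n - 1, $acc + $n);   # arguments for the tail call
                goto &count_down;           # reuse this frame rather than stacking another
            }

            print count_down(1_000_000, 0), "\n";   # no deep-recursion warning in sight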

        ... it was that recursion in Perl is crazy inefficient, ...

        Is it really? I mean, it's certainly not as efficient as the equivalent iterative code. And I know that subroutine calls have a non-trivial overhead (setting up lexical scope and all), but being that perl does not tie itself to the C-stack like some other languages do (*cough* Python *cough*), I would think that it wouldn't really be all that inefficient. Of course, I have never benchmarked it, so who knows.
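
        For the curious, a quick sketch of how one might benchmark it (the sub names and sizes are invented, not anything from the thread):

            use Benchmark qw(cmpthese);

            sub rsum {                          # recursive: one sub call per element
                my ($list, $i) = @_;
                return 0 if $i > $#$list;
                return $list->[$i] + rsum($list, $i + 1);
            }

            sub isum {                          # iterative: a single loop
                my ($list) = @_;
                my $total = 0;
                $total += $_ for @$list;
                return $total;
            }

            my @nums = 1 .. 500;

            cmpthese(-1, {
                recursive => sub { rsum(\@nums, 0) },
                iterative => sub { isum(\@nums) },
            });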

        ... and to use it just because it's "functional" is kind of silly ...

        But if we can't get silly with Perl at 7 o'clock (EST) on a Friday night, then what kind of world is this! ;-P

        about tail-recursion, which is an optimization that Perl doesn't support (for some reason, I honestly don't have a clue why)

        I don't know why either. I once looked into trying to "implement" it with the B:: modules, but once was all it took for me to decide that wouldn't work.

        -stvn
