
Re^9: Modernizing the Postmodern Language?

by chromatic (Archbishop)
on Jul 06, 2020 at 16:35 UTC ( #11118974=note )

in reply to Re^8: Modernizing the Postmodern Language?
in thread Modernizing the Postmodern Language?

I chose my words deliberately.


Re^10: Modernizing the Postmodern Language?
by b2gills (Novice) on Jul 06, 2020 at 22:50 UTC
    Can you explain why Perl is sometimes slower than Raku then?
    $ time perl -e 'for (1..1_000_000_000) {}'
    real    0m30.291s
    user    0m30.256s
    sys     0m0.020s

    $ time raku -e 'for (1..1_000_000_000) {}'
    real    0m12.407s
    user    0m12.436s
    sys     0m0.040s

      Because Raku has a better internal representation for integers than Perl's SvIV and can manage ranges lazily without reifying a large data structure. (I can't remember right now if Perl optimizes this in recent releases.)

      I don't know what doing nothing a billion times in 12 or so seconds has to do with my point that the semantic mismatch between a language and a target platform is difficult to manage, however.

      You can port Raku to LLVM or Node or Inferno or whatever platform you want, but unless that platform can optimize grammars that require dynamic dispatch for every individual lexeme, you're going to end up with a slow Raku.

        I can't remember right now if Perl optimizes this in recent releases.

        I don't think so - and same goes for raku, apparently.
        On Ubuntu-20.04 (perl-5.32.0):
        $ perl -le '$x = time;for (1..1000000000) {}; print time - $x;'
        51
        On Windows7 (perl-5.32.0):
        C:\>perl -le "$x = time;for (1..1000000000) {}; print time - $x;"
        13
        The Windows box is about twice as fast as the Ubuntu box, so I'm not sure why the difference in this case is a factor of 4.

        Anyway, thankfully perl has XS/Inline::C at hand to enable sane and efficient handling for cases such as these.


        A question in my mind is: can Perl's internals be rewritten for more efficiency, given all the experience gained over the years in these parallel attempts? Equally important would be an API to access the internals a la XS, but easier and more user-friendly, and perhaps one that isolates the core better.

        salva offers one point of view here: Re^4: Modernizing the Postmodern Language?. Is yours different? Is there hope?

        So in other words, Raku is a better designed language.

        Actually no. That is not the correct view.

        Raku is actually a designed language. Perl is an accumulation of parts that mostly work together.

        The main reason grammars are slow is because basically no one has touched the slow parts of it for the better part of a decade. We have some knowledge about how to speed it up because earlier prototypes had those optimizations.

        The thing is, it isn't that slow. Or rather, it isn't that slow considering that you get an actual parse tree.

        If you must know, the main reason it is slow is probably because it sometimes looks at particular tokens perhaps a half-dozen times instead of once. (This is the known optimization that was in one of the prototypes that I talked about.)

        It has absolutely nothing to do with being able to redefine what whitespace matches. That is already fairly well optimized because it is a method call, and we have optimizations that can eliminate method-call overhead. Since regexes are treated as code, all of the code optimizations can apply to them as well, including the JIT.

        Really if Perl doesn't do something drastic, in five to ten years I would suspect that Raku would just plain be faster in every aspect. (If not sooner.) The Raku object system already is faster for example. (And that is even with MoarVM having to be taught how Raku objects work every time it is started.)

        Something like splitting up the abstract syntax tree from the opcode list. That way it can get the same sort of optimizations that Raku has that makes it faster than Perl in the places where it is faster.

        Imagine if the code I posted would turn into something more like this:

        loop ( my int64 $i = 1; $i <= 1_000_000_000; ++$i ) {}
        Or rather transform that into assembly language. Which is basically what happens for Raku. (Writing that directly only reduces the runtime by a little bit more than a second.)

        It seems like every year or two we get a new feature or a redesign on a deep feature that speeds some things up by a factor of two or greater. Since Perl is more stratified than designed, it is difficult to do anything of the sort for it.

        Also I don't know why we would want to downgrade to LLVM. (Perhaps it can be made to only be a side-grade.)

        As far as I know LLVM only does compile-time optimizations. The thing is that runtime optimizations can be much much better, because they have actual example data to examine.

        Perl is an awesome language.
        Raku is an awesome language in the exact same ways, but also in a lot more ways as well.
        Many of those ways make it easier to produce faster code.
