PerlMonks
Re^12: Modernizing the Postmodern Language?

by b2gills (Novice)
on Jul 14, 2020 at 22:06 UTC ( #11119324 )


in reply to Re^11: Modernizing the Postmodern Language?
in thread Modernizing the Postmodern Language?

So in other words, Raku is a better designed language.

Actually no. That is not the correct view.

Raku is actually a designed language. Perl is an accumulation of parts that mostly work together.

The main reason grammars are slow is because basically no one has touched the slow parts of it for the better part of a decade. We have some knowledge about how to speed it up because earlier prototypes had those optimizations.

The thing is, it isn't that slow. Or rather, it isn't that slow considering that you get an actual parse tree out of it.

If you must know, the main reason it is slow is probably because it sometimes looks at particular tokens perhaps a half-dozen times instead of once. (This is the known optimization that was in one of the prototypes that I talked about.)

It has absolutely nothing to do with being able to replace what whitespace matches. That is already fairly optimized because it is a method call, and we have optimizations which can eliminate method call overhead. Since regexes are treated as code, all of the code optimizations can apply to them as well. Including the JIT.


Really, if Perl doesn't do something drastic, I suspect that in five to ten years Raku will just plain be faster in every respect. (If not sooner.) The Raku object system already is faster, for example. (And that is even with MoarVM having to be taught how Raku objects work every time it is started.)

By drastic I mean something like splitting the abstract syntax tree apart from the opcode list. That way Perl could get the same sort of optimizations that make Raku faster in the places where it is faster.

Imagine if the code I posted turned into something more like this:

loop ( my int64 $i = 1; $i <= 1_000_000_000; ++$i ) {}
Or rather, imagine transforming that into assembly language, which is basically what happens for Raku. (Writing it that way directly only reduces the runtime by a little more than a second.)
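To make the AST/opcode split concrete (a hypothetical toy in Python, not perl's optree or Rakudo's internals): once you have a real tree, you can run passes like constant folding before any opcodes exist, a rewrite that is awkward to express on a fused op list.

```python
# Hypothetical sketch: optimize an AST first, emit opcodes second.
from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

def fold(node):                        # AST-level pass: constant folding
    if isinstance(node, Add):
        l, r = fold(node.left), fold(node.right)
        if isinstance(l, Num) and isinstance(r, Num):
            return Num(l.value + r.value)
        return Add(l, r)
    return node

def emit(node, ops):                   # lower the (optimized) AST to opcodes
    if isinstance(node, Num):
        ops.append(("push", node.value))
    else:
        emit(node.left, ops)
        emit(node.right, ops)
        ops.append(("add", None))
    return ops

ast = Add(Num(2), Add(Num(3), Num(4)))
print(emit(fold(ast), []))   # [('push', 9)] -- folded before any ops exist
print(emit(ast, []))         # unoptimized: pushes and adds left for run time
```

When the only representation is the op list itself, a whole-expression rewrite like this has to be done by pattern-matching already-emitted ops, which is why keeping the tree around matters.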

It seems like every year or two we get a new feature or a redesign on a deep feature that speeds some things up by a factor of two or greater. Since Perl is more stratified than designed, it is difficult to do anything of the sort for it.


Also I don't know why we would want to downgrade to LLVM. (Perhaps it can be made to only be a side-grade.)

As far as I know, LLVM only does compile-time optimizations. The thing is that runtime optimizations can be much, much better, because they have actual example data to examine.
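A minimal sketch of one such runtime optimization, a monomorphic inline cache (all names here are made up, this is not MoarVM code): the call site watches which type actually shows up and caches the resolved method, something an ahead-of-time compiler cannot do because it never sees the running data.

```python
# Hypothetical sketch of a monomorphic inline cache at one call site.
class CallSite:
    def __init__(self):
        self.cached_type = None
        self.cached_method = None
        self.lookups = 0

    def call(self, obj, name):
        t = type(obj)
        if t is not self.cached_type:          # cache miss: full method lookup
            self.lookups += 1
            self.cached_type = t
            self.cached_method = getattr(t, name)
        return self.cached_method(obj)         # cache hit: direct call

class Dog:
    def speak(self):
        return "woof"

site = CallSite()
results = [site.call(Dog(), "speak") for _ in range(1000)]
print(results[0], site.lookups)   # woof 1 -- only the first call paid for lookup
```

A real JIT goes further (guarding, inlining the cached body into the caller), but the enabling fact is the same: the types observed at run time are information the compiler alone never has.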


Perl is an awesome language.
Raku is an awesome language in the exact same ways, and in a lot more ways besides.
Many of those ways make it easier to produce faster code.

Replies are listed 'Best First'.
Re^13: Modernizing the Postmodern Language?
by chromatic (Archbishop) on Jul 15, 2020 at 01:23 UTC
    Also I don't know why we would want to downgrade to LLVM.

    That wasn't the point of my post, but it was also exactly the point of my post, so I'm not sure why we're having a discussion on how Raku will someday eventually be faster than Perl, because that's irrelevant to my point that the semantic mismatch between a language and its implementation is really, really important to performance.

    The main reason grammars are slow is because basically no one has touched the slow parts of it for the better part of a decade.

    I remember profiling and optimizing grammars in an earlier version a little over a decade ago, so.

    It has absolutely nothing to do with being able to replace what whitespace matches.

    I don't believe this, because:

    • Like I said, I spent a lot of time looking at this.
    • Doing nothing is faster than doing something. A JIT is not magic fairy dust that makes everything faster. Even if you can get this codepath down to where you can JIT across a monomorphic call site, the resulting code is still not faster than a single inlined lexeme, especially if you account for the time and memory overhead of JITting at all. The semantic mismatch between a language and its implementation is really, really important to performance.
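    That last point can be sketched like this (hypothetical names in Python, not Rakudo code): even when the pluggable whitespace rule is perfectly predictable, it still pays one dynamic dispatch per character that the inlined literal test never pays.

```python
# Illustrative sketch of "doing nothing is faster than doing something":
# a monomorphic but replaceable whitespace rule vs. an inlined literal test.
dispatches = [0]

class WhitespaceRule:                  # the replaceable <ws> rule as an object
    def match_one(self, src, pos):
        dispatches[0] += 1             # one dynamic dispatch per character
        return pos + 1 if pos < len(src) and src[pos] == " " else None

def skip_ws_pluggable(rule, src, pos):
    while True:
        nxt = rule.match_one(src, pos)
        if nxt is None:
            return pos
        pos = nxt

def skip_ws_inlined(src, pos):         # what inlining the lexeme would emit
    while pos < len(src) and src[pos] == " ":
        pos += 1                       # no dispatch at all
    return pos

src = "    x"
assert skip_ws_pluggable(WhitespaceRule(), src, 0) == skip_ws_inlined(src, 0) == 4
print(dispatches[0])   # 5 -- one dispatch per character examined
```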
    Really if Perl doesn't do something drastic, in five to ten years I would suspect that Raku would just plain be faster in every aspect.

    I've heard this every year for the past 10 years, but I respect that you're not promising it in the next year, like Raiph always used to. I'll believe it when I see it.

      You do realize that there exists a project which acts like a JIT for compiled code, right?

      It exists because a JIT has more information available to it than the compiler does, so it can do a better job at optimization.

      The way Raku does it is even better than that because the JIT can actually sort-of ask the compiler what it really wants. Or rather the compiler gives the JIT enough hints ahead of time.
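      One way to picture that hinting (a purely hypothetical shape, nothing like MoarVM's actual spesh or dispatch machinery): the compiler records an assumption ahead of time, the JIT compiles a guarded fast path around it, and a failed guard falls back to the generic code.

```python
# Hypothetical sketch of guard-based speculative specialization.
def specialize(generic, hint_type):
    """The 'compiler hint' is hint_type; the JIT builds a guarded fast path."""
    state = {"deopts": 0}

    def fast(x):
        if type(x) is hint_type:       # guard: check the hinted assumption
            return x + x               # specialized body, no generic dispatch
        state["deopts"] += 1           # guard failed: deoptimize
        return generic(x)              # fall back to the slow, general code

    fast.state = state
    return fast

def generic_double(x):
    return x + x

double = specialize(generic_double, int)
print(double(21), double("ab"), double.state["deopts"])   # 42 abab 1
```

The interesting part is not the fast path itself but that the assumption is checkable and revocable; the compiler can hand many such assumptions to the JIT without ever being wrong in a user-visible way.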


      The reason I actually gave a timescale, instead of just saying “future”, is the RakuAST project, which will end up cleaning up a lot of semantic mismatches in the process. It should also make a lot of optimizations easier to perform.

      The plan, I believe, is for Rakudo to switch to it within a year, which allows 4 to 9 years for optimizations. Again, those optimizations should be easier than the ones that already made Raku faster in some cases.
      (By faster I mean faster than Perl and C/C++ for some cases.)

      MoarVM is also getting a new dispatcher that should also be easier to add optimizations to. I don't recall seeing a timescale on that though.
      (At least some of those optimizations will probably happen before it gets completely switched to.)

      So two of the slowest parts are getting replaced with much more optimizable designs.


      An optimization is just a way to push the implementation of a language as far as possible from the semantics of that language without it being noticed.
      So you were sort-of right: the semantic mismatch between a language and its implementation is really, really important to performance. You just had the argument backwards.

      Of course you want as few semantic mismatches as possible that don't enable optimizations, because a mismatch is still code that has to run.

      Rakudo is made of layers where each layer only has a slight semantic mismatch from its next higher or lower neighbor.
      This allows for much larger shifts of semantics at the lowest layer without it being noticed at the top layer.

      With perl there is pretty much exactly one layer, and it is the top layer. Which means you can't really change it all that much without changing semantics and thus breaking existing code. So there is a vast sea of optimizations that are just not possible.

      Also, I would really like to know how allowing you to change what is considered whitespace counts as a semantic mismatch. Because it really isn't one.


      The semantic mismatch between what is in my head and Raku is less than the mismatch with Perl.

      That is the most important mismatch to reduce, because it is the only one that can't be optimized away.

        It should also make a lot of optimizations easier to perform.

        I had that discussion a lot with Raku's lead developers in 2009 and 2010.

        Which allows 4 to 9 years for optimizations.

        We'll see, but I heard that in 2009 and 2010 too.
