PerlMonks
Re^9: Does Perl Have a Business Plan?

by BrowserUk (Pope)
on Apr 10, 2013 at 13:57 UTC ( #1027973=note )


in reply to Re^8: Does Perl Have a Business Plan?
in thread Does Perl Have a Business Plan?

As every (byte)compiler generates code, by your definition ALL code on this planet is crap. With certain metrics applied, you are right. Can you imagine that in the '90s a very competent m68k assembly/C guru told me about his observation that GCC, already at that time, made surprisingly efficient machine code (using surprising register overflows) that an assembly programmer probably wouldn't have thought of?

Chalk and cheese.

Yes. Modern compiler technology -- including JIT variants -- can produce some very efficient machine code. But a big part of why it can do so is that what it produces does not have to be read, understood or maintained by human beings. If you doubt this, spark up gcc or MSVC on a piece of moderately complex source code, enable /E /Ox /FAs (or the gcc equivalent), and try to relate the resultant optimised assembler back to the C source code.

And if you find yourself thinking "That's not so bad!", then do the same thing with a piece of moderately complex C++ code.
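Perl itself offers a small-scale analogue of the same experiment: the core module B::Deparse regenerates source text from perl's internal op tree, and the result shows how mechanically reconstructed source discards the author's layout and idioms (the exact output varies by perl version). A minimal sketch:

```perl
use strict;
use warnings;
use B::Deparse;

# Compile a sub, then ask perl to regenerate its source from the op tree.
# The regenerated text is functionally equivalent to what was written,
# but the compiler's canonical layout replaces the original idiom.
my $deparse = B::Deparse->new();
my $code    = sub { print "even\n" unless $_[0] % 2 };
print $deparse->coderef2text($code), "\n";
```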

The basic problem is that human beings can only juggle 4/7/8/? things in their short term memory at a time, but compilers can remember everything for as long as they need to.

Machine generated source code is crap

Having had to try to understand and maintain the output of three different source-to-source converters in my career -- albeit in the late '80s and the '90s -- I urge you both to canvass the opinions of those with experience of them, and to take a look around at the output of tools that generate source code. Take a look at the output from tools like FrontPage and Dreamweaver, and even GUI click&paste IDEs like MSVC. The source code they output is almost impossible to follow.

With all of those, maintenance *must* be done on the pre-translated input, not the output, otherwise changes are discarded, or the two diverge.

So, unless it is your intention that maintenance continue to be performed on the pre-converted perl5 code -- hopefully not; shipping Perl6 libraries in P5 and making every user redo the conversion is a complete non-starter -- then human beings are going to have to deal with the post-conversion Perl6 code. And that will inevitably be crap.

Why

The very things that make computers so good at producing optimised machine code are what make them so bad at producing readable, understandable and maintainable source code. Take a look at some raw beginners' Perl code -- there are good (bad) examples all over this site. Note the lack of structure; the confusing ordering; the bad naming of variables and functions; and the general lack of coherent flow in the code.

Proof.

Now imagine computer-generated variable names; functions & methods laid out according to some programmed ordering -- alphanumerically or worse -- rather than logically grouped according to human expectations; function bodies laid out with all the variables declared up front rather than in-line; statements ordered according to time-line requirements rather than human expectations of logical flow; long lines wrapped at arbitrary (numerically calculated) points rather than at the structural boundaries of the conditions or events they contain; multi-statement chunks of functionality ordered by how they come off a push-down queue, rather than by the logical flow a human being might construct.
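To make that concrete, here is a hypothetical sketch. The second sub, its mechanical names and its layout are invented for illustration only; they are not the output of any real converter:

```perl
use strict;
use warnings;

# Hand-written: names and flow follow the author's intent.
sub average {
    my @values = @_;
    my $total  = 0;
    $total += $_ for @values;
    return $total / @values;
}

# The same logic as a naive translator might emit (hypothetical):
# mechanical names, all variables declared up front, no trace of intent.
sub f_0042 {
    my ($v_1, $v_2, $v_3);
    $v_1 = [@_];
    $v_2 = 0;
    for $v_3 (0 .. $#{$v_1}) { $v_2 = $v_2 + $v_1->[$v_3]; }
    return $v_2 / scalar @{$v_1};
}

print average(1, 2, 3, 4), "\n";   # both compute the same value
print f_0042(1, 2, 3, 4), "\n";
```

Both subs are functionally identical; only the first can be maintained by a human without reverse-engineering it.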

What you end up with is working code that is not understandable or maintainable by human minds. And from experience I can tell you that this is a disaster for both productivity and ongoing development, even for in-house-only projects. For widely disseminated libraries that will serve as examples to human beings trying to either evaluate or learn a new programming language, it would be much worse!

Re-stated

So I'll restate my opinion in more specific terms: machine-generated source code intended for human consumption, understanding and maintenance -- as opposed to machine-read-and-executed, never-maintained machine code -- is not something programming is yet capable of making a decent fist of. This is not due to any constraint that can be solved by throwing CPU cycles or clever algorithms at it, because it comes back to our continuing inability to understand how the human brain works.

It is still impossible to categorise what lifts code from 'functionally complete' to 'intuitively good', much less great. Whilst we can teach new programmers to write the former, we have no way to teach them how to produce the latter. We (science) still have no handle on what forces and processes allow the human brain to innovate. To make that leap from the step-by-step following of the recipe, to seeing the better, clearer, 'new' solution. And if we cannot categorise it, we cannot algorise it.

Computers can only do what they have been programmed to do; they cannot invent or innovate.

The bottom line is that your converter will produce Perl5 idioms in Perl6 code. You may think it might be possible for the converter developers to program in boiler-plate Perl6 replacements for Perl5 idioms such that you will end up with half-decent Perl6 code; but it will not happen. At best, any such substitutions will only be possible at the individual statement level.

If you doubt this, consider the task of trying to convert the perl5 code snippets that implement (say) matrix multiplication -- as written by a dozen different Perl5 programmers -- to Perl6 using the hyper-operator syntax.

Try it out.

Ask the monks here, or your colleagues, to write a simple Perl5 function that takes references to two 2D arrays and returns a reference to a third 2D array that is the product of the inputs. Then, when you get their code back, sit down and consider what would be involved in trying to program your converter to recognise all those disparate versions -- some using nested for loops; some map; some while; and the various combinations of those and more.
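To illustrate the scale of the problem, here are just two of the many shapes that function takes in idiomatic Perl5 (nested for loops and nested maps); a converter would have to recognise all such variants as 'matrix multiplication' before it could emit a single clean Perl6 form:

```perl
use strict;
use warnings;

# Variant 1: nested C-style loops, as one programmer might write it.
sub matmul_loops {
    my ($A, $B) = @_;
    my $C = [];
    for my $i (0 .. $#$A) {
        for my $j (0 .. $#{ $B->[0] }) {
            my $sum = 0;
            $sum += $A->[$i][$_] * $B->[$_][$j] for 0 .. $#$B;
            $C->[$i][$j] = $sum;
        }
    }
    return $C;
}

# Variant 2: the same product built with nested maps, as another might.
sub matmul_map {
    my ($A, $B) = @_;
    return [ map {
        my $i = $_;
        [ map {
            my $j   = $_;
            my $sum = 0;
            $sum += $A->[$i][$_] * $B->[$_][$j] for 0 .. $#$B;
            $sum;
        } 0 .. $#{ $B->[0] } ];
    } 0 .. $#$A ];
}
```

Both return [[19,22],[43,50]] for [[1,2],[3,4]] x [[5,6],[7,8]], yet they share almost no surface syntax for a converter to pattern-match on.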

That is the level of task (and just one of many) that you would need to be able to solve in order for your converter/translator to be able to produce good Perl6 code. It is a task that is more complex than writing a Perl6 compiler from scratch, and much harder than writing a VM to run it on.

It isn't going to happen any time soon. It would (will?) be just another side project that goes nowhere and is ultimately a waste of resources that would be better spent assisting the Perl6 team in getting a Perl6 compiler & VM working.

Sorry, but that is my conclusion. (Note: What you and others do with your time is none of my business.)


The above was meant to be the end; but I simply cannot bring myself to let this go:

Aren't you aware how technician-centric your arguments are? How limited that view is? Do you really think it is the developer, the early adopter, who is responsible for the adoption of a new technology? Oh man... Hint: have a look at how new programming languages like Go or Dart are being prepared for adoption. And LEARN, for God's sake!

Sorry, but yes. It *IS* technicians that drive the adoption of programming languages.

Fanbois, Mommas&Pappas and Silver Surfers have no interest in the languages in which the code is written that feeds their TwitBook addictions, delivers their offspring's emails and baby/holiday snaps, or serves their on-demand TV programs. So marketeering to them is a bust.

And it isn't business management that drives the adoption of new languages. At least, not until there is sufficient evidence (in the business press and newsfeeds) that other managers and companies have already made the leap and run with it for long enough to make it a 'safe bet'. Managers resist risk -- and that means change -- at all costs, until forced by the weight of wider opinion, or by a relative decline in revenue, market share or share price, to accept that the risk is manageable or impossible to avoid.

Technicians drive early adoptions

It is technicians -- programmers -- who drive the early adoption of new programming languages, often through stealth. Whilst in the later stages programmers are drawn to new languages by weight of numbers -- what's hot right now -- for that to become relevant, you first need to achieve that weight of numbers. And that means someone (indeed, many someones) has to be first, and that is driven entirely by technical issues and innovations.

Comparative study

And yes, I have Go and Dart on my machine. I've played fairly extensively with the former, but it has fallen into disuse because it cannot build dynamically linked modules (.dll/.so) and, according to my conversation with one of the lead developers, it never will. It's a Plan 9 cultural thing.

I've only just started looking at Dart; and so far I fail to see anything there that leads me to believe that it is a major step forward over any of half a dozen existing languages. I'm still open to seeing where it will go, but I'm not rushing into making great efforts at this stage.

But if Go is so good, why do they need Dart? And if Dart is so much better, how long will Go persist?

But, more worryingly, will either or both of them make it through next year's Google spring clean? Or the year after that? Belts are tightening, the competition is getting better, and the revenue growth from the Oompa Loompas' money machine -- on-line advertising -- is beginning to slow as competitors catch up and purchasers begin to question their ROI. Google have been, and continue to be, ruthless in pruning their experimental projects, even those that have acquired a large and often vociferous public following. (The latest casualty is their long-running and widely used Reader service.)

Basis for my conclusions.

So yes, I'm more than just aware of developments in new programming languages; I have been actively following along, acquiring, installing and testing them for the last 10 years or so. It is one of, if not my main, interests, and I've been pursuing it vigorously, expending a large percentage of my time on exactly that.

I've drawn three conclusions from my past 10+ years of investigating computer languages:

  • There are a lot of new programming languages popping into existence every year -- 10 or more and the rate is increasing by my estimation -- but of those, fewer than 1 a year will ever achieve anything more than a tiny percentage of 'the market'.

    And even those that do so may only do so for a very short ( < 5 years) period of time before falling away back into obscurity. Maintained or not.

  • No new language will ever gain more than a fraction of a percentage point of the market unless it has a 'killer application'.

    I.e. some usage that is unsatisfied by existing languages. And that usage has to be high-level, with a low learning curve and (greatly) higher productivity than anything else available.

    My example is Ruby-on-Rails.

  • Evolution always trumps revolution where programming (languages) is concerned.

    C++, Java, ECMA/JavaScript, C, COBOL & FORTRAN are not going away any time soon. They will continue to evolve and acquire new skills and modes of use; and they will continue to dominate the numbers game in perpetuity.

    This is not because they are perfect; but because it is far easier to get new facilities and paradigms into the world's code bases by getting 1% of the 50% of programmers already using C++, than it is to get the same number to switch to a new language that provides those same facilities and paradigms; even if they do it far better.

    Hence, lambda functions and expressions will see far more use through their inclusion in C++11 than they ever have or ever will through all of the FP languages -- Clean, Curry, Haskell, Miranda, Erlang, F#, Lisp, ML, OCaml, Q, Pure, Scala et al. -- combined.

    It is simple math: if only 2% of C++ programmers (who represent, say, 50% of all programmers) use them -- that is, 1% of all programmers -- that is still more than the sum of 100% of 50 languages each used by only 0.01% of all programmers. (Don't take those numbers literally; they are made up for the example, but they are not orders of magnitude out, which they would need to be to invalidate the argument.)

My point above is that whilst I am just one man looking from just one POV, my earlier response to you was not made up on the spot, but was the product of wide reading, extensive hands-on evaluation and a great deal of experienced thought over many years.

Whilst I make no claim that my conclusions are definitive, know that they are neither casually drawn nor a simple regurgitation of others'. I have no axe to grind, nor anything to gain or lose from anything you do. Do not dismiss them, nor adopt them, lightly. Just factor them into your thinking -- or not -- when reaching your own.

Good luck.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Replies are listed 'Best First'.
Re^10: Does Perl Have a Business Plan?
by Propaganda.pm_rj (Acolyte) on Apr 12, 2013 at 10:34 UTC

    Sorry for answering with a certain lag; contrary to popular belief, I'm not just busy babbling around, but actually do something. ;-)

    Machine generated source code is crap

    For a certain definition of "crap" - yes. If your sole definition of "being crappy" is readability/maintainability, then you are right, in this limited context. If you extend this view with, e.g., "performance" or "development time", you get different - if not contrary - results.

    Normally, maintainability of computer-generated code is waived, because it's ... COMPUTER GENERATED ... and can easily be generated again if required, e.g. after your input data have changed, or you have improved/fixed your generator/converter.

    What we could talk about is whether the effort of writing a Perl5-to-Perl6 converter would outweigh the effort of writing the 100 to 500 useful CPAN modules from scratch. Plus, of course, the unknown number of bits and pieces of Perl5 code out there that people would like to be able to migrate at low cost (think ROI).

    So I'll restate my opinion in more specific terms: machine generated source code intended for human consumption, understanding and maintenance ... is not yet within the capabilities of programming to make a decent fist of.

    I never stated that maintainability or readability would be an a priori requirement or concern for such code. My point was (besides making clear that this converter is feasible) that the generated code would give prospective Perl6 adopters at least a quick, low/no-cost migration path. I mean, hey, what if it converted that legacy Perl5 library no one ever wanted to touch, and it worked, albeit still being unreadable/unmaintainable? I see no loss there, only gain.

    It isn't going to happen any time soon. It would (will?) be just another side project that will go nowhere and ultimately be a waste of resources that would be better served by assisting the Perl6 team getting a Perl6 compiler & VM working. Sorry, but that is my conclusion. (Note: What you and others do with your time is none of my business.)

    Well, I'm not surprised. Because you start from the wrong assumptions and then make correct deductions, you still end up with the wrong conclusions. Without thinking more about the transition cost, and being fixated on the "base technology", you are basically the prototype of the "pack of the hopeless" who actually do have hope of pulling (or pushing?) Perl6 one fine day to a state where it will be happily adopted. I hate to crush motivation (you can easily escape that by positioning my opinion as invalid), but under these circumstances and with this mindset, the Perl6 project will fail.

    What's even worse, you see a resource clash where there is none. The people writing such a converter (actually, the only thing I'm not sure about is whether it should be written in Perl6 or Perl5) are not necessarily the people working on the VM and the compiler.

    Sorry, but yes. It *IS* technicians that drive the adoption of programming languages.

    :-D Sorry, but no; this is what technicians tend to think. Technicians are responsible for the first 1% of the bootstrap process of a new technology: building up the bare foundations. Almost everything else is actually not driven(!) by these people, only modified.

    But, if GO is so good, why do they need Dart? And if Dart is so much better; how long will GO persist?

    Ehm... Go is being used at Google in production systems; Go and Dart target different application areas. While Go tries to be Google's C, Dart tries to replace JavaScript for good. I am not that interested in Go, because it is really a language Google built for its own purposes, but I suspect Google has the power to push JavaScript out of the way with Dart. And THAT is an undertaking one should learn from. Also, because this process will take some time, it leaves opportunities for others. (hint hint)

    Actually, there is not much to comment on in the rest of your post, because you virtually paraphrase my statements. (Evolution vs. "Revolution"?) But you somehow indicate that you see no reason for Perl6 development to act accordingly. That puzzles me, but let's leave it as is.

    Regarding Perl6 development, I have great respect for the people doing it. Very bright minds, doing bleeding-edge technology. However, from the viewpoint of "regular" software development, the Perl6 project shows all the signs of a failing software project. One of the most important signs is the self-delusion accompanying the development. You hear clearly wrong statements about WHY it was started at all; you see clearly wrong priorities and adoption assumptions/hopes.

    I admit I didn't contribute more to Perl6 than installing it and trying it out here and there, giving it my time and evaluating it, and attending - more often than not - disturbing talks when it was presented over the past 8 or 9 years. I do have the audio of the original "State of the Onion" speech from Larry Wall when he announced it - including drums.

    For me - 42 years old, 16 years of Perl, managing a company of ~70, mainly Perl devs, loving Perl, thinking it deserves better than it has now, being INTJ, pedantic, paranoid and what not - it simply hurts, because it has now been 12 long years. It may take another 12 years until someone (with definite authority on the matter) says: "project failed". All the time and effort spent on it could be wasted. Maybe not all of it - a threaded Parrot could happily live on as a Perl5.80 or Python7 foundation - but I do not want to elaborate on that as long as we don't have a threaded Parrot.

    You know what I heard in a talk about Perl6 - held by a Perl6 protagonist - at the GPW2013? "The biggest killer feature of Perl6 ... (grammar) ... may be the unique sales point for that language - which eventually may not even be Perl6 by then."

    So

    Good luck.

    Thank you, and the same to you! Actually, I think the Perl6 troopers will need more of it than me, so fingers crossed, four-leaf clover, pigs and what not.

    propaganda.pm - Not just another Perl Mongers Group.
      If your sole definition of "being crappy" is readability/maintainability,

      That was not my sole definition.

      What is the point in moving to a new version of a language, if the libraries you need to do anything productive are written in the old version and the new compiler runs them half as fast as the old one?

      What is the purpose of re-writing a language if it has to be bug compatible with the old one?

      What is gained by writing the new version -- adding all the new features and facilities; cleaning out all the old anomalies, disorthogonalities, historical detritus and cruft -- if the libraries required to make productive use of it not only cannot take advantage of those features and facilities, but also force you to add back support for everything you were trying to get rid of? (Who would buy a Bugatti Veyron if they were required to employ a man with a flag to walk in front?)

      I never stated that maintainability, readability would be an a priori requirement

      No. I did. Without maintainability of the generated code, the migrated code can never evolve.

      Everyone using it will be stuck in transition, requiring two sets of tools (+ the translator) and two sets of skills for tracking down bugs -- do they originate in the pre-translated code, are they introduced by the translator, or are they in the new compiler/VM? The idea is a nonsense.

      a quick, low/no cost migration path.

      There is no purpose in migrating if all you get from the exercise is what you have now without it.

      It is cost for no benefit. A make-work exercise.

      Technicians are responsible for the first 1% of the bootstrap process

      That sounds an awful lot like you just agreed with me, and then dismissed it without reason.

      Without the first layer of bricks, the wall doesn't get built. But it goes (metaphorically) deeper than that. You need foundations.

      And it isn't business managers, or marketeers, or HR or money men that go out looking and downloading and appraising and providing feedback to new languages. It is technicians. If language development had been left to business managers, we'd either all still be using COBOL; or every IT business, finance house, government body and conglomerate would be using its own proprietary languages for commercial advantage and/or security reasoning.

      Technicians may need the sign off from business and money guys to adopt new languages for production -- and that is probably a good thing :) -- but they need a very clear and undeniable technical & business case before they will ever get that sign off.

      And the technical case must come first; and it must be sufficiently technically compelling to gain their interest in the first place. Without capturing the hearts and minds of the technicians first, new languages are stillborn.

      (The rare exception is the military, with things like Ada. But that is a one-off, thank god.)

      Well, I'm not surprised. Because you start from the wrong assumptions, then make right deductions, you still end up with the wrong conclusions.

      Right back at yer.

      you see a resource clash where there is none.

      Not a clash, just a waste that could be redirected to better use. But it is your time and money and your decision.

      Once again. Good luck.


