PerlMonks  

Re^4: How will Artificial Intelligence change the way we code?

by hdb (Monsignor)
on Jun 11, 2018 at 11:29 UTC ( [id://1216379] )


in reply to Re^3: How will Artificial Intelligence change the way we code?
in thread How will Artificial Intelligence change the way we code?

I do like this definition of AI, but it is probably more a definition of "self-awareness".

With deep learning we now automate a lot of tasks that only humans were able to do until now, so in my opinion many of those systems should be called intelligent. Usually, the working definition of intelligence is "anything that machines cannot do", and by that definition there never will be real AI. But saying a machine is intelligent because it does something unexpected out of boredom...

Replies are listed 'Best First'.
Re^5: How will Artificial Intelligence change the way we code?
by BrowserUk (Patriarch) on Jun 11, 2018 at 13:51 UTC
    I do like this definition of AI, but it is probably more a definition of "self-awareness".

    I agree that for a program to decide "I'm bored" -- or angry or in-love or scared -- is really a definition of consciousness; but I wonder if you can have intelligence without consciousness?

    There is no doubt that some of the more sophisticated "AI" applications now coming online -- like Google's AI assistant booking a hair appointment by phone -- are really blurring the lines of what constitutes AI.

    For me, anything that is simply a decision tree -- no matter how deep and wide that tree is, or how many billions of decisions it makes per second -- is just programming. Very clever programming perhaps; but still just programming. A human (or many humans) constructed algorithms to process inputs and produce outputs. And for any given set of possible inputs, the resultant outputs can be anticipated; even if it requires another computer program (constructed by humans) to make the predictions. And ultimately, given a fixed set of inputs, you can expect the same outputs.
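    The determinism point can be sketched in a few lines of Python (a contrived toy rule set, not any real engine): however deep the tree of hand-written rules grows, identical inputs can only ever yield identical outputs.

```python
# A decision tree, however deep, is just a deterministic function:
# identical inputs always map to identical outputs.

def decision_tree(inputs):
    """Toy stand-in for an arbitrarily large tree of hand-written rules."""
    temp, humidity = inputs
    if temp > 30:
        return "cool" if humidity < 50 else "dehumidify"
    elif temp < 10:
        return "heat"
    return "idle"

# Running the same inputs twice can only ever produce the same output.
assert decision_tree((35, 40)) == decision_tree((35, 40))
print(decision_tree((35, 40)))  # → cool
```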

    Even if the AI has been programmed with a learning capacity -- so that it learns from one run not to make the same decisions during the next, given the same inputs -- if you consider two consecutive sets of identical inputs as a single input, then the final output again becomes predictable.
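    A toy sketch of that "learning" case (all names are invented for illustration): once the learner's memory is folded into the input, the composite behaviour is predictable again.

```python
# Even a "learning" system is deterministic once its memory is counted
# as part of the input: run it twice from a fixed starting state and
# the pair of runs, taken together, is always the same.

class TinyLearner:
    def __init__(self):
        self.seen = set()  # memory of decisions from earlier runs

    def decide(self, stimulus):
        # Avoid repeating a decision already made on a previous run.
        if stimulus in self.seen:
            return "try something else"
        self.seen.add(stimulus)
        return "default action"

def run_twice(stimulus):
    learner = TinyLearner()  # fixed starting state
    return (learner.decide(stimulus), learner.decide(stimulus))

# Treating the two consecutive identical inputs as one composite input,
# the composite output never varies:
assert run_twice("A") == ("default action", "try something else")
```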

    So maybe that constitutes a better definition of AI, one that does not go as far as consciousness: a program that can produce outputs that are unpredictable.

    But then that falls down, because it is easy to see that just throwing in a few (statistically driven) random possibilities -- e.g. when two possible decisions generate equal scores from the fitness algorithm, you toss a coin to choose between them -- lets you approximate unpredictability by dint of chance. But does that constitute intelligence?
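    The coin-toss tie-break might be sketched like this (the toy fitness function and candidate names are invented for illustration); note that with a fixed seed even the "chance" is reproducible, which rather reinforces the point:

```python
import random

def fitness(decision):
    # Toy fitness function under which both candidates score equally.
    scores = {"advance": 10, "retreat": 10}
    return scores[decision]

def choose(candidates, rng):
    # Keep every candidate tied for the best score, then toss a coin.
    best = max(fitness(c) for c in candidates)
    tied = [c for c in candidates if fitness(c) == best]
    return rng.choice(tied)

rng = random.Random(42)  # seeding makes even the "chance" reproducible
picks = {choose(["advance", "retreat"], rng) for _ in range(100)}
print(sorted(picks))  # over repeated runs, both tied options turn up
```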

    I'm somewhat undecided on that, because there are a huge number of human 'discoveries', including many that have seen their discoverers awarded Nobels and similar, that came about as a result of chance (24 accidental scientific discoveries). The real intelligence behind many of those discoveries is not the discovery itself, but the ability to recognise the potential of something that you encounter when you weren't looking for it.

    And that is where I think I would hang my hat for a definition of AI. The ability to recognise potential solutions to problems you aren't trying to solve. Because I think this is actually feasible.

    There are also many cases of 'discoveries' having been encountered, but simply not recognised until years or decades later; because they weren't seen by the right person. And this is where software in a connected world could really make progress.


    (WARNING! A somewhat abbreviated and contrived example coming up.)

    Big pharma is probably currently running targeted but brute-force examinations of millions of compounds, looking for: new antibiotics; (many different types of) cancer drugs; anti-aging compounds; better targeted pain relief for specific conditions (migraine, arthritis); and compounds to cure, reduce or relieve a bazillion other conditions, from Alzheimer's to Sickle Cell to depression to bulimia to osteoporosis to whatever.

    Imagine if, instead of them all operating their sequencing, gene-folding and compound-characterisation suites in proprietary isolation, each targeting individual cures, new compounds were characterised collectively, and as widely and generally as possible; the resulting characteristics were freely exchanged; and the targeted, proprietary searches were applied to the mass of new compounds as they were formulated, generated, discovered or hypothesized.

    Individual researchers, experts in their particular fields, would write (and continuously develop) AIs to filter the stream of generalised characteristics from all sources, producing a subset worthy of further examination and characterisation for their specialist discipline. Any of that subset that failed to meet their needs once further characterised would have those extra characteristics added to the original set, and the compound would be fed back into the head of the stream to pass through all the other filters again.

    This is using the power of software/algorithms to perform fixed sets of identical operations repetitively and quickly to its fullest advantage; doing each combination of operations only once and then storing the results -- no matter how unpromising they may be -- until someone (or some algorithm) finds a use for them.
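    In software terms, that "do each combination of operations only once and store the results" discipline is essentially memoisation; a minimal Python sketch (the characterisation step is a stub, and all names here are invented):

```python
import functools

# Each (compound, operation) combination is computed exactly once;
# the stored result is reused by anyone who later finds it useful.

@functools.lru_cache(maxsize=None)
def characterise(compound, operation):
    """Hypothetical expensive characterisation step (stubbed here)."""
    print(f"computing {operation} for {compound}")  # runs once per pair
    return hash((compound, operation)) % 1000       # stand-in result

characterise("compound-X", "solubility")  # computed and stored
characterise("compound-X", "solubility")  # served from the store
```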


    Bringing this back to the thread subject: the most recent example of a hotbed of software development was JavaScript engines, when Chrome, Firefox, IE/Edge and (for a while) Opera went head to head to improve the performance of their JS engines. And the results of the competition were remarkable: orders-of-magnitude improvements in a whole raft of specific areas, as they jockeyed for bragging rights over a few years.

    Now imagine not just them sharing their individual steps forward so everyone benefited from each; but taking the time to train an AI to recognise the thought processes of the developers that allowed those steps forward to be achieved. As the individual developers and teams move on to other things, leave, retire or die, some of their intuition is encapsulated into an algorithm, decision tree or inferencing engine and can be left running on idle in the background going over and over the code looking for new opportunities to apply that intuition.

    And each change to the sources made by any one AI -- or indeed any human developer -- might create new opportunities for any of the other AIs to apply its 'intuition' at a new site in the sources. That would be a 'continuous development' process worthy of the name.

    When I've brought up similar ideas of cross-proprietary development in the past, the argument against it has always been: why would companies give away their advantage? But it is really a non-argument. The life blood of most technology companies -- especially the bigger ones with deep research teams and budgets -- is licensing. And cross-licensing. And there is no reason why that couldn't not only continue, but be automated.

    The other argument levelled against it is that you would end up with one browser written by 3 or 4 different companies, and innovation would soon cease; but that is another non-argument. Rather than 3 or 4 companies all trying to write the one browser that everyone uses to the exclusion of all others, even though it is an imperfect fit for all of them -- as has been the history of browser development to date -- you enable a situation where lots of smaller companies can develop specialist versions that target vertical groups of users, whilst still benefiting from the general developments as they become available.

    This is already happening to an extent: Opera, Vivaldi, SlimJet are all Chromium-based browsers that seek to differentiate themselves by targeting smaller subsets of users rather than the entire world; but they are mostly based upon the OSS idea of feeding back developments into a common source that then becomes the base for all downstream spin-offs. And that does tend to lead to a single source with little more than themes and decorations to distinguish them. (Or the Linux distribution model where you have a bazillion different versions each with 2 or 3 developers targeting their own preferences and requirements; and no way for others to mix and match features to get what they want, without forking YALD.)

    If new developments and innovations could be encapsulated as AIs; that could be 'applied' to different sets of sources looking for opportunities to make changes; then the owners/developers of those sources could then elect which if any of the possible changes they wish to adopt. And along the way, their combination of chosen applications might breed new innovations that they can feed back to the wider development process for a micro-payment from each other browser that adopts it.

    They say I'm a dreamer; but am I the only one? :)


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
    In the absence of evidence, opinion is indistinguishable from prejudice. Suck that fhit
Re^5: How will Artificial Intelligence change the way we code?
by LanX (Saint) on Jun 11, 2018 at 15:47 UTC
    Boredom results from a psychological drive, like hunger or desire for sex.

    I don't see a reason why it shouldn't be implemented or result from a genetic algorithm.

    For instance, AlphaGo was trained by playing countless games against other instances of itself, and for efficiency they included a "fairness threshold" to stop playing out obviously decided games.
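    A rough sketch of such a threshold in a self-play loop (the cutoff value and all names are invented; this is only in the spirit of that training setup, not its actual code): once one side's estimated win probability drops below the cutoff, the game is abandoned rather than played out.

```python
RESIGN_THRESHOLD = 0.05  # illustrative cutoff, not the real setting

def self_play(evaluate, make_move, max_moves=400):
    history = []
    for _ in range(max_moves):
        win_prob = evaluate(history)       # value estimate for side to move
        if win_prob < RESIGN_THRESHOLD:
            return history, "resigned"     # obviously decided: stop early
        history.append(make_move(history))
    return history, "played out"

# Stub evaluator whose win estimate decays as the game drags on:
moves, outcome = self_play(lambda h: 0.5 * (0.8 ** len(h)), lambda h: "pass")
print(outcome, len(moves))  # → resigned 11
```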

    Alternatively a reflection layer in a more sophisticated product could identify such needless games (lack of input, lack of novelty), and a human observer might call this concept "boring".

    Since animals can get bored too, it's problematic to connect this to consciousness.

    I'm skeptical about this anthropocentric need to define intelligence by human concepts, which are a result of our evolution.

    But "boredom" might be just an universal need to select good training input for a neuronal network.

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    Wikisyntax for the Monastery

      Since animals can get bored too, it's problematic to connect this to consciousness.

      Do you not distinguish between "being bored" and deciding "I'm bored and I'm going to do something about it."?

      Zoo animals often display signs of what we perceive as boredom; but I don't recall any case of them inventing a game to combat it.

      Corvids certainly have the skills and intelligence to solve problems, but with the rare and singular exception of a Skiing Crow; they only seem to use it for goal-oriented -- get the food -- activities; not play or boredom relief.

      Dolphins have been observed playing tag with seaweed, which definitely constitutes inventing a game and playing it.

      And they are only two of several species that are known/have been demonstrated to be self-aware in as much as they can recognise their own reflections.

      I think your implication that animals cannot be conscious is specious. (Or would that be specist? :)


        > I think your implication that animals can not be conscious is specious.

        That's not what I said. Animals know curiosity and boredom. Both are related to stimulus.

        > Do you not distinguish between "being bored" and deciding "I'm bored and I'm going to do something about it."?

        For instance: Bored cats "creating their own version of stimulation." *

        Over-training -- one of the problems of machine learning -- sounds pretty much like something best solved by defining and minimizing a "boredom" factor.
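        Minimising a "boredom" factor sounds a lot like classic early stopping; a generic sketch (not tied to any framework, names invented): training halts once the validation loss stops showing the network anything new.

```python
# Early stopping as a crude "boredom" signal: when the validation loss
# stops improving, the network is no longer learning anything new, so
# training halts before it over-fits.

def train_with_boredom(val_losses, patience=3):
    """Return the epoch at which training stops out of 'boredom'."""
    best, bored_for = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, bored_for = loss, 0  # novelty: keep going
        else:
            bored_for += 1             # nothing new this epoch
            if bored_for >= patience:
                return epoch           # bored: stop training
    return len(val_losses) - 1

# Validation loss improves, then plateaus -- training stops early:
print(train_with_boredom([1.0, 0.7, 0.5, 0.5, 0.52, 0.51, 0.50]))  # → 5
```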

        Cheers Rolf
        (addicted to the Perl Programming Language :)
        Wikisyntax for the Monastery

        update

        *) "Bored cats sometimes create their own entertainment—such as playing with toilet paper rolls, climbing the curtains or engaging in other unappreciated behaviors."
