Artificial Intelligence

    How suitable is PB for AI applications?

    We have:
    • very fast processing
    • good data definitions
    • good external links to whatever we need
    • Windows and other links
    • absolutely huge arrays, etc.
    • fast array processing
    • macros, functions, subroutines, etc.
    • ability to handle huge numbers
    What else? (A small sketch of the array side in action follows below.)
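
    To make the 'huge arrays / fast array processing' bullets concrete, here is a minimal, hypothetical sketch of a perceptron-style weighted sum over large PB arrays. The names (Predict, w, x) are illustrative only, and real training code would add a weight-update loop:

        #COMPILE EXE
        #DIM ALL

        ' Weighted sum plus threshold: the smallest building block of a neural net.
        FUNCTION Predict(w() AS DOUBLE, x() AS DOUBLE) AS LONG
          LOCAL i AS LONG
          LOCAL s AS DOUBLE
          FOR i = LBOUND(w) TO UBOUND(w)
            s = s + w(i) * x(i)
          NEXT
          IF s > 0 THEN FUNCTION = 1 ELSE FUNCTION = 0
        END FUNCTION

        FUNCTION PBMAIN() AS LONG
          DIM w(1 TO 1000000) AS DOUBLE   ' PB dimensions million-element arrays without fuss
          DIM x(1 TO 1000000) AS DOUBLE
          ' ... fill w() and x(), call Predict(), nudge w() whenever it is wrong ...
        END FUNCTION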

    Are we in a good position to contribute to the coming AI revolution?

    What other features do we need?

    Should we make it a discussion point?

    Kerry
    I made a coding error once - but fortunately I fixed it before anyone noticed
    Kerry Farmer

  • #2
    Kerry,

    Likely some would argue that PB needs to be object oriented and 64-bit, but if you put your mind to it I am certain you can do some amazing stuff that looks, smells, and tastes like AI. My problem is the compile time: 560,000 lines of code take 40-plus seconds to compile. My design laptop is nearly 10 years old, so my compile time is probably a reflection of that. And yes, I use José's includes, which also seem to affect the compile-time results.

    Personally, I believe that AI will not be just one program. It will be an amalgamation of apps, each providing input to one or more other apps, similar to the Space Shuttle with its multiple processors that voted on the right answer to a control question.

    Yes, the Space Shuttle's processing was done that way for redundancy; cosmic rays can play havoc with processors. You can certainly use that same approach nowadays, especially now that multicore processors are prevalent.
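
    A 2-out-of-3 vote of that sort is tiny in PB. This hypothetical MajorityVote is just a sketch of the idea, with each argument standing in for one redundant app's answer:

        ' Returns the answer at least two of the three voters agree on;
        ' if all three disagree, it arbitrarily falls back to the second voter.
        FUNCTION MajorityVote(BYVAL a AS LONG, BYVAL b AS LONG, BYVAL c AS LONG) AS LONG
          IF a = b OR a = c THEN FUNCTION = a ELSE FUNCTION = b
        END FUNCTION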

    Some would argue that AI needs to be self-taught. If the project you are working on is limited, then likely the AI will be limited as well; if your project is a general-purpose/multipurpose learning machine, then the sky is the limit. Preprocessing will be needed to make your project responsive and useful.

    Vision systems:
    If you are doing image comparisons you will need to do color analysis, edge detection, 3D projection, image rotation, etc.
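
    As one concrete taste of that vision list, here is a deliberately crude, hypothetical edge-detection pass (a horizontal central difference on a grayscale image; a real system would use a Sobel or similar 2-D kernel):

        ' Marks a pixel as an edge when the horizontal brightness change
        ' exceeds a threshold. img() and edges() are w-by-h grayscale planes.
        SUB EdgeDetect(img() AS BYTE, edges() AS BYTE, BYVAL w AS LONG, BYVAL h AS LONG)
          LOCAL x AS LONG, y AS LONG, g AS LONG
          FOR y = 0 TO h - 1
            FOR x = 1 TO w - 2
              g = ABS(img(x + 1, y) - img(x - 1, y))   ' central difference
              IF g > 32 THEN edges(x, y) = 255 ELSE edges(x, y) = 0
            NEXT
          NEXT
        END SUB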

    Speech systems:
    If you are doing speech recognition processing you will need to do phrase matching, phrase/word interpretation, meaning speculation based on historic references, integration with vision system processing to identify subject referenced, etc.
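
    And a toy illustration of the phrase-matching step, again hypothetical; real recognizers score many candidates probabilistically rather than taking the first substring hit:

        ' Returns the index of the first known phrase found in the utterance,
        ' or -1 if nothing matches. Case-insensitive substring match only.
        FUNCTION MatchPhrase(utterance AS STRING, phrases() AS STRING) AS LONG
          LOCAL i AS LONG
          FOR i = LBOUND(phrases) TO UBOUND(phrases)
            IF INSTR(UCASE$(utterance), UCASE$(phrases(i))) THEN
              FUNCTION = i
              EXIT FUNCTION
            END IF
          NEXT
          FUNCTION = -1
        END FUNCTION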

    Touch systems:
    If you are doing touch comparisons you will need to do digit location, edge detection, temperature measurement and projection, object rotation analysis, surface anomaly detection, surface recognition, and integration with vision systems and speech systems processing to identify subject referenced, etc.

    Taste systems:
    1) Chemical analysis...
    2) Integration with vision, speech, touch systems to identify subject referenced.

    Environmental/location systems:
    1) GPS - location
    2) Temperature
    3) Wind dynamics
    4) Speed
    5) Acceleration
    6) Time/date
    7) Obstruction detection and location
    8) Organic matter recognition and location
    9) Integration with vision, speech, touch, taste systems to identify subject referenced.

    Safety and diagnostics:
    System/systems analysis:

    and the list goes on and on.

    Perhaps we need another Manhattan Project to get us there. Any takers to that end?
    Last edited by Jim Fritts; 7 Nov 2018, 09:58 AM.



    • #3
      I think AI is more for the GPU space than the CPU space. The speed and massive parallelism of GPU programming allow AI to be more powerful.
      George W. Bleck



      • #4
        Unless we have an ample supply of sensors and/or data inputs, and speedy reading of those sensors/data, the system will not perform as advertised. I'm with George on that point: bulk multi-parallelism is key. The system I designed and built works over a normal-speed USB connection, so reading data is painfully slow. 4 BIT, don't you know.

        However, the output is 8 BIT and it can switch 30 relays a second.



        • #5
          Thanks for interesting answers.

          Can we do multi-parallelism by using multiple linked CPUs, à la Google?

          Are we confusing hardware limitations with software capabilities?

          Doesn't PB manage GPU processing?

          And yes, I am on for a Manhattan project!

          [I read a book about 10 years ago. It asked what the effect would be if you could buy a $1000 computer in 2030 that was smarter than a human, and it investigated what we mean by 'smarter than a human'. The whole concept fascinated me and still fascinates me.]
          I made a coding error once - but fortunately I fixed it before anyone noticed
          Kerry Farmer



          • #6
            FWIW, the only thing I disagree with in Jim's assessment is the multiple-apps concept. We are thinking of intelligence created to seem human, and as each human is complete and intelligent in their own stand-alone way, so must each app be. The app must then take over the machine it is on and be able to decide what inputs it wants or needs and where to find them, which means it might have to be a mobile machine, or in fact robotic. It also means that it must be capable of deciding what its purpose is, and it must develop its own judgement of success or failure. Most of this falls under the 'self-taught' concept that Jim mentioned.

            It might be easier to clone a human, take out the part(s) that makes it human, and if it still functions you will have your AI. Perhaps it's a medical issue, not a programming issue.
            Rod
            "To every unsung hero in the universe
            To those who roam the skies and those who roam the earth
            To all good men of reason may they never thirst " - from "Heaven Help the Devil" by G. Lightfoot



            • #7
              I think AI is hype. Neural networking has been around for a while. Do you really think a good AI programmer would choose the Windows OS?
              p purvis



              • #8
                Doesn't PB manage GPU processing?
                Only if PB's user (aka "programmer") tells it to do so.
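
                Out of the box PB targets the CPU; GPU work would mean calling an external library (an OpenCL DLL, say) through DECLAREs. What PB does give you natively is multicore parallelism via threads. A minimal sketch, from memory of the PB/Win 9+ syntax, with Worker and its payload purely illustrative:

                    THREAD FUNCTION Worker(BYVAL chunk AS LONG) AS LONG
                      ' ... crunch one slice of the data here ...
                      FUNCTION = 0
                    END FUNCTION

                    FUNCTION RunBothCores() AS LONG
                      LOCAL h1 AS LONG, h2 AS LONG, r AS LONG
                      THREAD CREATE Worker(1) TO h1   ' each thread can land on its own core
                      THREAD CREATE Worker(2) TO h2
                      THREAD WAIT h1 TO r
                      THREAD CLOSE h1 TO r
                      THREAD WAIT h2 TO r
                      THREAD CLOSE h2 TO r
                    END FUNCTION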

                As for the general topic of "AI": the few messages here deal with different aspects of "AI", and that's due to the overly broad meaning AI bears these days, which therefore often differs for everyone.

                "AI" really is a mixture of a lot of different aspects, e.g.

                - Algorithm
                You can throw as much iron as you wish at your poorly written algorithm; it'll never reach the oh-look-how-clever realm of that smart-watch app by the 'lil genius programmer. (A small search sketch at the end of this post makes the point.)

                - Hardware
                Given equally capable programmers, hardware does make a difference, though

                - Compilers/programming languages
                Some compilers/languages might actually be geared more towards these kinds of programming tasks than others.

                - What's the "job" for that AI?
                We've come a long way with all sorts "AI". AI is especially successful in targeted scenarios. Not so much in "simulate a human" as of now. E.g. even nowadays, given the choice, you'd better pick the medic robot for your surgery than the human doctor. research has shown the robodocs less err-prone than its human equivalent. But the robodoc ofc would make one helluva terrible car driver. I'll take the human doc everyday for that.

                The last one also partly answers the implied "smarter than human" question: autonomous vehicles (AVs). They almost never break the rules. They don't speed, they don't run red lights, they keep their lane, they use their indicators the way they're supposed to, they don't drive under the influence, they don't play with their phone while driving, they never lose attention. In short: they behave far more rationally than their irrational, unpredictable human counterparts. The latter are currently the biggest challenge for AVs. And it is kind of a self-solving problem: the more AVs are out there, the fewer irrational humans there are, and the fewer the occasions where the rational AI has to "calculate stupidity".

                And in contrast to Paul, I don't think AI is hype. It's a thing that slowly evolves over the decades and becomes part of our lives without us much noticing it, just like computers and the internet did over the past couple of decades. Tell someone around 1970 (or even 1980) that everyone would have at least one computer at home that outperforms everything in existence back then? Insanity!
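
                As promised above, a small sketch of why the algorithm outweighs the iron, assuming a sorted million-entry lookup table: the linear scan averages roughly 500,000 comparisons per query, the binary search about 20. No hardware upgrade closes that gap.

                    ' Linear scan: walks the whole array in the worst case.
                    FUNCTION LinearFind(a() AS LONG, BYVAL target AS LONG) AS LONG
                      LOCAL i AS LONG
                      FOR i = LBOUND(a) TO UBOUND(a)
                        IF a(i) = target THEN FUNCTION = i : EXIT FUNCTION
                      NEXT
                      FUNCTION = -1
                    END FUNCTION

                    ' Binary search: halves the remaining range each step (array must be sorted).
                    FUNCTION BinaryFind(a() AS LONG, BYVAL target AS LONG) AS LONG
                      LOCAL lo AS LONG, hi AS LONG, m AS LONG
                      lo = LBOUND(a) : hi = UBOUND(a)
                      DO WHILE lo <= hi
                        m = (lo + hi) \ 2
                        IF a(m) = target THEN FUNCTION = m : EXIT FUNCTION
                        IF a(m) < target THEN lo = m + 1 ELSE hi = m - 1
                      LOOP
                      FUNCTION = -1
                    END FUNCTION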



                • #9
                  BTW, here's an article from today about the advancement of (what we perceive as) AI.



                  • #10
                    Maybe Paul Dixon can get the AI in his Windows 10 machine to quit updating.
                    I have a big problem with the phrase 'artificial intelligence'. Like many others here, I have written code to narrow a problem down to either a single result or several possible results, where a person had to supply the correct one because of a lack of information.
                    There was nothing intelligent about that. It was programmed to my wishes, doing what I wanted it to do.
                    Planes fly by computers, but there is always somebody ready to take over. That is all smart automation. But intelligence? I have to stop there.
                    That car that is going to drive itself is going to break. Why? Because it is going to hug the road and hit every bad spot on the shoulder. My back hurts enough without hitting those spots continuously.
                    p purvis



                    • #11
                      I landed in Providence, Rhode Island some years ago in almost-no-visibility fog. The plane landed so softly that you would never believe it. When I got off the plane, I told the captain how well he did. He said, oh, the plane landed itself. I don't fly much, so I said, it did? He said yes. He told me he was over the controls the whole time, ready to take any action if necessary. I asked him if most planes could do that. He said no, but the ones they fly there have that capability due to a lot of wintertime fog.
                      That plane relied on a lot of things being correct, and something being off could easily have cost not just me but a lot of people their lives. As we all know, mechanical things are going to break, and some are going to break at absolutely the wrong time. Did that plane land better than the pilot could have done manually?
                      I mean, he can only observe so many things at a time.
                      I might be here today because that plane was programmed to land itself. But there is one big thing you have to do: you have to give up control to something else. Something else has to be in control, and you have to
                      place confidence, a lot of confidence, that everything will go as planned, that somebody did the right programming, and that there was no glitch. Do you think that plane had multiple computers in it, all agreeing on an action to take like the Space Shuttle had? I doubt it.
                      It was an educated guess that gambled with my life. I only have one life. There is no telling how much time was put into making all that work.
                      A guy I know runs the whole sales department for a big auto dealership. I had something wrong with my vehicle, or at least we were discussing that. He told me he had a customer driving down the interstate at full speed when suddenly the vehicle shifted down into first gear.
                      Nobody got hurt. The vehicle thought it was going slow.
                      p purvis



                      • #12
                        Originally posted by Paul Purvis:
                        I have a big problem with the phrase 'artificial intelligence'. Like many others here, I have written code to narrow a problem down to either a single result or several possible results, where a person had to supply the correct one because of a lack of information.
                        There was nothing intelligent about that. It was programmed to my wishes, doing what I wanted it to do.
                        I'd highly recommend that you read the article I linked above, because those AlphaGo and AlphaZero algorithms "researched" (aka "learned") the answers themselves. Current specialised AI is way beyond (oversimplified) deep nested SELECT CASE blocks and lightning-fast database lookups. That's what Deep Blue did in the late 1990s: throw as much computing power as possible at the task and brute-force your way through lists of provided, known-to-be-good solutions.

                        The latter is actually seen as a hindrance these days: it adds artificial limitations on what the AI is actually capable of.

                        Here's a (larger) quote from TFA that should give you an idea:
                        “Last decade, humans could always say, ‘Yeah, sure, computers are better than us, but that’s just because they think faster and have a lot more thinking capacity. It’s not because they’re innately smarter than us—they just have a lot more engineering power, and if my brain was as big as a computer, I would be able to beat the computer,’” said Sam Ginn, an AI researcher in the Stanford Artificial Intelligence Lab. “But now AlphaZero comes along, and one, its brain is much smaller [than those of other chess computers], it doesn’t require that much computational power, and two, it’s not searching, it’s not doing brute force. It is just learning in the exact same way that a human would learn to play chess. That’s what humans do to learn: You play chess, you play games against each other, and you learn over time. So it’s learning very analogously to how humans learn, and it’s able to do it much quicker and much better.”

                        [...]

                        “AlphaZero becomes philosophically interesting now, because the question is, where will this AI go?” Ginn said. “This is where AI is meeting creativity. Beforehand, it was just really, really fast at thinking. Now it’s able to be creative, it’s able to hit on things that humans used to think were intuition. That’s kind of like the humans’ last flagpole of hope, that computers can’t do intuitive things. No computer would be able to invent Mozart or do anything creative, but when you look at AlphaZero, it’s bordering on creativity, it’s bordering on intuition.”

                        As Steiner wrote: Chess and music share something in common, even if we don’t fully understand what that is. At the very least, we can recognize that there is a knack for patterns, an understanding of arrangement and progression, that unites human achievement in both disciplines, and that ability to recognize patterns—an inherently human trait—is what makes AlphaZero’s achievement so startling. What Steiner’s prepubescent virtuosos lack in intellectual and emotional maturity—the socialization and acculturation that we experience as we grow older—they made up for in this innate understanding of patterns. To an extent, the same could be said of AlphaGo and AlphaZero, which cannot do anything other than play either chess or Go but seem to exhibit genuine creativity and ingenuity within those realms. For example, during the second game of AlphaGo’s match against Go master Lee Sedol, the machine made a move so unprecedented and idiosyncratic that observers used a very un-mechanical word to describe it: beautiful.



                        • #13
                          Knuth, I read the bulk of the story on a small device yesterday. I also recognize the source, and I take that, and the funding of some of those sources, into account too. I just think the wording of AI is misleading. Why would 'artificial deduction' not be better? Does artificial mean it does not exist? This is why I said I think AI is hype.
                          p purvis



                          • #14
                            This speech by Ronald Reagan helped collapse the Russian empire before the fall of the Berlin Wall, because Russia put massive amounts of money into defending against something that did not exist. Other factors existed too, such as Russia's spending on its Afghan war.
                            Probably a lot of what Russia thought came from events like the ones you mentioned, and maybe the term AI helped too.
                            https://m.youtube.com/watch?v=4hGLBA65tZg
                            p purvis



                            • #15
                              Originally posted by Paul Purvis:
                              Why would 'artificial deduction' not be better? Does artificial mean it does not exist? This is why I said I think AI is hype.
                              Oh, so you're arguing semantics? Well, I personally don't care. It's just that the term "AI", be it fitting or not, is the established one. I'm happy to call it whatever the masses decide, although I personally think 'deduction' doesn't give the current state enough credit.

                              [...] the machine made a move so unprecedented and idiosyncratic that observers used a very un-mechanical word to describe it: beautiful.
                              I personally have a hard time imagining that beauty can be deduced; beauty gets created. But whatever it is called, I find its development and future implications interesting, fascinating, and also scary, in a lot of ways: scientifically, socially, ethically, philosophically.


                              And I do know from speaking with friends and family that most of them aren't aware of how far that technology has already come, and that lots of what they still consider science fiction is already in everyday use.



                              • #16
                                Originally posted by Paul Purvis:
                                Why would 'artificial deduction' not be better? Does artificial mean it does not exist? This is why I said I think AI is hype.
                                I could go along with a different description.

                                Is what I do with my brain 'intelligence'? Or is it a complex processing sequence using lots of data, one which just appears to be intelligent? Is my thinking just an algorithm?

                                [Am I alive? Or do I just appear to be alive to me and to you?]

                                Puts a different slant on the Turing test. Is this device outputting the same data as my brain does (or might) in the same circumstances? Then, inasmuch as my brain is alive, this device is alive (especially if it can reproduce and repair itself).

                                And Knuth - beauty is always about comparison. Nothing is beautiful in its own right; it is merely more beautiful than something else.

                                But back to naming AI. Perhaps all AI is, is 'Complex Algorithms with Lots of Data' which appears to be intelligent - CALD. For 'data' we might have to include stored data, input data and output data. We might want to include a quickness factor - but I do not think so. The algorithms might have to be ordered, but that in itself is part of the algorithm. The algorithm might have to decide which data to use or display - but that is just algorithm too.

                                AI appears to us to be intelligent because we accept its output without knowing, or being conscious of, all the data and algorithms used. We might also be impressed by the speed of processing.

                                Perhaps that is all our brains are - CALD devices - with flaws in both the data and the algorithms.

                                Is the AI ‘learning’ thing just a side issue? Merely an algorithm input option? Is ‘learning’ just another way of writing algorithms? Or do we want to add ‘learning’ as one of the AI definition parameters?

                                'Complex Learned Algorithms with Lots of Data' = CLALD, or 'Complex Algorithms Learned Lots of Data' = CALLED.

                                Maybe even ‘Complex Algorithms Learned Masses of Data’ = 'CALMED'

                                It is of course eminently possible that AI is simple algorithms with masses of data, or complex algorithms with not much data.

                                Are Google and Facebook AI systems? - I would probably argue 'yes' especially for Google.

                                HMMM I need to think more about this. Can anyone suggest any data or algorithms that I need to add to my existing supply?

                                Kerry


                                PS added
                                Included in 'output data' might be data which makes some external device do something.
                                Last edited by Kerry Farmer; 9 Nov 2018, 06:43 PM. Reason: keep thinking about it
                                I made a coding error once - but fortunately I fixed it before anyone noticed
                                Kerry Farmer



                                • #17
                                  Perhaps AI will mean thinking machines that are kept alive by humans arranging their power supply.
                                  Rod
                                  "To every unsung hero in the universe
                                  To those who roam the skies and those who roam the earth
                                  To all good men of reason may they never thirst " - from "Heaven Help the Devil" by G. Lightfoot



                                  • #18
                                    Originally posted by Rodney Hicks:
                                    Perhaps AI will mean thinking machines that are kept alive by humans arranging their power supply.
                                    I have read about AI machines that can find their own power sources - so I do not think that works!
                                    I made a coding error once - but fortunately I fixed it before anyone noticed
                                    Kerry Farmer



                                    • #19
                                      I was insinuating that no AI should be built without the ability to shut it down in times of duress or malfunction or on a whim.
                                      Rod
                                      "To every unsung hero in the universe
                                      To those who roam the skies and those who roam the earth
                                      To all good men of reason may they never thirst " - from "Heaven Help the Devil" by G. Lightfoot



                                      • #20
                                        Originally posted by Rodney Hicks:
                                        I was insinuating that no AI should be built without the ability to shut it down in times of duress or malfunction or on a whim.
                                        Unless it is your pacemaker.....
                                        I made a coding error once - but fortunately I fixed it before anyone noticed
                                        Kerry Farmer

