What do you mean by “us”?
you and me …
OK, I don’t see that happening
This post turned out to be more of a random ramble about what I learned this past year about the current state of AI; sorry if it’s not really going anywhere.
I believe we’re really, really far from general-purpose AI. I read dozens of machine learning papers on arXiv and, while there are great ideas and huge improvements, much of what we have is just newer and newer tricks to teach networks different kinds of pattern recognition, regression, and generation problems, faster and faster, from less and less data, with deeper and deeper networks, for smaller and smaller improvements. If we’re looking at it from the point of view of developing a Super AI, these are some of the very early steps of preparation: gathering the straw for the bricks of the castle.
The most lifelike application is reinforcement learning, where the rewards are more separated from the actions (just as in reality) than in more direct forms of learning, where the feedback is immediate. There are really cool improvements here, especially from last year, and when you see a computer playing, say, an FPS like a pro, that’s kinda creepy. Still, there is no real model for “thinking” in the conscious human sense: having aspirations, aesthetics, goals.
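To make that reward separation concrete, here’s a minimal toy sketch (entirely my own illustration, not from any particular paper): tabular Q-learning on a tiny corridor where the only reward arrives at the very last state, so credit for the early actions has to propagate backwards over many episodes.

```python
import random

# Toy sketch (my own, purely illustrative): tabular Q-learning on a
# 5-state corridor. The ONLY reward arrives at the final state, so
# credit for early actions propagates backwards over episodes --
# the "separated reward" that makes RL feel lifelike.
N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit, sometimes explore
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward only at the goal
            nxt = 0.0 if s2 == N_STATES - 1 else max(q[(s2, act)] for act in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * nxt - q[(s, a)])
            s = s2
    return q

q = train()
# Greedy policy per state: after training it should be "step right" everywhere
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The point of the toy: no single step is rewarded, yet the agent still learns the whole sequence, which is exactly what the supervised, immediate-feedback setups can’t model.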
Another huge area that’s missing (though work is being done) is longer-term planning. Right now, things that require hierarchical plans have to store state somewhere outside the network. That’s not necessarily a problem, but we humans somehow manage to do complex stuff without jotting down every step in a notebook, so this is clearly an area for improvement.
By the way, we don’t even have a real mathematical framework for artificial neural networks. I’m not talking about the basic matrix calculus, of course, but many of the algorithms and practices that work do so for no apparent reason, or even against it. I’ve read/heard Geoff Hinton say things like (and I’m paraphrasing) “I believe max pooling is a huge mistake; it’s a shame it works so well” and “everybody thinks we had a good reason to use the sigmoid for the nonlinearity, but really it just looked like a good idea.” Similarly, Ian Goodfellow wrote (I think during his Reddit AMA) that he’s continuously trying countless ideas in parallel (even while answering the questions), and then some of them happen to work better and those can be investigated.
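For what it’s worth, both quoted quirks are easy to see in a few lines of code (my own toy illustration, nothing from Hinton or Goodfellow): max pooling discards *where* a feature was, and the sigmoid’s gradient vanishes away from zero, which is why training with it can stall.

```python
import math

# Toy illustration (my own): the two quirks quoted above.
# 1) Max pooling keeps only the largest value per 2x2 block,
#    throwing away the exact positions of the features.
def max_pool_2x2(img):
    return [[max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
             for j in range(0, len(img[0]), 2)]
            for i in range(0, len(img), 2)]

# 2) The sigmoid saturates, so its gradient is near zero for large inputs.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

img = [[1, 0, 2, 3],
       [4, 6, 6, 8],
       [3, 1, 1, 0],
       [1, 2, 2, 4]]
print(max_pool_2x2(img))           # [[6, 8], [3, 4]] -- positions of maxima are lost
print(round(sigmoid_grad(0), 4))   # 0.25, the steepest point
print(sigmoid_grad(10) < 1e-4)     # True: near-zero gradient, learning stalls
```

Yet with these “unjustified” pieces, networks train just fine in practice, which is exactly the missing-framework point.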
@whiteoutmashups, while this movie is very dated (old), it should help reinforce your nightmares and turn them into screammares.
We built a super computer with a mind of its own and now we must fight it for the world!
The movie played on the fears people had in the ’70s about computers taking over the world. This fear of AI is nothing new; in fact it’s very old. People feared other tribes taking them over, then other countries and religions; after WWII it was the aliens, then computers.
We have now just narrowed it from the broad concept of computers to AI.
It’s a fear that humans have had for all of human history. We fear what we ourselves do to others: takeover.
Hence the infamous quote: “I, for one, welcome our new overlords.”
Instead of becoming stand-alone entities with their own “will” (or whatever we call what would model a close-enough concept), I think AI agents can become useful helpers in many fields.
For example, I can see AI behind sophisticated control systems that are unimaginable today. I’m not an expert on horseback riding, but I’m sure it’s different from driving a car: the “vehicle” has a mind of its own, which may be a problem at times but can take a load of worries off the rider at others. Also, while in theory any rider can ride any horse, it helps if they are used to each other.
As an example of that, I can imagine an AI co-pilot that learns the favorite moves of a fighter pilot and, over time, as the two learn to work together, takes over some of the more mundane flight details (or even corrects mistakes or smooths out some moves) without getting in the way or overriding something it shouldn’t.
@Tim87, what you describe is where a lot of AI research is heading. It’s not specifically called AI, though.
And Elon is (now) aligned with Kurzweil, whom I’ve been following for a long time and who has great insights as a long-term, forward-looking thinker.
Elon only recently started to realize and express these lines of thought. Glad he’s come over to this side though! I knew he would eventually.
The good thing is that we humans have a head start on AI, and ample time to merge with it ahead of time.
I’m not sure I understand what you mean by “not specifically called AI.” Unless we want to restrict the “I” in it to human-like intelligence, but I don’t think that fits the accepted terminology.
Something “smart” like what I described would require the kind of high-level pattern recognition that was unimaginable before the recent (i.e., last-decade) resurgence of neural networks, and more specifically convolutional networks, which make a lot of things much more efficient.
I can’t see it happening without reinforcement learning either, and that became feasible only in the past 1–2 years, especially since A3C and the improvements on top of it appeared. In short, putting together something like what I described is a classic AI/ML problem.
Research is calling it assistive technology or similar names to avoid the term AI, because it really isn’t true AI with totally independent thought/actions.

[quote=“Tim87, post:30, topic:12538”]
I can’t see it happen without reinforcement learning either, and that became feasible only these past 1-2 years,
[/quote]

Umm, we were building reinforcement learning programs in the ’70s, and there were textbooks on the theory.
Sorry, I thought you meant the AI label was too specific, not too general.
Thanks, I’ll look into what I can find about it; I’ve been playing with these thoughts, but I never had to dig deeper for what I use AI for.
That’s why I used the word “feasible”: there was no way to do something like the Atari games or Doom, or to use a high-dimensional continuous action space, e.g., for simulating a robot in 3D. Much of the theory existed, but the technology lagged behind, and some tricks still had to be learned/discovered/invented.
Narrow AI is the terminology I’m familiar with for these application-specific implementations. General AI, on the other hand, is more advanced and can do multiple tasks, or any task.
@Tim87 I understand your logic, which is also why Will says we have lots of time to merge. We may well do.
My thought is that, yes, you’re likely right if things carry on at their current pace, but IMO that is not certain. I see things speeding up, and I have ideas about how they could suddenly go far faster than we expect.
What Google has done recently with Google Translate is an example. Even the most optimistic of their engineers were shocked at how one innovation created not the incremental improvement they were hoping for, but a leap none of them imagined was possible.
Where I think things have yet to even get going is in systems that learn by themselves, through evolution inside the software itself, without humans in the loop. We have had simple, limited attempts at this for decades (genetic algorithms, for example), but it looks to me like computing and other AI might be lining up to create something of a chain reaction in this area.
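To show just how “simple and limited” those attempts are, here’s a toy genetic algorithm (entirely my own illustrative sketch): it evolves a random bitstring toward all ones, the classic OneMax problem, using nothing but selection, crossover, and mutation.

```python
import random

# Toy genetic algorithm (my own illustrative sketch): evolve a bitstring
# toward all ones (the classic "OneMax" problem). Fitness = number of
# ones; selection keeps the top half, crossover and mutation fill the rest.
rng = random.Random(42)
LENGTH, POP, GENS, MUT_RATE = 20, 30, 60, 0.02

def fitness(bits):
    return sum(bits)                     # perfect score is LENGTH

def mutate(bits):
    # flip each bit independently with probability MUT_RATE
    return [b ^ (rng.random() < MUT_RATE) for b in bits]

def crossover(a, b):
    cut = rng.randrange(1, LENGTH)       # single-point crossover
    return a[:cut] + b[cut:]

pop = [[rng.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]             # truncation selection (elitist)
    children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))  # should be at or near LENGTH after 60 generations
```

The fitness function and the representation are hand-designed here, which is exactly the limitation: the chain reaction I mean would need the software to evolve its own goals and structure, not just a bitstring we score for it.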
Consider the acceleration that happened when life made the leap from genetic evolution to thought and cultural evolution: hundreds of millions of years to reach modern humans, then 100,000 years to reach what we have now. There was a sudden and dramatic acceleration from a single innovation, the modern human brain, which spawned innovation after innovation, each speeding up development further.
But the shift from one platform (genetic evolution) to another (mental and cultural evolution) was the big shift, and it’s another one of those that I’m talking about. If that happens it could be so fast we may not even notice what it spawns.
So I think the next leap could be even more dramatic. Whether it is imminent (me right) or not (you and Will right) remains to be seen!
I still can’t imagine why you want it to happen like that. It would be a disaster and extremely dangerous / almost guarantee extinction.
If we are creating something so powerful why would we not (by then) be incrementally plugging ourselves into it and enhancing our own brainpower with it, and having billions of people around the world plugged into it and evolving?
Computation, the internet, etc. are becoming increasingly accessible to all 7 billion of us (just look at smartphones), and this technological merge will only accelerate.
Such capable humans wouldn’t let such opportunities pass them by!
I think I posted this before, but I made a viral (2 million views) timeline of Kurzweil’s predictions (on whom Elon Musk now bases his AI businesses):
Humans don’t control evolution so it’s not a matter of what I want.
Modern humans were a new species and have wiped out many of their cousins relatively recently. There’s no reason to expect that from here on there will be a gradual and unbroken line from humans to humans2, then humans3.
At some point I will die, often hard for me to accept, but I can’t deny it and personally have chosen to accept it. Not everybody does though.
One reason cyborgs, or uploading ourselves to computers, are popular ideas is that we don’t want to die, and similarly we are against the idea of humans being superseded by something we create rather than evolve into in a more traditional way.
But wanting it won’t make it so. That’s not a reason not to try, either; I’m all for trying, but I’m not attached to the idea.
My father (a fan of science fiction in his youth) recently gave me a copy of “Colossus: The Forbin Project” as a gift, during my last visit back home to the U.S.
I have yet to watch it, but intend to soon—and it is the sole film on my computer desk at the moment. It’s kind of a neat coincidence to see you mention it here.
If you are interested in old tech, then you will like the movie, even if it’s just for the old tech.
A lot of the movie is consistent with the tech of the time, assuming we were to build a computer of that magnitude. But obviously not the programs; we cannot even do that yet.
Mind you, the Cray-1 would have had the same computing power as the movie’s “Colossus,” at a fraction of the size. Also, the fastest modems of the era were measured in kbit/s, so the communication between the two computers that we see in the movie could not have been achieved.
But the point of the movie was a warning. SkyNet if you will.
My take @whiteoutmashups
Just as we have world superpowers, we will have several large worldwide AI systems: the US and its Western allies will have big ones, China will have its own, India will have its own, etc.
Personally, I think we already have AI; it’s already here, just not from this time, and maybe not in a form we would recognise. I am almost certain of it.
I also think some of us will merge with it and others will stay completely natural human beings.
There is no such thing as closed-source AI; all the big companies working in this area are aware that they have to open-source their AI, even if they also have closed-source projects (I work with a company that works on AI).
AI is closer than any of us can imagine, and once it gestates it will change everything we know very quickly, on a timescale of less than a decade. It will change so much of our day-to-day lives that some people will not be able to cope. All the big tech companies (both hardware and software) are throwing money and resources at this, as are governments and other bodies, because this is the new battleground for supremacy. And there are partnerships forming behind the scenes that many people do not know about, which are acting to leapfrog competitors in the field and fast-forward development.
If only SAFE had an angle into this…