So we just had a fun little chat in the Matrix room and came to the conclusion that we should probably contact the guys at Boston Dynamics and get them to integrate Atlas to run fully on the SAFE network from the start.
For a bit more background:
This is the Atlas robot from Boston Dynamics (the most recent video)
For previous videos, just search “atlas robot” on YouTube.
A lot of my spamming ideas in the chat revolved around how this robot, which we can all see, will probably have better agility than actual humans within the next ~3 years. Now stuff a bunch of different A.I. in its head for doing 90% of the world’s jobs that humans currently do and… well, someone is going to need to move their ass faster on getting Universal Basic Income up and running globally, for when we all have one of these robots in our house and are mostly jobless.
However… do I want to live the movie ‘I, Robot’? Will Smith was cool and all in that movie, but that was a big-ass centralized computer that anyone could use to take control of all the robots and enslave humanity. So, well, I should stop here, but I’d love to hear what @dirvine thinks about getting these robots running off the SAFE network as early as possible.
I’m interested to hear people’s takes on this because these robots have come on incredibly, and that will continue, although I think there’s a lot more to human labour than agility.
One of the characteristics of robots that intrigues me is their ability to learn, and to do that faster by learning in parallel (sharing their ‘experiments’) and then downloading learned skills to many robot bodies, Matrix style.
I’m curious as to what those and other differences might mean as much as straight substitution for existing human roles. And of course what new roles humans will find are created - think avatar military robots, and of course Iron Man. When is Elon Musk going to show us his Iron Man suit!
He needs to finish the mini handheld arc/fusion reactor first.
Machines have been taking manual labour jobs for literally centuries; I doubt it will ever be a big deal. The big disruption will come when the jobs of doctors, solicitors, accountants and other highly skilled, well-paid professionals who spent longer training start going to AIs.
But if that happens UBI won’t save us. People need a purpose. With mass unemployment we won’t all dedicate our lives to the arts and philosophy - we’ll have huge numbers of people joining extremist groups of all kinds which offer them a powerful sense of purpose. Drink and drug addiction will go through the roof and with it crime of many kinds. Providing a basic living through something like universal basic income will be the least of our worries I think.
You’re right that a centralized computer controlling an army of robots is scary, but is an army of robots nobody can control any less scary? I’d be just as worried about some bug, or some novel context the original programmers didn’t foresee, making them behave in unintended ways as I would be about someone controlling them. After all, we’ve been enslaving each other since before the beginning of human history, and we’ve never needed robots to do it…
I believe AI is a tool.
Farming, construction, manufacturing, etc. are much more efficient with mechanical machines, but there is a human behind them.
AI will help doctors treat more people, and with better treatments. I expect a human to sit behind the AI, watching over its results.
AI/Machine Learning technology is to data activities what mechanical machine technology is to manual labour.
Blindly trusting AI without trained people would not be progress.
But to play devil’s advocate a little: a farmer can see exactly what his combine harvester is doing and knows whether it’s doing a good job or not. He is also very much in control of the machine. It is still the farmer who farms; the machine is unable to do the job alone but is a tool for the farmer.
A doctor watching over an AI will have a much harder time understanding why it came to a certain conclusion. The AI would also be capable of performing the job alone if people wanted it to. A combine harvester doesn’t know when to go out into the field, and can’t do so of its own volition. People can easily go to see an AI doctor which would do the whole job from start to finish. The doctor will perhaps become little more than a human interface, which itself becomes redundant as people begin to trust the machine more than the human.
There is a qualitative difference between a physical machine which implements human decisions, and an AI which makes decisions for humans.
I guess we will have fun finding out. I’m expecting machines in this case to come to conclusions and point out verifiable findings in medicine. Research takes ages… a machine could help us reach cures faster.
I think we have the overall control when it comes to AI. Farmers are already training AI by showing it good and bad produce, the AI learns and then helps pick out the bad.
I think we either define the training, or we have control over the interpretation of the results.
To build an AI that drives a car, we choose examples of good driving to train it. We wouldn’t use that trained brain to cook a meal.
We have a lot of say and input in what these algorithms do.
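To make the “show it good and bad examples” idea concrete, here’s a toy sketch of the simplest version of it: a nearest-centroid classifier for produce. All the feature names and numbers below are made up for illustration (real systems use far richer features and models), but it shows the point being made above: humans define the training data and the labels, and the machine just generalizes from them.

```python
# Toy "good vs bad produce" classifier: average the labelled
# examples into two centroids, then label a new item by whichever
# centroid it sits closer to. Features and values are hypothetical.

def centroid(samples):
    """Average each feature across a list of feature vectors."""
    n = len(samples)
    return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]

def classify(item, good_c, bad_c):
    """Label an item by its closer centroid (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return "good" if dist2(item, good_c) <= dist2(item, bad_c) else "bad"

# Hypothetical training data chosen by the farmer:
# each vector is [weight_in_grams, colour_score].
good = [[150, 0.90], [160, 0.80], [155, 0.85]]
bad = [[90, 0.30], [100, 0.40], [95, 0.35]]

g, b = centroid(good), centroid(bad)
print(classify([152, 0.88], g, b))  # an item resembling the good examples
print(classify([98, 0.32], g, b))   # an item resembling the bad examples
```

The point of the sketch: the machine never decides what “good produce” means. We picked the examples and the labels, so we keep control of both the training and the interpretation of the results.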
Hollywood AI… we are a long way away from that, I think.