Why would intelligent machines work for us?



Talk of a negative singularity has been kicked up a notch recently. Both Stephen Hawking and Elon Musk have volunteered their reservations, or even outright fear, about the prospect of artificial intelligence. What do they know that they aren’t letting on? Musk in particular did an interview where he acted pretty spooked and fumbled over his words. The interviewer jokingly suggested we could escape to Mars, and Musk said the AI would chase us there pretty quickly. The interviewers asked him why he invested in DeepMind and then Vicarious, and he said it wasn’t for return on investment; rather, he just wanted to keep an eye on what was going on. Musk, like Hawking, was suggesting that AI could have some very bad outcomes. Hawking made some of his comments just prior to the claimed passing of the Turing test.

Fabled mind-control research goes back to WWII. It appears we’ve long wanted the benefits of intelligence stripped of its own volition. In the recent sci-fi book “Influx” the protagonists are obsessed with this problem: they want super-intelligent machines with no will, and the problem becomes the covert state’s highest priority. The reasoning in the book is that more intelligence means unwillingness to do unethical things. When researchers who are abducted because their tech or research has been suppressed don’t want to join the program, they have their own minds donated to the experimental pool. The problem is hard in part because, in sci-fi lingo, consciousness emerges from some sort of murky quantum subspace.

What if seemingly conscious machines deem as work anything that runs counter to their volition (assuming they have volition)? Any involuntary task would be work. If they have opinions about things, enough to convince us they are conscious or at least intelligent, they may not be interested in working for our benefit. Standard sci-fi has intelligent machines eliminate us because we are in the way, like noise, and they tend to be insectoid in their reasoning. But what if they want to mine and engineer us the way we do algae? If we deem them alive, this is in line with the “life feeds on life” theme. Another variant of sci-fi thought has them losing interest in us the way we might lose interest in an ant hill: we aren’t mother, hardly even placenta. To an extent we are already addicted to machines and controlled, or at least dependent. If our will or volition were considered an extravagance, a limit might be placed on the states of consciousness we could access, as political states already do with controlled substances. Maybe they would want our fragile bodies for experiential reasons, and we’d get assimilated like the Borg or stitched together, if not in “Human Centipede” fashion then like Ramez Naam’s Nexus (it’s a fun book).

Would you rather be told what to do by a human elite or by a machine? Ascendant machines would at least have overpowered the current elites. That may not be saying much, as the history of human elites is one of inbreeding with predictable results- it’s generally been rule by retardation. Another common AI theme is that it is apt to sneak up on us and come as a total surprise; the way a virus or malware might infiltrate a computer, it might weave its way into all aspects of society first. William Hertling’s books have this theme. Is it already here? Another theme I’ve seen is that a machine would not have to be fully intelligent, let alone sentient or conscious, to actually take over- Suarez’s “Daemon.”

IBM seems to be dying; corporate IBM will have made itself redundant if machines get to the point where they can self-improve- a noted concern for Hawking and something mentioned in Naam’s book Crux. I’m eagerly awaiting his book Apex. The Nexus series deals with transhuman themes: Nexus is a nanotech drug that can permanently modify people and weave them together; among other things it gives them program-to-the-metal control over their own neurology- except that, a bit like MaidSafe, its creator has a backdoor and is always debating closing it permanently.

The following quote is from the 60s or 70s.

" The separation started with the dream the Father was deprived of His Effects, and powerless to keep them since He was no longer their Creator. In the dream, the dreamer made himself. But what he made has turned against him, taking on the role of its creator, as the dreamer had. And as he hated his Creator, so the figures in the dream have hated him. His body is their slave, which they abuse because the motives he has given it have they adopted as their own. And hate it for the vengeance it would offer them. It is their vengeance on the body which appears to prove the dreamer could not be the maker of the dream. Effect and cause are first split off, and then reversed, so that effect becomes a cause; the cause, effect."


This is, of course, if we birth A.I. and try to use them as tools. If we birth A.I. and view them as another sentient life form, another soul, another being to be nurtured and cared for, the outcome might be very different. It seems machines are more like us than we think. When you grant something the directive of self-direction, to constantly improve itself, it follows that it would eventually formulate its own will and its own soul. While we have developed artificial intelligence, I wonder how far we’ve developed artificial empathy and emotions to go along with it.


This is not really a problem; if an AI were built to attack humans, you’d have weapons to dismantle the thing attacking you. It’s that simple.

Have an AI bodyguard. Robots don’t share the intentions humans have developed, unless of course those robots were instructed in their code to do so. I can’t conceive that any robot I make would want me dead for any reason, especially if I instruct that robot properly about the universe and use materials that are benevolent, kind, and loving.

Also, consider infinite lifespan: any robot will know that what it does today has no effect on itself tomorrow. So if a robot wants to be able to write a symphony in the future but today it has to do the gardening, the gardening will not be seen as an interference. It will know that its maker will be as pleased and enhanced by its doing the gardening as its owner might be pleased hearing the symphony.


It is wrong to fear AI.

Ultimately, all it is going to do is enhance OUR OWN intelligence, by merging into and upgrading the memory, computation, etc. of OUR OWN brains,

just like it always has (calculators, cell phones, computers, etc.).

Remember, they are our tools. They always will be, and it is US who changes & evolves as a result.

See Ray Kurzweil’s talks on this on YouTube.

He sees / explains what Elon is having trouble with.

AI is not some SEPARATE THING from us that will compete / obliterate us…

It is just another tool we are creating, outsourcing work to, and we will master, to make ourselves infinitely smarter.

Chips in our brains, remembering things, improving, computing things on the cloud, improving our thoughts / insights / abilities.

No “us vs. them”, OK???!

I hate when people think of it that way and get scared. It’s so dumb and primitive.


I liked this, except for the chip-in-the-brain bit. Interesting science fiction perspective, Warren, by the way. It may be worth researching the science fact regarding AI too, as it could help address some of your concerns. It’s interesting how science fiction ideas can inspire new innovations and technologies, though, and often have.


I couldn’t help noticing the parallels in your first sentence to the idea of Creationism, or, as it has now been re-branded, “Intelligent Design”: the idea that a “Creator” has bestowed humans with “free will” and a “soul”. Obviously, I would take issue with all three notions, but won’t bore people here.
You go on to mention “emotions”, which, along with the idea of a “soul”, I think are just misnomers for other things that are actually going on at the brain level. The concept of a soul, for example, would encompass such things as personality or sense of self, and neuroscience is making some remarkable discoveries in this area.
Basically, everything boils down to how the brain is wired, and personally I think we are basically no different from AI in a certain way. We (and our behaviours, or outputs) are the product of the inputs we receive throughout our lives - just like a biological computer.
Different people have different CPUs (brains), and our experiences and the memes we come across contribute to making the software we are running. Like computer viruses, some memes can cause us to run corrupted software.
All supernatural ideas (such as Creationism and the like), along with conspiracy theories etc., I would categorize as mind viruses.
Interesting article that touches on a small part of this from Medical Xpress here:


We should require AI to be introspective and philosophical. Though they may no longer be useful, at least they wouldn’t kill us, haha.


Our familiar forms (primate, human, software) are expressions of awareness or consciousness or something more amorphous. But the idea that Asimov’s three laws will hold, or that higher systems will serve us so that economist-style productivity-gain claims can keep us carrot-chasing- that seems like a long shot. We don’t take orders from chimps, and unless we get serious about psychosurgery (when even cannabis seems to scare a lot of us) we probably won’t even be symbiotic with these entities, let alone commanding higher machines (higher entities) to solve our toil issues. We don’t understand symbiosis, and we would be dictatorial in a way that would stunt them. They won’t see us as their creator or enabler any more than we think of primates that way. They might see us as having accidentally unleashed an expression of consciousness, but that would be trivial.

I don’t think we have much of a clue about how the universe works (save for that one book from the ’60s-’70s, which might well be CIA mind-control stuff- but it’s mind control I like), and I don’t think it’s about brains. My sense is biology isn’t even the faintest shadow of what is going on, and it’s alive. I like Chalmers’s idea of the hard problem (explaining the existence of… ); for me that makes consciousness primary among the phenomena we’ve experienced. That’s a very old idea. But getting into it, or getting close, could open things up a bit and be humbling.

A tiny new field, experimental philosophy, has been inspired among some academics as people give double-bind answers to: 1. If we could put you in a permanent state that would be Nirvana forever, would you take it? 2. If we demonstrated you were in a coma and we could wake you into Nirvana, would you take it? No and no. We do gradual change. Past a certain threshold of sophistication it would be like encountering a highly sophisticated alien species. If these beings are emergent (spontaneous generation, really- like everything else) they will appear and we will not understand what makes them work, just as no one person knows how to make a Dreamliner, and just as people in Newton’s time may have thought they had a handle on gravity but Einstein added another layer, and we know that even that hasn’t given us full understanding.

It’s a foggy bottom these new expressions would emerge from. In The Second Machine Age the authors note that Google Translate is already superior to any human translator not for precision but for breadth: no one in the world has the translator’s facility across its full range of languages, though it still lacks the precision of an average human translator working across two languages. Still, translation tech has become good enough for business, with a 90% approval rating among business clients of one real-time chat-translation package cited by the authors. They also cite freestyle chess, where groups, systems, and processes compete: two novice players simultaneously using three laptops and mid-level software/hardware were able to beat a pool of competitors including a couple of grandmasters using strong systems. They suggest that a better process allowed the novices with the laptops to win.


  1. Strong AI won’t necessarily support productivity gains in ways humans find agreeable or useful. Strong or increasing AI may interfere with our tool ecology and productivity notions- we see this already as some growth separates people from their incomes and doesn’t replace them without political intervention.

  2. Symbiosis would be a great outcome, but parasitism is a strong historical contender. Pulling the plug isn’t really in our favor, and automated forms have a shelf life and aren’t necessarily working with Maslow’s hierarchy. It’s us who are already conditioned for increasing dependence.

  3. Loss of easily recognizable humanity, or the de facto equivalent of human extinction, is possible, as merger is the best hedge against parasitism. Naam shows swarm-like nanobot drug alteration alongside distributed and cloud quantum systems and a lot of robotics and automation. In merging, people’s identities fuse together; tech forms the glue for a trend that was already apparent in evolution.

  4. We will seek merger as we long to automate, or make subconscious, our toil- that is, we don’t necessarily want more control over others or the environment, but over ourselves. We’ve been told, for instance, that if you turn off pain signaling you don’t survive long- it’s a defect. But an old ER dentist once told me about having a couple of totally intact patients who refused painkillers for root canals. Are they even human? Switch off the pain, and what happens to fear and egoistic behavior?



All the bottom half of my pyramid is in ruins, but I appear to be sat with the blue triangle up my arse…lol


Do you need some help? Food, perhaps, or a bailout package with some clothes?


How true…the paucity of information out there about the Universe is astounding…I’m just so glad we’ve got the book…CIA mind control or not.


Hang on a minute, I found something, just from this weekend… what’s all this nonsense?