AI Sharing on SAFE

Literally just woke up from a nightmare :stuck_out_tongue: I was walking around on a tour of Zuckerberg’s Palo Alto HQ and he was showing people how close he was to achieving true AI. He was so far ahead of everyone else, and about to use it to take over everyone’s minds and eventually kill them off and replace them with something else (I couldn’t understand him at times lol, even in my own dream :stuck_out_tongue: )

Goes to show this is something I worry about frequently: what happens if someone gets that far ahead of everyone else in AI?

Would any devs with more experience give their input? Possible solutions in my mind:

  • open source AI might be bigger than closed source AI, due to the number of eyes on it? (ugh, but corps can use it…)
  • SAFE can help and incentivize sharing of data and AI code (PtP)
  • SAFE will create a more egalitarian world, where more people have internet and the economy stabilizes and evens out much more, so the wealth / power gap isn’t as dangerous as now and more people are enabled to work on open source, democratised AI

IDK, it just really keeps me up at night that all it would take is one billionaire getting his employees to achieve true AI, and then it’s GAME OVER for our species (not to mention freedom).

Thoughts? @dirvine I’m sure a line or two from you could put my mind at ease :stuck_out_tongue: always does

#EDIT: To clarify, I am of the school of thought that us humans will merge with AI to become one and the same thing, instead of us humans “passing the torch” etc.


Not that it will be a comfort to you, but I would recommend Sam Harris’s TED talk on AI. His view is that AI is a winner take all game, and I find it hard to disagree with him.
"This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk."


Hey Will, I’m not going to be comforting here, except to say that I think the stage where a corporation can remain in this position will be short-lived, depending on what you mean by AI here.

My reasoning is that it is untenable IMO that humans will be able to control a true AI once this is achieved. Think about it!

So once it happens it won’t just be out of our hands, but Zuck’s too I expect.

More interesting then is to speculate about what AI might become and how it will view not just humans (as we do bacteria, ants, apes, fuel, materials?) but everything. Will we build a toddler (remember HAL?) or will it be like Buddha? Have you seen the movie “Her”? - check out Alan Watts on YouTube too.

Don’t have nightmares about AI though, there are plenty of other things to have nightmares about :wink: (sorry)


There are two different types of AI: the evil type and the good type. The evil type is something that the Google empire and the elites are building.

Basically, there is a hive mind emerging from the internet. By giving an AI control over the social graph that determines which tweets/stories/posts are seen by whom, the hive mind (and consequently individuals) can be manipulated. The elites are trying to build a system that will let them manipulate it before individuals become aware of it and learn to manipulate it for themselves, and they are going to use it to prevent individuals from ever gaining that ability. This means the AI will be able to control your destiny from the day you are born to the day you die.

The good one…
What would it look like if individuals could manipulate it themselves? There’d be a currency backed by ideas and the transmission of ideas, and we could basically dream things into existence. It sounds magical, but it’s just regular economics, probably running on safenet.

Safenet AI will benefit humanity because the Safenet ecosystem prevents AI from manipulating people’s minds and reality. It can only provide what an individual requests.


See my edit below and in the OP. I don’t agree here, because I think we humans will (continue to) use technology (AI, etc) to expand our own minds, brains, and bodies exponentially, to stay in control / relevant:

Much work is already being done in this area


That’s a possibility, Will, I agree. Which happens will, I think, depend on which is more effective in evolutionary terms, in creating a head start that can’t be closed by the other. Personally I would bet on algorithmic mass computation rather than augmented human intelligence, though.

Humans will, I think, be involved in this more in training and refining (teaching) AI (think Amazon Alexa and Google Translate - we are training them without even knowing). I can’t imagine us ending up with individual enhanced humans at the centre, elevated and enhanced by technology. That seems to me like anthropocentric wishful thinking, because I would bet on self-teaching AI breaking free of us (eg Alexa leaps from data centres to something autonomous and not reliant on our cooperation or consent).

Once AI gets ahead of us, it will be able to get us (or enough of us) to do what it needs in the same way that @anon81773980 suggests an elite would try using an AI to control people. They may well try, but I seriously doubt they will be able to remain in control because a true self learning AI would be able to control them too.

It seems hard for humans to imagine being superseded by something we created.


Why could those not become the same thing?

If we’re using AI in our own brains, how would anything break free? We would become ever more intelligent. Would we break free of ourselves? Why do people always cast humans as this static, unchangeable thing, and paint a picture of US vs THEM?

Maybe people just have a hard time imagining becoming 100 trillion times more intelligent than they are today. I admit I can’t really imagine being that smart, but I definitely know it’s possible.


It’s not about “intelligence”

It’s about evolutionary consciousness. We’re at a breaking stage where our consciousness is too finite for this meat space, and we want to go even higher.


As I said, what you describe could happen, but for that to be so it needs to win out in the evolutionary selection process against alternatives.

One thing will win and dominate - humans have done just that. We went from just another predator to a completely new level of ability that has enabled us to dominate all the other species we’re aware of.

You are saying you think that humans with augmented intelligence will outperform a purely compute based AI. I think it’s more likely the other way around, but who knows.

I’m not saying that there won’t be augmented humans though. There may be time for that to happen and for these superhumans to dominate humans. But then I think they too will be superseded by a compute-based self-learning AI that can self-design and self-evolve into something even superior.

Why would the superior intelligent being be one built by augmenting a human “chassis”, rather than one built from custom-designed materials and processes by a superior AI?

Again, from the talk that I referenced:
"Another reason we’re told not to worry is that these machines can’t help but share our values because they will be literally extensions of ourselves. They’ll be grafted onto our brains, and we’ll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one’s safety concerns about a technology have to be pretty much worked out before you stick it inside your head.
The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don’t destroy it in the next moment, then it seems likely that whatever is easier to do will get done first."

Again again, why can these not become exactly the same thing?

Why does the human ‘chassis’, or any part of the human for that matter, need to be static / unchanging / a hindrance in any way? Every part can and will be changed. Think future humans / whatever we evolve into will look like us at all? lol. We could become 99.9999999999…% or even 100% computation ultimately.

Whatever you’re talking about that is purely computation based, humans would merge with or tap into to stay #1 in the universe :slight_smile: the incentives to do so are absolute

I think we’re now largely in agreement, about the destination at least. Whether there is a stage of human augmented AI or not is moot (but I’d agree with @drehb’s quote above, which makes me think it’s less likely than more).

I don’t know how many millions of years it took to go from the first primates to modern humans, but once our brain reached the threshold to move from genetic “learning” (ie evolution by natural, genetic processes) to cultural learning/evolution, the subsequent leaps have happened so fast there has been hardly any genetic component to them.

In about 100,000 years we have jumped, in ever-reducing timescales, from one technology (human-augmented intelligence level, if you like) to the next: stone tools, fire, language, metals, writing, printed books, science, electricity, radio, rockets, computers, atomic bombs - each revolution quicker than the last.

So it’s logical to think human augmentation will continue. But why, when intelligence reaches the stage where it can design the host itself? Humans are just beginning to be able to do that (eg creating new life by designing a genome), but I think AI will zip past that stage very quickly and have little use for the human body in the process.

In my life I’ve watched these kinds of innovations compress in time from taking decades, to about a decade, to now faster than that and still accelerating. Remove humans and our slow learning processes from the loop and bang!

It’s just my opinion, but I think that things will evolve so fast that if we blink we’ll miss any super intelligence stage that is recognisably human or based on the human genome.

I find it hard to believe that this body of mine is, by chance, a component in the best solution to the next big leap in intelligence / evolution.


Or change it! If we’re wired to AI by direct brain-links, there is nothing to slow anything down!

Of course it isn’t, not in its current form.

But directly interface yourself with amazingly powerful machines in the future and remember to get back to me on your stance once that is available :slight_smile:

I’m certainly not going to let an intelligence explosion just pass me by! And I’m certain there are millions with me on that.


Yes there is, the speed at which our brain operates. :slight_smile:

What is the advantage to this super-intelligent-being of:

  • a human body
  • a human brain
  • a human personality
  • etc

Why would any of these and any other human facets be preserved for evolutionary advantage?

If you want to keep them you need to justify them :slight_smile:

Again, NOTHING IS STATIC / unchangeable. Especially not that.

This, along with anything else can be exponentially improved.

Our brain can tap into limitless computation outside of it to improve how fast it operates.

Add in nano-tech cell replacements and the merge happens quicker


Absolutely nothing.

That is why we will have to change all of those things in the process.

Are our personalities, brains or bodies the same as our ape / mammal ancestors? No? Then why will they be the same once humans think with AI?

Reminds me of this prediction


So, as I thought, we don’t disagree about the endpoint, just the route. At least I’m taking your last post, Will, as agreeing that very quickly there will be nothing human about this AI.

It will still be us though.

No doubt it will change and evolve an unfathomable amount, but it has always been and will always be us, at every step of the way.

…unless some billionaire reaches it before the masses, like in my nightmare, in which case he will just kill everybody.