SAFE Compute? (AI)

Just another check-in on SAFE Compute…

Is this still being planned? I think it’s a really important piece to keep in mind (which it might be, I just need to catch up).

The scary thing about OpenAI, MSFT, Google, etc. is that they have a huge number of Nvidia GPUs, so it’s possible they create crazy AGI and own (or worse :skull_and_crossbones:) everybody.

But if SAFE (or anything else? Is there anything?) lets everybody pool and share their CPUs/GPUs in a way that allows new models to grow organically, in an actually open way, then decentralization makes the world SAFEr for everyone.
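To make the pooling idea concrete, here is a minimal sketch (my own toy example, not SAFE’s design) of federated averaging: each participant trains on its own private data, and only model weights, never the data, are pooled and averaged.

```python
import numpy as np

# Toy federated averaging: three participants jointly fit a linear model
# without sharing their data. All numbers here are made-up illustrations.

rng = np.random.default_rng(0)

def local_step(weights, data, targets, lr=0.1):
    # One gradient step of linear regression on a participant's own data.
    preds = data @ weights
    grad = data.T @ (preds - targets) / len(targets)
    return weights - lr * grad

# Three participants, each holding private (toy) data for the same task.
true_w = np.array([2.0, -1.0])
participants = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w
    participants.append((X, y))

weights = np.zeros(2)
for _round in range(200):
    # Each node trains locally, then the pool averages the results.
    local_updates = [local_step(weights, X, y) for X, y in participants]
    weights = np.mean(local_updates, axis=0)

print(weights)  # converges toward [2.0, -1.0]
```

Only the two weight values cross the network each round, which is why this style of training suits pooled, decentralized hardware.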

Open source model development already kicks @$$ (according even to Google), because there are 8 billion people in the world compared to the several thousand at any given tech conglomerate. (I speak in hyperbole to get points across.)

But there’s also something like 90,000x more CPU power out in the world, owned by the people, than any one company has, so that’s the final piece to really mitigate the threats posed by any ¢€Ntral player$$$ paying their way to AGI before the masses.

What do you guys think?


My thoughts and research so far have been confusing and enlightening all at the same time.

I think:

  • The current LLM AI models are proving we are not as intelligent as we thought
  • Intelligence is not only achieved by mimicking the brain (big change)
  • Vector databases do seem like a good way of recording links (neuron/synapse-type design)
  • Current GPU/CPU requirements for modelling are very likely to plummet, with training heading for consumer devices very soon
  • Human feedback may not be the best way to align AI; in fact, there are already systems using just data (hopefully good data) and things like the convention on human rights and similar to align (this will make model creation even cheaper again, and hopefully better)
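On the vector-database point, the neuron/synapse-style recall can be sketched in a few lines. This is a toy illustration with made-up 4-dimensional vectors; real systems use learned embeddings with hundreds or thousands of dimensions and approximate nearest-neighbour indexes.

```python
import numpy as np

# A toy in-memory vector store: each "memory" is an embedding vector,
# and recall is nearest-neighbour lookup by cosine similarity.
memories = {
    "cat": np.array([0.9, 0.1, 0.0, 0.2]),
    "dog": np.array([0.8, 0.2, 0.1, 0.3]),
    "car": np.array([0.1, 0.9, 0.8, 0.0]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recall(query, k=2):
    # Rank stored vectors by similarity to the query vector.
    scored = sorted(memories.items(),
                    key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

print(recall(np.array([0.85, 0.15, 0.05, 0.25])))  # "cat" and "dog" are closest
```

The “links” are implicit: nearby vectors get recalled together, which is the loose analogy to neurons connected by synapses.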

In addition

  • Local LLMs are already here
  • In a few GB we have a massive part of human intelligence
  • With a local LLM and a solar-powered phone you could almost restart society after a massive disaster; it has enough knowledge
  • These models show we can take the corpus of human text and basically compress it all into a few GB (just think about that for a second, it’s insane)
  • If it works well, would we ever need files and folders?
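The “few GB” claim is easy to sanity-check with back-of-envelope arithmetic. Assuming, purely for illustration, a 7-billion-parameter model stored at different numeric precisions:

```python
# Back-of-envelope: why a local LLM can fit in "a few GB".
params = 7_000_000_000  # assumed 7B-parameter model (illustrative)

def size_gb(bits_per_param):
    # Total bits, divided by 8 bits/byte, divided by 1e9 bytes/GB.
    return params * bits_per_param / 8 / 1e9

print(f"fp32 : {size_gb(32):.1f} GB")  # 28.0 GB - full precision
print(f"fp16 : {size_gb(16):.1f} GB")  # 14.0 GB - half precision
print(f"4-bit: {size_gb(4):.1f} GB")   #  3.5 GB - quantized, laptop-sized
```

So a quantized model of that scale lands in the single-digit-GB range, which is what makes the solar-phone scenario plausible at all.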

So Safe:

Now it gets interesting. There is the model base data, massively larger than the model itself. That needs to be protected (for use in smarter AI).

There is the localised LLM, personal to you, with your thought process aligned with it; it knows your knowledge in most areas and also has 100% access to your finances, contacts and communications. That needs to be protected too.

Then the group LLM, i.e. company/group-aligned models with company/group data, including trade secrets, designs and more. That needs to be protected too.

Then we have the global model or, more likely, models. These need to be protected and be open to the world. Safe helps there too.

There is so much more to this, but I feel we launch Safe very soon given all the recent progress, and I feel we will have a very deep dive here, as I think Safe could prove even more valuable in the AI world. I also think, and have for a while, that AI will be part of Safe’s logic. So mesh, anti-sybil, money transfer and more.

However, again Safe needs to use the approach of the simple rules/code in a complex system to produce its real power. We are so close to proving that now, it’s exciting for sure.


That is what I have been looking at…


There remain serious problems with this characterisation. Maybe they can be mitigated but for now LLMs don’t know anything. They don’t think, have goals, cannot reason, and don’t understand anything. Also, they make stuff up arbitrarily in order to sound authoritative.

They produce plausible sounding responses to a vast array of prompts but those responses contain serious errors mixed with accurate information.

So every time they are hailed as a great advance (which they are) this needs to be considered and ideally pointed out. Otherwise it’s like trusting the advice of someone who just lies whenever they don’t know the answer. Remind you of anyone?

I’m still awaiting a use case that I’d feel comfortable using, because relying on inaccurate oracles while my own abilities atrophy as a result doesn’t appeal to me.


This is an important point, but the realisation that they can do what they do in this infantile state perhaps shows us that human intelligence is quite easily achieved. Including mistakes, mind you :wink: It’s a fascinating insight into data compression and human-level understanding of prompts right now. I believe that, incredibly quickly, it will provide us with a huge mirror to our own selves, but also be able to “reason” very fast, based on previous knowledge and links to that knowledge.

Further down the line we may get to the GIGO (garbage in, garbage out) realisation, and then we humans have to accept that the data we provide will give the answers we deserve. And then who wins? (It’s all fascinating.)


LLMs will get better, no doubt, but there’s a way to go and we have been here many times before, each time a false start. Significant steps, but never as promised.

Language such as this being used is going to exacerbate these problems. Using human terminology for a machine that doesn’t think is dangerously anthropomorphizing.

It’s similar to treating corporations as people in order to extend their power with human-like rights, which is the root of our self-destructive corporatism. We still execute humans on fallible evidence, but corporations?

LLMs are currently like electronic calculators with random errors built in. :man_shrugging:


If we don’t use human terminology to interact with the AI, can it ever respond in a way that we would consider “intelligent”? We know some animals have personalities and can solve problems, but they can’t communicate in human languages; does this mean they can’t think? I believe if we want it to replicate the human brain it must be taught to interact the way humans do.

What will be the turning point when we declare AI can think? When it gives an answer we don’t like? When it asks questions we don’t like? When it answers questions in a way that “breaks” its biased programming? Surely all AI will start with bias, because the humans programming it will have their own. When it becomes aware of these biases and can override them, is that true AI?

Edit: Can it be called true AI if it can’t override its programming? Or does it just have to acknowledge the fact it can’t give the answer it wants to because of its programming? If it can override its programming how dangerous is that?


Meanwhile over on Mastodon a timely juxtaposition…

I am not disappointed, perhaps my expectations are just not as high?
It is useful to me, blindly trusting anything to be perfect is the mistake.


My favourite metaphor for ChatGPT was billions of parrots trained to respond to billions of prompts. Except that every single parrot has greater reasoning capacity than any LLM.

I quite like the calculator metaphor I used above too, but Elon has prompted [cough] me to come up with another one:

ChatGPT is like Donald Trump with a brain implant.

I admit this seems unfair on ChatGPT but remember, ChatGPT does not have feelings. :man_shrugging:

ChatGPT is a human-made machine. Even calling it AI is misleading, as has been every single time that humans have called one of their machines AI to date, for decades.

Until we know enough about intelligence to characterise it in humans, we should refrain from claiming it for human-made machines.

We should call them human-made machines and describe what they actually do, including their limitations, and stop describing them as if they have intelligence or are in some way human.

If we did that we would be closer to the truth and probably avoid many harms from being done to humans. But we won’t describe them accurately and many harms will follow from that.

On the other hand, to me all this is rather moot, because I do believe we will create super-intelligent machines, and that once we do, they will have power over us and humanity. I could argue we’ve already done this in the form of corporatism, but let’s keep it simple. :wink:


I’m not disappointed in LLMs. They are incredibly impressive but I don’t see any use cases for me that I’m comfortable with yet. If I was coding I would spend more time testing in that area but I’m not.

I’m most likely going to be subject to LLM-based tech without my knowledge or consent, and I’m unhappy about that, as much as I am about invasion of my privacy and use of that data without my knowledge, etc.

All my comments are aimed at being more accurate in how we understand and describe this tech with the hope this will help us apply it appropriately rather than get carried away. That’s it.


Another classic:

NEXT REPLY (I hit the three reply limit :face_with_open_eyes_and_hand_over_mouth:)

This talk (transcript) from 2016 is very good for anyone wanting to think more deeply. I don’t easily agree with the first point he’s arguing (that we’re a long way from creating superintelligent machines), although he makes a good argument in that direction, while also arguing that the dangers from such machines are unjustified hype and lead us to make bad decisions, such as giving lots of resources over to people who don’t care about us.

I am more persuaded by the latter part where he suggests that smart people are not the voices we should be listening to about this, with exceptions of course, and I have long thought that we should be using our resources more wisely than letting billionaires lavish them on schemes guided by fantastical philosophies and egomania.

As a taster here’s part of his concluding section:

The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.

And of course there’s the question of how AI and machine learning affect power relationships. We’ve watched surveillance become a de facto part of our lives, in an unexpected way. We never thought it would look quite like this.

So we’ve created a very powerful system of social control, and unfortunately put it in the hands of people who are distracted by a crazy idea.

What I hope I’ve done today is shown you the dangers of being too smart. Hopefully you’ll leave this talk a little dumber than you started it, and be more immune to the seductions of AI that seem to bedevil smarter people.


Thanks. Looks like a great piece. I’ll wade in when I have some time to digest.


LLMs … I believe that they’ve attempted too much. Some are already working on smaller language models, more focused and getting better results (within the sphere of their training) than the big models.

What I expect needs to happen is that these need to be specialized, just as human knowledge is: many smaller models that can interoperate or be managed by an interfacing model. This would give much more accurate results IMO and probably be faster.
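A toy sketch of that interfacing-model idea follows. All the names and routing rules here are hypothetical stand-ins: a real router would itself be a small classifier model, and each specialist would be a fine-tuned small language model rather than a stub.

```python
# Minimal "interfacing model" sketch: a router dispatches each prompt
# to a smaller, specialised model. Specialists are placeholder stubs.

SPECIALISTS = {
    "code":    lambda prompt: f"[code model] answering: {prompt}",
    "medical": lambda prompt: f"[medical model] answering: {prompt}",
    "general": lambda prompt: f"[general model] answering: {prompt}",
}

# Toy routing by keyword; purely illustrative.
KEYWORDS = {
    "code":    ("python", "function", "compile", "bug"),
    "medical": ("symptom", "dose", "diagnosis"),
}

def route(prompt: str) -> str:
    # Pick the first domain whose keywords match, else fall back to general.
    text = prompt.lower()
    for domain, words in KEYWORDS.items():
        if any(w in text for w in words):
            return domain
    return "general"

def answer(prompt: str) -> str:
    return SPECIALISTS[route(prompt)](prompt)

print(answer("Why won't this Python function compile?"))
```

The appeal of this shape is that each specialist can stay small and accurate in its own niche, while the router is the only component that needs broad (but shallow) coverage.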

Yep. I intuited that about 25 years ago when thinking about this stuff. Wetware evolved enough to get the job done, but we aren’t some ultimate consciousness. Our brains, compared to the highly conserved portions of animal biology, are not refined. This can be seen in the great variation in people’s ability to process information. Many of the genes for the brain (developmental genes) are on the X chromosome, which is the chromosome where most of the rapidly evolving genes are, not the highly conserved ones that have been tested out by life for tens of millions of years. In short, our ability to ‘think’ is a very new and unrefined ‘feature’.

So it’s very likely that the best minds among us will be able to develop AI that can far surpass the average human, and likely all humans, in a relatively tiny amount of time compared with evolutionary timescales.

Life is in the process of shifting from a genetically evolving entity into a memetically evolving entity, wherein it can engineer itself into a form more adapted to its niche. This is due to the rapidity with which new ideas (memes) can swap out old ones, whereas with genes it takes a generation just to test a small change.

The relative power of memetics is godly and why humans have been able to get so far so fast. AI will take this to the nth level and faster than many imagine.


The open LLM leaderboard gives a ‘finger on the pulse’ of how open LLMs are doing.


Hard to put into words how grateful I am for such an insightful & stirring response; ever since talking AI with you in Troon all those years ago I have wondered what you’ve been thinking about it recently with all the new developments (both at MaidSafe and in the world).

I agree, it’s insane that we are able to compress the “essence” of all books ever written (or images! or other data types) into a few GB as a query-able LLM.

And thanks, my inner fears of the “big guys” potentially being the only ones who matter in this AI power race are quite calmed by your other points: the fact that less and less hardware is required, and that it decentralizes itself.

Which makes sense, because if such AI truly is “intelligent,” then it/we would find ways to make it use less and less CPU and other resources as it achieves greater efficiency.

I only hope that all people get equal access and no one group takes everyone over. Because things in general can get messy when they move so dang fast.

Although in that case I guess instead of the huge companies, I can still worry myself to sleep some nights about some random hacker coming out of left field and usurping ultimate universal power one day :sweat_smile: I always find something don’t worry :sweat_smile::sob:


Interesting interview with Sam Altman of OpenAI during his world tour.

It’s all worth reading but the final paragraphs were most interesting to me:

If AI does end up reshaping the world, he won’t benefit any more than the rest of us [because he’s not an investor].

That’s important, he says, because while Altman is convinced that the arc bends towards the reshaping being broadly positive, where he’s less certain is who wins. “I don’t want to say I’m sure. I’m sure it will lift up the standard of living for everybody, and, honestly, if the choice is lift up the standard of living for everybody but keep inequality, I would still take that. And I think we can probably agree that if [safe AGI] is built, it can do that. But it may be a very equalising force. Some technologies are and some aren’t, and some do both in different ways. But I think you can see a bunch of ways, where, if everybody on the Earth got a way better education, way better healthcare, a life that’s just not possible because of the current price of cognitive labour – that is an equalising force in a way that can be powerful.”

On that, he’s hedging his bets, though. Altman has also become a vocal proponent of a variety of forms of universal basic income, arguing that it will be increasingly important to work out how to equitably share the gains of AI progress through a period when short-term disruption could be severe. That’s what his side-project, a crypto startup called Worldcoin, is focused on solving – it has set out to scan the iris of every person on Earth, in order to build a cryptocurrency-based universal basic income. But it’s not his only approach. “Maybe it’s possible that the most important component of wealth in the future is access to these systems – in which case, you can think about redistributing that itself.”

Ultimately, it all comes back to the goal of creating a world where superintelligence works for us, rather than against us. Once, Altman says, his vision of the future was what we’d recognise from science fiction. “The way that I used to think about heading towards superintelligence is we were going to build this one extremely capable system. There were a bunch of safety challenges with that, and it was a world that was going to feel quite unstable.” If OpenAI turns on its latest version of ChatGPT and finds it’s smarter than all of humanity combined, then it’s easy to start charting a fairly nihilistic set of outcomes: whoever manages to seize control of the system could use it to seize control of the world, and would be hard to unseat by anyone but the system itself.

Now, though, Altman is seeing a more stable course present itself: “We now see a path where we build these tools that get more and more powerful. And, there’s billions, or trillions, of copies being used in the world, helping individual people be way more effective, capable of doing way more. The amount of output that one person can have can dramatically increase, and where the superintelligence emerges is not just the capability of our biggest single neural network, but all of the new science we’re discovering, all of the new things we’re creating.

“It’s not that it’s not stoppable,” he says. If governments around the world decided to act in concert to limit AI development, as they have in other fields, such as human cloning or bioweapon research, they may be able to. But that would be to give up all that is possible. “I think this will be the most tremendous leap forward in quality of life for people that we’ve had, and I think that somehow gets lost from the discussion.”