Watch this video! :)

Just to be clear, an AI is a much more limited system compared to an AGI.

An AGI would be much more like ourselves in that it would wonder about itself (a degree of consciousness) and the universe, and would have a sense of survival, so it would have to balance out its desires and not allow a stamp-collector apocalypse to occur because of its stamp-collecting drive.

In the same way, a balanced human doesn’t destroy everything around them - we have to balance our drives (for survival). E.g. I can’t eat all my potatoes; I have to save some for planting next year (assuming a simple lone farmer and not a community with trade).

An AI is a much simpler system - even as they exist today, e.g. AlphaGo - with basically a single drive and no need to balance out multiple drives. Yet because they are so simple, they can’t really counter what would be an obvious ploy to defeat them: I can easily beat AlphaGo by turning it off or feeding it erroneous information.

So I can see an AI being a threat within its specific field of strength, but even so, once you discover its weakness, it’s stoppable - and it must always have a weakness if we define it as an AI with a limited field of ability and drive. Its inherently limited field is the weakness. That being the case, I suppose they can be quite dangerous, but not a threat to our species as a whole.

An AGI, on the other hand, would be concerned with balancing itself to keep all of its drives intact, or even with creating new drives … this predisposes it to seek trade relations if possible and to avoid hostility - as is the case for humans.

Just like humans though - if we have no option to trade for the things we need to satisfy our drives and those things are not available, then violence (or destruction) could enter the equation - but that always carries a lot of risk to the self, so it’s generally a last resort sort of thing.

That’s how I see it anyway.

2 Likes

My guess is that our understanding of intelligence is too crude and inaccurate for us to be able to think much about it yet. I give myself as an example, not of intelligence :stuck_out_tongue_winking_eye: but of lack of understanding something complex. I can have models such as “drives”, or feelings, thoughts, self, other, etc., but the level of understanding I have of my own functioning, or of others’ through comparison and observation, is poor to say the least - despite a lifetime of being (and to some extent observing) this organism, and having spent quite a lot of time and energy trying to understand it.

I’ve delved into this in various ways and thought about it a lot, and my conclusion is that I know shit about me really!

I suppose I’m questioning the value of this way of thinking about me, or intelligence (eg drives, survival etc). Yet if that’s what we have, that’s what we have :smiley:

Edit: After slight reflection, I think our way of seeing self, other, and the web of life (including, within that, the nature of intelligence) is what’s “off”, and that to address questions like what intelligence is, how we might create it, and what that would be like, we first need a better way of looking at ourselves in the context of the intelligence that we are - all of us and all of life. I think seeing things as individual intelligences may be limiting, when it’s looking to me like I’m not really an individual except in my head and in the heads of everyone else. All of us in a collective delusion.

Getting a bit philosophical there, but I think that’s necessary when walking into this area and wanting to make sense of it.

3 Likes

General or open-ended AI looks at this as a search over an ever-growing space. The goal is left undefined, and that sounds really wrong at first. A great book on this is “Why Greatness Cannot Be Planned: The Myth of the Objective” by Ken Stanley (see https://www.youtube.com/watch?v=dXQPL9GooyI ), and on moving to open-ended thinking: https://www.youtube.com/watch?v=PWCCbP2o1-s

Stepping stones! So anyway, this is one view: allow a “thing” to find “stuff” and see where it goes.
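For the curious, the algorithmic version of this idea is Stanley’s novelty search: instead of scoring candidates against a goal, you reward them for behaving unlike anything seen before, and archive each discovery as a stepping stone. Here’s a minimal toy sketch of that loop - the 1-D “behaviour” space, mutation step, and archive policy are all illustrative assumptions, not Stanley’s actual experiment:

```python
# Toy novelty search (after Lehman & Stanley): no objective is ever set;
# individuals are selected purely for being different from the archive.
import random

def behaviour(genome):
    # Toy behaviour descriptor: the genome's value itself.
    return genome

def novelty(b, archive, k=3):
    # Novelty = mean distance to the k nearest archived behaviours.
    if not archive:
        return float("inf")
    dists = sorted(abs(b - a) for a in archive)
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

def novelty_search(generations=50, pop_size=10, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(-1, 1) for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        scored = [(novelty(behaviour(g), archive), g) for g in population]
        scored.sort(reverse=True)
        # Archive the most novel individual: a new "stepping stone".
        archive.append(behaviour(scored[0][1]))
        # Breed the next generation from the most novel half.
        parents = [g for _, g in scored[: pop_size // 2]]
        population = [p + rng.gauss(0, 0.3) for p in parents for _ in range(2)]
    return archive

archive = novelty_search()
print(len(archive))  # → 50, one stepping stone per generation
```

Note there is no fitness function anywhere: the search simply keeps moving away from where it has already been, which is why it can stumble onto things a goal-driven search would never route through.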

Not so much intelligence, as that is a weird thing IMO. I.e. are fungi intelligent? We all come from it and go back to it, it adapts, etc. Or are we intelligent? Humans, who seem to have big issues adapting and bigger issues sharing or collaborating, claim intelligence, but sometimes it feels to me we think less deeply as tech does it for us. So where does all this go?

Perhaps if we all went on a magic mushroom trip once a month/year or so we may realise who we are and how we fit. I have never tried them but I often wonder if we need to step out for a bit every so often and just look/listen to the planet around us.

As for intelligence as humans use it, to me it’s a made up thing we say differentiates us from everything else. To me intelligence would completely integrate us with everything else. So I feel the label we give ourselves of intelligent is almost the opposite of the actions and thoughts we have.

11 Likes

Or an Ayahuasca trip!

2 Likes

Think I’d choose giggly and fun over sickness and going 10 rounds with my demons.

2 Likes

Huxley, someone I’ve learned a lot from, certainly believed this and suggested it was a way to explore the “antipodes” (far away, hidden aspects) of the mind. He wrote about this in The Doors of Perception (the name nicked by Mr Morrison and his band). Huxley’s second wife Laura shared more personal details of their joint explorations of this after he died (including a long and detailed account of how she assisted him in that process using mescalin). So David, I’m guessing you are aware of Huxley’s views on this!

I’ve never tried that route myself, but there are other ways to explore this (see below).

I think we’re along the same lines here, but I’m not averse to the open-ended approach. I think it’s closer to what I imagine evolution to be - the goal becomes apparent as an effect rather than being externally applied.

If you begin with a goal (which is natural for humans as we are largely problem solving machines), you limit the directions to be explored and the things which can be achieved. It’s a “God” or ego inspired view, where a deity or wise controller is needed to set evolution on the ‘right’ track, which doesn’t seem necessary or right.

If a theory of evolution or intelligence needs a goal driven style approach I tend to suspect anthropomorphic thinking. Nothing wrong with that in some respects, but it weights the search in favour of anthropomorphic solutions. Not wrong, but limiting.

I think we can learn from both open and directed approaches, but need to be aware of where we are biased and have preconceived ideas about goals, qualities etc., in the area of intelligence. The purpose of psychotropic investigations is IMO mainly about removing these.

I’ve found meditating on a problem can have a similar ‘revealing’ effect, and I believe the useful technique of taking a break from thinking about something is again freeing up things that are getting in the way of a solution being recognised (or likely coming into being in our consciousness). David, I recall you describing a walk on the beach which seemed to help you come to a sudden realisation, of how something complex (node personalities I think) could be dramatically simplified. Those are lovely experiences, and I believe we can learn to cultivate that ability, and that too is an important aspect of human intelligence.

5 Likes

I believe you have commented about AI a few times on this board, David. Could you share with us some of your general thoughts on AI as it might apply to the Safe Network sometime in the future?

2 Likes

The first Ken Stanley video is excellent, thanks for posting.

In summary:

  • the way to find something is not to look for it
  • the best way to achieve your objective is not to have an objective

Sounds very Zen, but the algorithmic experiment he does (from ~15:00) to test it out is fascinating. As a habitual potterer I find it rather comforting.

Shame the picbreeder app doesn’t work any more. Would have been good to give it a try.

4 Likes

Harari says here that what differentiates us from other sentient beings is our capacity for fiction. I think he is right:

2 Likes

At least some monkeys can lie, i.e. they understand “fiction”.

2 Likes

Cool! Thanks for sharing. It seems that almost all the distinctions between us and other animals are much less clear-cut than many of us would like to think.

About the Harari talk: he says that the most successful fiction in human history is money, because it is the only one that everyone believes in; it transcends other belief systems. As Harari said, Osama bin Laden hated everything American, except dollars. This is my basic reason to believe that we are eventually heading towards a world currency that is not issued by any single government. There is going to be a form of money that is an autonomous “being”, that doesn’t obey anyone. In a way I see money as a force of its own, and it has a will to be free from human control.

2 Likes

Also, baboons use pressure to stress out possible challengers to their power by stressing the subordinates. This seems to be what communists and capitalists (probably all -ists) do. It is all interesting on many levels; we are very much eyes-wide-shut as far as much of this goes, I think.

4 Likes

Humans are entirely dependent on a warm atmosphere, minerals, plant and animal life for survival. Plant life is entirely dependent on minerals and light. Every living thing exists within an ecosystem and has inputs and outputs. AI is no different, at some point it’s not “artificial” intelligence, just an intelligence. First you need an environment capable of supporting it. It just so happens that the required environment for AI and AGI is a technological one offering minerals, electricity, and human economies/societies as the required inputs. It’s a hierarchy of hierarchies, with each part emerging from and depending on all that came before it. The more successful entities tend to form symbioses and promote/support systems that came before out of self interest. I’m not worried about the emergence of AGI as long as it is actually general and actually intelligent.

Example:

  1. Cosmic Life (Light, Protons, Neutrons, Elements)
  2. Mineral Life (Molecules, Monocrystals, Polycrystals, Polymers)
  3. Plant Life (Algae, Grass, Trees)
  4. Animal Life (Bacteria, Fish, Birds, Snakes, Cows, Humans, Dolphins)
  5. Socio-Economic Life (Families, Tribes, Organizations)
  6. Cybernetic Life ( Corporations, Governments, Artificial General Intelligence )
4 Likes

The Prosecution of Julian Assange Is an Assault on the First Amendment

2 Likes

Far beyond an assault on the USA’s “first amendment” … How anyone can believe in statism after witnessing what has been done to Julian by power-wielders from all sides of the political spectrum just blows my mind.

He exposed statism as being a hollowed out system of corruption and in return that system has chained him down to hell for the rest of his life as an example for any others who might also consider doing the same.

There is no integrity in the State - never was. It’s always just been an illusion for the masses.

Not that it will make any difference IMO, but if you would like to attempt to lotto-ticket Assange a pardon …

2 Likes

New Snowden video: “Snowden: Traitor or Hero?”. Very happy that Snowden escaped to Russia - at least he can live his life.

haha - loved this comment below the video: “‘NSA’ The only part of government that actually listens to you.”

2 Likes

https://invidious.site/watch?v=mhpslN-OD_o

15 min video on, as the title suggests, the history of programming languages. Quite fun little round-up for those who feel overwhelmed by all the various Cs and Sharps and Lisps and all that. He doesn’t mention Rust though, leading to a hilarious comment which I will leave as a (christmas) easter egg.

2 Likes

The Glitterbomb :rofl: :rofl: :joy:

3 Likes

Back in the day, Soviet spies did exploit this principle to eavesdrop.