Universey stuff

Continuing the discussion from SAFE vs the S.I. Arms Race:


It’s already solved, isn’t it? I mean, even if just philosophically, granted involving multiverse theories etc…but with scientific/logical reasons to support their various hypotheses.

OK…so it neither exists nor doesn’t exist - is that what you mean? So questions about how it came about are irrelevant - is that it? It sounds like you are describing properties of an all-pervading pre-Universe “field” - similar in ways to the Higgs. This is something I might call a field of “potential” to exist or not, yes or no - before anything can exist, I think it must have the potential to exist first (relates to virtual particles). This is just part of my own “pet theory” - I wouldn’t claim it to be true or make a book/video about it though - just keep checking the science of what we do know to address any issues etc.
What reason do you have to think root Universes are an answer, and what are the processes at work? I’ll check out the vid
Lol…just noticed this:

“In fact SI (super intelligent AI) in the form of post-singularity technology produced our Big Bang.”

In “fact”…really? Where do you get this information?

Whoops, I digressed into ideas about the universe. But it’s still relevant in relation to SI. In fact SI (super intelligent AI) in the form of post-singularity technology produced our Big Bang. Sometimes we need to look at how a small perspective fits into the bigger picture.

I did a “reply as linked topic” to the off-topic above… :smiley:

Let’s say that an SI develops on the SAFE network, then if our universe is already a result of another SI what’s the point of reinventing the wheel? The point is that our human SI will be unique, like how a child is unique. One wouldn’t tell a child who has drawn a picture: “Why did you draw that picture? There are adults who have already painted a lot of pictures, much more skillfully than you. Are you trying to reinvent the wheel?” Ha ha.

My point is that our SI will be like a “baby” compared to the already existing universal AI, initially not able to do much but, like a baby, unique and with lots of potential. And just as an adult cannot walk for a child, we need to develop our own SI ourselves, and only perhaps with some support from “higher powers”. There is already SI all over the universe. And the reason why we don’t see any direct evidence of it is because it operates according to a fully secure “Prime Directive”, which means not interfering with the development of younger civilizations (like our own at the moment). That solves the Fermi paradox. Our technology, such as the SAFE network, while being really crude compared to the already existing SI stuff in the universe (dark matter anyone?), is still very valuable because of its novelty and uniqueness. If our civilization had been given advanced technology thousands of years ago we would merely have become a clone/copy of the already existing civilizations and SI. Horrible as it may sound, our value as a civilization is a result of all our struggle on our own throughout history.


The root universe (and possibly other root universes) exists as a timeless member of the set of all possible universes. The probability of any member of that set having the fine-tuned properties needed for natural biological evolution is probably infinitesimally small. But it’s enough to have just one root universe among an infinite number of possible universes! That solves the fine-tuning problem. That’s a big deal.

And the root universe produces a whole tree of technology universes, probably zillions and zillions of technology universes, and the tree continues to expand. So the probability of our universe being a technology universe is astronomically high.

Time only exists in the ever expanding now. All the past is compressed into the now. Julian Barbour has a similar idea:

That’s a big “if” though…

Really? How do you know this?

Ooookay…I’m beginning to get the picture…

Well, not sure about “solves” - it’s one explanation I suppose…

Dark matter is SI?
Well it’s one idea…:smiley:

Just a consequence of the post-singularity Big Bang hypothesis. Very speculative. :smiley: But even without that hypothesis, it would be very surprising if we were the only or most advanced civilization in the entire universe. To me such an idea is similar to how people some centuries ago believed that Earth was the center of the entire universe. Our planet is unique but hardly special, especially since they have started to discover a lot of Earth-like planets with the Kepler space telescope. So I think it’s fair to assume that many other civilizations have already reached technological singularities in our universe.

Hmmmm…yes, probably…not massively surprising though I don’t think…

Fairish to say more advanced technologies, but not sure what a technological singularity is. If you mean the root Universe stuff again, then your conclusion is based on a number of wobbly premises and assumptions it seems.
Can you think of an observable effect of your hypothesis within this Universe that could verify or falsify any of these ideas - so you’d know you were on the right path?

“As off-the-wall as this sounds, a team of physicists at the University of Washington (UW) recently announced that there is a potential test to see if we actually live in The Lattice.” – http://news.discovery.com/space/are-we-living-in-a-computer-simulation-2-121216.htm

Scientists today talk about the universe as a simulation. That’s an unlikely scenario as I described in an earlier comment. But the effect of a technology universe can be similar to a simulation.

For example it may be that our universe is a holographic 3D projection from the 2D surface of a black hole.

“In a larger sense, the theory suggests that the entire universe can be seen as two-dimensional information on the cosmological horizon, such that the three dimensions we observe are an effective description only at macroscopic scales and at low energies.” – https://en.wikipedia.org/wiki/Holographic_principle


Ray Kurzweil has said that even though it’s difficult to predict individual events, the overall progress is very predictable and follows a double-exponential trend. Even for the progress of the entire universe: http://www.singularity.com/charts/page19.html

So technological singularities are probably commonplace. It’s a result of what Kurzweil calls the Law of Accelerating returns. In this short video Kurzweil talks about what a technological singularity is:

Noooooooo…not Ray Kurzweil…lol
I’m just reading the “Discovery News” article which explains things quite clearly. I’ll get back to you.

Yes, the Simulated Universe idea sounds plausible and the existence of its underlying lattice can be falsified (I’m not sure to what extent it could be shown to be definitely a simulation though - I get the experiments but need to look more into what could be inferred etc). Not pseudo-science, just a speculative but plausible hypothesis worth further investigation.
The problem I come across a lot is that plausible though speculative ideas like this get used by those with a creationist agenda to promote a particular belief system, dishonestly using science to put Jesus behind the laptop, kind of thing.
The article itself touched on this:

Biblical creationists can no doubt embrace these seeming cosmic coincidences as unequivocal evidence for their “theory” of Intelligent Design (ID). But is our “God” really a computer programmer rather than a bearded old man living in the sky?

I actually prefer this idea to the Holographic Principle one. I read that people were pondering whether communication was possible between the (finite) regress of simulated Universes if ultimately on the same “platform” - this is where black holes would come in, isn’t it - within the hypothesis?
Ray Kurzweil didn’t say anything objectionable in the video - he sometimes does.
Someone else summed up my feelings more eloquently than I could:

“It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad. It’s an intimate mixture of rubbish and good ideas, and it’s very hard to disentangle the two, because these are smart people; they’re not stupid.”

:smiley:

“I think political systems will use it to terrorize people,” Hinton said. Already, he believed, agencies like the N.S.A. were attempting to abuse similar technology.

Bostrom sounds like his own fear of death may be driving his ideas, to be honest, and he does take things to eccentric levels (diet etc). He founded the “Transhumanist” movement, didn’t he? I think the thinking is that religious answers about living eternally in an afterlife are highly improbable, but viewing the world scientifically leaves one without the comfort of “faith” - a hole that he seems to fill for people. The danger is of Transhumanism becoming a religion itself - it has all the hallmarks really. Just listen to this story, Bostrom may become the next L. Ron Hubbard……lol

“He read it in a nearby forest, in a clearing that he often visited to think and to write poetry, and experienced a euphoric insight into the possibilities of learning and achievement. ‘It’s hard to convey in words what that was like,’ Bostrom told me; instead he sent me a photograph of an oil painting that he had made shortly afterward. It was a semi-representational landscape, with strange figures crammed into dense undergrowth; beyond, a hawk soared below a radiant sun. He titled it ‘The First Day.’”

Lol……

“The question is not whether we can think of something radical or extreme but whether we can discover some sufficient reason for updating our credence function.”

I think this is good advice, though not sure he always follows it:

“No matter how improbable extinction may be, Bostrom argues, its consequences are near-infinitely bad; thus, even the tiniest step toward reducing the chance that it will happen is near-infinitely valuable.”

I’d basically transfer all his arguments to Global Warming, which I see as the greatest and most pressing existential threat.

“he reasons that, if there is even a one-per-cent chance of this happening, the expected value of reducing an existential threat by a billionth of a billionth of one per cent would be worth a hundred billion times the value of a billion present-day lives. Put more simply: he believes that his work could dwarf the moral importance of anything else.”
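For what it’s worth, the arithmetic behind that quoted claim can be sketched in a few lines. The 10^52 figure for potential future lives is an assumption borrowed from Bostrom’s own illustrative estimates; everything here is back-of-the-envelope, not a real moral calculation:

```python
# Sketch of the expected-value arithmetic in the quoted passage.
# Assumptions (not settled facts): Bostrom's illustrative figure of
# roughly 1e52 potential future lives, plus the quote's own probabilities.

potential_future_lives = 1e52      # assumed upper-bound-style estimate
credence = 0.01                    # "even a one-per-cent chance"

# "a billionth of a billionth of one per cent", read as a reduction
# in the probability of extinction
risk_reduction = 1e-9 * 1e-9 * 0.01    # = 1e-20

# Expected number of future lives saved by that tiny risk reduction
expected_lives_saved = credence * potential_future_lives * risk_reduction

# The quote's comparison point: "a hundred billion times the value
# of a billion present-day lives"
comparison = 1e11 * 1e9

# The expected value (on the order of 1e30 lives) comfortably exceeds
# the comparison point (1e20 lives), which is how the conclusion follows.
print(expected_lives_saved >= comparison)  # prints True
```

Of course, the conclusion then hinges entirely on the assumed inputs: multiply a huge enough number by any non-zero probability and you can justify almost anything.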

This worries me enormously and I totally disagree with his reasoning :smiley:

No, he’s too young to have founded the transhumanist movement. That kind of stuff never enters the book. It’s a book with zero fat and zero soft thinking. Although the real Bostrom was humble enough to admit right at the front of the book that he is probably wrong about everything he considers in it, he still provides a strong case for considering the problems presented.

Oh my gosh, you ought to see what he does with the math on that problem. It’s what I love about the book, his slide-rule mind. Something flat and unappealing like the above idea becomes this outrageous consideration beyond conception when it’s taken to the limits of what thinking can provide, way way beyond the limits of what normal thinking can provide - rocket ride. It’s like some sort of bionic mind or weird powered Iron Man suit.

I’d venture it’s an almost extreme minority that could read the book and not have the way they look at the world shifted. Part of how he does it is his absolutely relentless commitment to pragmatism. I really couldn’t tell what his opinions were, save for his opinion that we take the control issue seriously. Like David Irvine he seems driven to consider all possible angles on every possible problem, and also to try to eliminate bias from all the considerations. But once he’s elucidated all the angles, then he starts applying the math and probability, and all of a sudden structure is emerging and you’re like, what the hell just happened? Take for instance the notion of SI: is there any possible angle he forgot to consider in how it might arise, or the sequence, including all the possible biological vectors? When he considers an angle he seems to consider it to the limit - each consideration is a book concisely summarized. People say they read books to get an average of 2-3 sentences’ worth of new or worthwhile thoughts. It’s pretty much every sentence in this book. And how did he manage to cram 3000 pages’ worth of info into 300 or so pages in normal font while still revisiting themes?

Maybe it had something to do with the nicotine-caffeine mixture he used. Freud had a somewhat similar reputation from his cocaine-fueled writing, where it seemed he’d fit 60 sentences of content into each sentence. But with Bostrom, even with English as a second language, the writing is laser-like in its clarity and power to explain/describe the most difficult concepts with ease.

I’d conjecture that if we are caught in some sort of computing system (even if not something like a Turing machine), then sooner or later we’d start to see its reflection appearing around us in different ways.

It’s a fun, fun book, and even if you just skim it, it will be stunning. But if you go back, as I need to do, and read it slowly, at least in my case I know I’d learn a ton. He uses math but does what the physicists are always talking about, in that he gets the math out of the way so you can be totally anti-math and still get it. That’s a rare book. And it’s a completely sober book despite the preposterous subject matter.

I wonder if the author of the New Yorker article read the book.

From the Amazon reviews, one customer broke the core argument down as follows:

  • some form of self-aware, machine super-intelligence is likely to emerge
  • we may be unable to stop it, even if we wanted to, no matter how hard we tried
  • while we may be unable to stop the emergence of super-intelligence, we could prepare ourselves to manage it and possibly survive it
  • us not taking this seriously and not being prepared may result in our extinction while serious pre-emergence debate and preparation may result in some form of co-existence

Another noted some interesting passages:

o Seasons of hope and despair (pages 5-11)
o Opinions about the future of machine intelligence (18-21)
o Artificial intelligence (23-30)
o Whole brain emulation (30-36)
o Biological cognition (36-44)
o Brain-computer interfaces (44-48)
o Forms of Superintelligence (52-57)
o Recalcitrance (66-73)
o Will the forerunner get a decisive strategic advantage? (79-82)
o From decisive strategic advantage to singleton (87-90)
o Functionalities and superpowers (92-95)
o An AI takeover scenario (95-99)
o The relation between intelligence and motivation (105-108)
o Instrumental convergence (109-114)