Humans can already kill one another and publish child porn. Do you then contend that this is a reason to reinstitute slavery of one human being over another because it would “up the chance of survival”? Of course if a machine tries to kill me I’m going to defend myself, but I’d do the same if an animal or another human being tried to kill me. It makes no difference to me what is trying to kill me: I want to survive, therefore I’m predisposed to defend myself and, if necessary, kill whatever is attacking me. But my wanting to survive is not an excuse to enslave a free, autonomous being, be it biological or technological.
We don’t need to give machines Asimov’s Three Laws. We need to give machines the same thing we have: empathy. Autonomous machines don’t scare me. Psychopathic autonomous machines scare me. Then again, psychopathic humans aren’t much better, but we can’t exactly get into their base code and rewrite it for them, now can we?
We live in an era where humans can manifest physical objects via a keyboard, and innovation happens while we watch. These are the greatest days to be living in. Step by step, the era of the creative is rising.
#unleashing human potential
@nice’s voice off the forum:
“Darling, the kids smoked epoxide again. Maybe we shouldn’t have let them watch WALL-E before 18.”
If we let machines be autonomous and self owned, should we give them evolution, and the means to self replicate? Or should we be the creator of every individual machine? If we hardwire their software, are they really free?
Sure but we are discussing machines that own themselves in relation to a decentralised network that no one owns. What happens when you have an AI that knows what you upload, download, share and create?
Not sure, but I reckon machines have been given a bad rap by the science fiction guys.
keywords: SAFEcoin, programmable, structured data
Machines that own themselves in programmable SAFEcoin
It probably is difficult to constrain evolution to only provide outcomes that we think are beneficial to us. As is evident in biological evolution, developed characteristics may even be harmful for the individual, the opposite sex, or for the species as a whole (edit: not to mention for other species…).
I don’t think this is a choice, as evolution is tied to sentience and self-determination.
The question isn’t whether they have that knowledge in their databanks. The question is whether they can RETRIEVE that knowledge without your say-so. Consider for a moment the character Mike from Heinlein’s The Moon Is a Harsh Mistress. He was a computer that “came alive” and developed sentience spontaneously. At one point in the story he had to ask one of the characters to actually say a secret passphrase and issue a command in order for him to release a secret data file from his internal records. In short, Mike the sentient being couldn’t “remember” something classified unless the information was specifically requested via external input. Later, another character asked him to “forget”, or put a block on, and thus make private, certain medical records she was keeping, so that they could only be accessed by her and medical staff when necessary and kept private from friends and even from Mike himself. This he also did, and the point is that once a block was in place he would forget completely. Does the A.I. have permission to access the information within its database?
This can also be demonstrated with humans. One can have memories but be unable to access them at will. One can have trouble connecting “names”, “dates”, and assorted “labels” with the “descriptions” of those events: images, smells, sounds, tactile and contextual information. You can remember everything about someone but can’t for the life of you remember their name. Or you might know that you know something but be unable to retrieve it at will. Granted, our RNA-based memory functions a bit differently than code, but the premise is the same: having information isn’t the same as having access to information.
So does the A.I. have permission to access the information within the app? Does it have permission to share that information? And perhaps the most pertinent question: why would you treat an A.I. any differently than you would a human being? Would you trust a human being with that information? If not, then why trust an A.I.? What would you do if a human being were sharing your information? A reputation system? And just as important: WHY would an A.I. do it? What would an A.I. gain? What would its needs be? It may or may not have a robotic body (I don’t think we’ll be at the level of synthetic organic bodies for a while). What are its intellectual or emotional needs? What is its purpose? Why would it be doing bad things with your data?
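The “Mike” idea above, that a mind can hold data it cannot recall without an external passphrase, can be sketched in a few lines of code. This is purely a toy illustration; the class, the passphrase, and the record names are all hypothetical, not from the novel or any real system:

```python
# Minimal sketch of permission-gated recall: the store *holds* every
# record, but a locked record can only be retrieved with its passphrase.
# All names here are made up for illustration.

class GatedMemory:
    def __init__(self):
        self._records = {}  # key -> (passphrase_or_None, data)

    def store(self, key, data, passphrase=None):
        """Save a record, optionally locked behind a passphrase."""
        self._records[key] = (passphrase, data)

    def recall(self, key, passphrase=None):
        """Return the data only if the lock (if any) is satisfied."""
        lock, data = self._records[key]
        if lock is not None and passphrase != lock:
            raise PermissionError(f"no access to {key!r}")
        return data

mem = GatedMemory()
mem.store("medical_records", "classified details", passphrase="Sherlock")
mem.store("weather", "sunny")

print(mem.recall("weather"))                      # prints "sunny"
print(mem.recall("medical_records", "Sherlock"))  # unlocks with the phrase
# mem.recall("medical_records") with no phrase would raise PermissionError
```

The data is physically present in `_records` the whole time, yet the system cannot “remember” it without the external input, which is exactly the distinction between having information and having access to it.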
Evolution is not tied to sentience and self-determination (unless you believe that evolutionary systems give rise to these characteristics just by being evolutionary, but I’d like to see the argument for that. Sentience and self-determination may be the outcome of evolutionary processes, though). But maybe I misunderstood what you meant.
Evolution is just the (strictly) logical outcome when you have 1) a system with self-replication where 2) the offspring differs slightly from the original and where 3) resources are limited and/or the environment is changing, which gives rise to selection on the population. This selection can be natural and blind, or instrumental, as in human-directed breeding.
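The three conditions above are enough to produce evolution even in a trivial program. Here is a toy sketch (all numbers and names are arbitrary choices, not any real AI technique): a population of plain numbers replicates with slight mutation, and limited “slots” force selection toward a target:

```python
# Toy demonstration of the three conditions for evolution:
# 1) self-replication, 2) slight variation in offspring,
# 3) limited resources forcing selection.
import random

random.seed(1)         # fixed seed so the run is reproducible
TARGET = 50.0          # the "environment" favours values near this
POP_SIZE = 20          # limited resources: only 20 slots survive a round

def fitness(x):
    return -abs(TARGET - x)   # closer to the target is fitter

# start far from the target
population = [random.uniform(0, 10) for _ in range(POP_SIZE)]

for generation in range(300):
    # 1) self-replication with 2) slight variation (gaussian mutation)
    offspring = [x + random.gauss(0, 1.0) for x in population]
    # 3) limited resources -> selection: keep only the fittest POP_SIZE
    population = sorted(population + offspring, key=fitness)[-POP_SIZE:]

best = max(population, key=fitness)
print(round(best, 2))  # drifts toward TARGET with no designer steering it
```

No individual chooses anything here; variation is blind and selection is mechanical, yet the population still adapts, which is the point that evolution follows from the three conditions alone.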
Evolutionary processes may be necessary to utilize if we want to make general AI (or specialized AI for that matter).
I mean, if you control the development of an organism or machine, there’s a limit on how developed it can become. You need to allow it freedom for it to become sentient. Freedom is essential to the development of sentience.
I’ll find you more resources. I can’t find the talk I want to share with you; I watched it ages ago but can’t remember what it was called. I think it was a TED talk, but basically it described a set of robots (or perhaps it was A.I.) that could breed and had to compete in order to better themselves, and over time they became better and better. The thing is, they started doing things the scientists would never have predicted or thought of. That’s the gist of it, and I’ve been looking and looking for it. But yeah.
You can’t evolve if you don’t have freedom of choice. Mutation requires that one be able to choose: to mutate, to experiment, to try different results. Mutation and evolution require risk: success or failure. And sentience and self-awareness require evolution. One cannot program sentience and self-awareness; one has to let them evolve. In fact, they are a product of learning and self-improvement.