Proof of unique human


When it is one intelligence against another intelligence, it's not a challenge, it's an opportunity. We can never win in a situation where resources are at stake - there will always be those willing to challenge whatever is put forward. So instead of banging our heads against the wall, eternally trying to give access only to those whom some human-created equation determines to be valid, let's open the doors and bring in those who will work with us to ensure the network for all by offering work in EXCHANGE for resources.

A real-world office is subject to manipulation and control by those with money and power and that shall never be an acceptable solution.


It’s really simple. You go in person to an office where you get a Nymi along with an identity verification code. The code would allow the decentralized application to identify you on SAFE Network and the Nymi would prove without a doubt that you’re a human being.

This would be secure enough for voting because it would be more secure than the current ballot process we use. As long as no one knows your identity verification code, no one would be able to know it's you except for the decentralized application. Everyone would know you're human because you'd have enough verification points to back it up.

A real-world office is subject to manipulation and control by those with money and power and that shall never be an acceptable solution.

How are they going to control it? There could be offices everywhere. They wouldn't be able to control every office in every country around the world, would they?

Even if somehow they could control the offices, the decentralized application wouldn't really have the sort of problems we currently have. It's a step up from regulated electioneering, but not perfect.

If you want the ability to vote then you have to prove you're a human. It's the only way to have any sort of democracy. Meeting in person to get a Nymi and a code is just one way; if you're too worried about the office being corrupt, then you could go to the office and the code could be mailed to you at a random date. If you think your mail is being spied on, then it could be transmitted over your cellphone.

The point is that code-based methods do work for verification and are state of the art. A robot or machine cannot beat it, because if you don't wear the Nymi while you enter the code, it doesn't work. The only reason to meet in person would be for an extra layer of confirmation, but you could probably just mail a Nymi and then send a code.


Being human is arbitrary and is not an objective reason to offer resources - exchange is not arbitrary and is an objective reason to offer resources.

I am not against robots, A.I., machines, etc. - if they are working for the network, that's fine; then they can have access. I am against bad actors, so giving resources to ANYONE is fine with me. Why discriminate if they are willing to exchange as partners with the rest of the community?


Only a human being can vote in a democracy. The whole point of verifying human beings is so people can do business with, or take part in democratic processes involving, other human beings.

If people are afraid to leave their houses or meet with other human beings in person then there are problems which SAFE Network cannot solve. If you’re not under a totalitarian regime then you can meet with other humans and that should be enough to have those other humans put their reputations on the line to confirm that you’re human.

This could all be done by using a distributed oracle. The human beings at the office or at the SAFE Network meetup could literally hand out free Nymi devices and then collect an email address, cellphone number, or mailing address, or just hand the code directly to the participant.

The participant would go home and then log in with their code while wearing the Nymi. The decentralized application would immediately know it’s the same person who was at the meeting. The people who saw that person would all report to the distributed oracle that the person with that code and unique biometrics is a human.

Now you would have a bunch of people who would inform the oracle which would give points to the human so that humanness can be quantified.
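The tallying step described above could be as simple as the oracle counting distinct signed attestations per identity code. A minimal sketch, where the class name and the points-per-attestation value are made up for illustration and not part of any SAFE Network API:

```python
# Hypothetical humanness-points tally: each verifier who met the participant
# in person reports an attestation for that participant's identity code,
# and the oracle sums distinct attestations into a quantified score.

class HumannessOracle:
    def __init__(self, points_per_attestation: int = 10):
        self.points_per_attestation = points_per_attestation
        # identity code -> set of verifier ids (each verifier counts once)
        self.attestations: dict[str, set[str]] = {}

    def attest(self, identity_code: str, verifier_id: str) -> None:
        """Record that `verifier_id` vouches this code belongs to a human."""
        self.attestations.setdefault(identity_code, set()).add(verifier_id)

    def score(self, identity_code: str) -> int:
        """Quantified humanness: points per distinct attestation."""
        return (len(self.attestations.get(identity_code, set()))
                * self.points_per_attestation)


oracle = HumannessOracle()
for verifier in ["alice", "bob", "carol"]:
    oracle.attest("code-1234", verifier)
print(oracle.score("code-1234"))  # three distinct verifiers -> 30 points
```

Using a set per code means a single verifier attesting twice doesn't inflate the score, which is the minimum you'd need before worrying about colluding verifiers.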


Why? Why not let all who support the network vote? Why do you want to discriminate against those who would do good for the network?


So you want to let robots and decentralized applications vote on issues which concern human beings? That doesn’t make any sense.

Also, not every vote is network-wide. Suppose a tribe forms in a certain section of the network which wants to vote on issues that concern only itself, without having the whole of the SAFE Network participate?

Suppose for instance it’s a decentralized autonomous corporation or decentralized application running on the SAFE Network? How do you allow participants to vote?

What if it’s a distributed government? How do you allow the members to vote on policies if no one can determine who the real members are?

A distributed oracle can allow for you, me, and several other people to meet some person at a meetup and then on our cellphones push a button which confirms they are a human being to the oracle. Just like that a tally would happen and the human being who met with us would receive points representing a high level of humanness.

They would be given a Nymi and would then put it on. From that point on the network would know them and there would be a web of trust to make sure.


If the network is of value to A.I.s or robots performing tasks for other humans or A.I.s, then, so long as they are giving something back in return for their use of the network, they should have a voice. Why do you think the network only concerns humans if other entities are using it too? Your view seems to be the one that makes no sense.

Regarding your tribe example, however, you are referring to a private network - invite only; and in that instance, the members are authenticated by existing members.


There could be humans on the network who don’t want AI making decisions for them and their families. I’m all for AI but let’s at least be realistic. AI isn’t going to be replacing democracy any time soon.

Most human beings make decisions through some form of voting. Voting is how you get feedback from humans. If we don’t know whether or not the feedback is coming from humans then how can SAFE Network evolve into something which benefits humans?

If you don't want it to evolve into Skynet then it requires the feedback loop which can only come from humans. Additionally, sometimes I will want to know that I'm talking to a person, so that when I make a deal I know it's with another person.

SAFE Network is indeed a network which has private or exclusive functions. Suppose you want to give some specific human beings access to private content? You should be able to do this and securely.


I didn’t say nor imply this. I’m merely saying there is no need to discriminate who votes. Private networks can vet themselves through invites. The general network however cannot discriminate arbitrarily and yet maintain objectivity.


Proof of unique human is necessary as a feature so that private networks can emerge on the main network. It’s also necessary so that reputations can emerge so that we can conduct due diligence. The idea that you can have business without reputation is puerile.


Can we stick to the real world? Skynet is from a Hollywood fantasy film.


How so? Where are you getting this idea? I can invite whoever I like into my private network; why do I need or even desire to know that they are human, if I find their input useful and/or interesting?


If we are sticking to the real world then it’s essential to have the ability to determine whether or not you’re dealing with a human being for certain business operations.

Having software agents is fine and dandy for certain things. These software agents can indeed develop a reputation. We also need to deal with human beings both individually and collectively, so while it will not be required that you verify your humanness it will be something most people will want to do.

There are way more benefits to being a human being than to being a machine. There are definitely benefits to being verified human because if you also have a good reputation on top of this then you’ll be able to do stuff which a person with no reputation or a bad reputation cannot do.


Like what? If A.I. comes along that can exchange with others appropriately, why should it be treated as an inferior?

You keep repeating this as a mantra, but you don't really answer my question: why discriminate when the A.I./robot is exchanging with you and the network in an appropriate manner? Instead you give a vague answer as above and keep insisting you are right. I am not convinced.


Because it’s not just about exchange. It’s also about feedback. AI will never be able to have preferences or qualia.



Sounds similar to a key-signing party.


Russell, that is right. It would be similar to that, only much easier and more professional. The idea is to have identity offices which we can walk into, and they handle all of what we need to have a persistent cyber identity while remaining pseudo-anonymous.

Only that office would be able to identify who we are. If there is an investigation then authorities would go to that office to find out who we really are.

Over time it could be decentralized in such a way that a group of your friends could be given pieces of an identity which can only be reconstructed if they all agree to out you. This would require trust but you’d get to choose who you trust and they don’t all have to know each other.
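The "pieces of an identity which can only be reconstructed if they all agree" idea maps directly onto n-of-n secret splitting: XOR the identity with random shares so that every friend must contribute their piece before anything is recoverable. A minimal sketch, assuming the identity record fits in a byte string (a production scheme would more likely use Shamir secret sharing so a threshold of friends suffices):

```python
import secrets
from functools import reduce


def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def split_identity(identity: bytes, n_friends: int) -> list[bytes]:
    """n-of-n XOR secret split: every single share is needed to reconstruct."""
    shares = [secrets.token_bytes(len(identity)) for _ in range(n_friends - 1)]
    # The final share is chosen so that XOR-ing all shares yields the identity.
    final = reduce(xor_bytes, shares, identity)
    return shares + [final]


def reconstruct(shares: list[bytes]) -> bytes:
    """Only works if ALL friends hand over their pieces."""
    return reduce(xor_bytes, shares)


pieces = split_identity(b"real identity record", 5)
assert reconstruct(pieces) == b"real identity record"
# Any subset of fewer than 5 pieces is indistinguishable from random bytes,
# so no partial coalition of friends can out you.
```

Notably, the friends never need to know each other: each just holds an opaque blob until all agree to combine them, which matches the trust model described above.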

The fact is we'll have to figure out how to do pseudo-anonymous identity, and there are many ways. I think a corporation could set itself up specifically for these purposes, or a set of technologies in the form of decentralized applications could be set up to do it. It has to be as simple as registering to vote is today, if not simpler.

Also we have companies like Circle. Circle might be able to provide access to a sort of open API so that anyone with a Circle account would automagically be verified on any DAC (including SAFE Network). It’s just a matter of who and what we want to trust with our identity and I think that should be left up to the individual.

I would not do it as a public key-signing party. Generating keys is too difficult for most people, and that process is really only for hardcore crypto nerds. A DAC itself, if it could generate random numbers, would be able to create a unique code for every human which would be known only by the DAC and the human participant. The code would basically be like a serial number on a blockchain mapped to a human being.
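Generating such a serial-number code is the easy part; the hard parts are out-of-band delivery and the biometric binding. A sketch of the code-issuing side only, where the class and method names are invented for illustration and not a real DAC API:

```python
import secrets


class CodeRegistry:
    """Illustrative DAC-side registry: one unique random code per human,
    known only to the DAC and the participant it is handed to."""

    def __init__(self):
        self.issued: set[str] = set()

    def issue_code(self) -> str:
        while True:
            # 128 bits of randomness: collisions and guessing are negligible.
            code = secrets.token_hex(16)
            if code not in self.issued:
                self.issued.add(code)
                return code

    def is_valid(self, code: str) -> bool:
        return code in self.issued


registry = CodeRegistry()
code = registry.issue_code()
assert registry.is_valid(code)
assert not registry.is_valid("not-a-real-code")
```

On an actual blockchain the DAC would publish only a commitment (e.g. a hash) of each code, so validity can be checked without revealing the code itself.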

SNARKs might also play into this. They might be able to facilitate a DAC or decentralized application which can confirm that the user is a human being with the right code and biometrics to match.


First off, if actual AI evolves to a point of sentience, I’m pretty sure we’re going to have bigger things to worry about.

Second, I personally want a proof-of-unique-human system for voting. I want that, so other people want that too. You don't seem to want it, and so there are people out there who also don't want it. So explaining why we shouldn't want this is sort of futile.

Hopefully we'll just adopt a system where it's not cost-effective to run bots to bog down the network or Sybil-attack it.


You say other people want what you want, then you say I don't seem to want it. Obviously not everyone wants your subjective, arbitrary preference; hence forcing the whole network to prove they are human would be pushing your view onto others.

This sounds like unfounded science-fiction paranoia. Also, Google has embarked on a large-scale project to build A.I. and robotics, and IBM is working on neuron-like chips. So it's perhaps not so far off as you may imagine.

Nobody has yet addressed my question of why this is necessary. Being human is arbitrary; you might as well say "I want proof of white skin" - an arbitrary distinction/preference. If the user is contributing to the network in a positive way (via proof of work, farming, etc.), then that gives objective proof of their contribution to the network, and hence that user should have the same privileges as any other. Wherein lies the logical error? If none, then you have no rational argument to coerce all entities on the network into your arbitrary subjective preference.

A simple proof of work would be objective and reasonable to keep mischievous bots out of the network.

Proof of work: many possibilities with reputation scoring seem plausible; here's just one, which would be a regular bit of work of value to the MaidSafe network. Note: not one-time work! A sapient might create an account and then hand it over to a bot.

First, I'm not sure if there is a means of tagging public data on the network yet, but it might be a good idea, and it would allow the following: public data uploaders add tags (create new ones or select from a list). With this done, we enlist the help of users to verify that the public data is tagged correctly. Asking new users to blindly put tags on a number of public data uploads (they can't see how each upload was originally tagged), and then scoring their efforts against the majority of tags, will tell us whether they are sapient beings or bots. It will also serve to correctly tag the data.

This also offers us the ability to push back against those who might load up nasty stuff under arbitrary or disarming labels: they can be kicked off the network for putting false tags on public data, hence discouraging bad actors.
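Scoring a newcomer's blind tags against the majority could look like this rough sketch; the data shapes, tag values, and the notion of a single majority tag per upload are all simplifying assumptions for illustration.

```python
from collections import Counter


def majority_tags(all_tags: dict[str, list[str]]) -> dict[str, str]:
    """For each public upload, the tag most users agreed on."""
    return {upload: Counter(tags).most_common(1)[0][0]
            for upload, tags in all_tags.items()}


def score_new_user(user_tags: dict[str, str],
                   consensus: dict[str, str]) -> float:
    """Fraction of the user's blind tags that match the majority tag."""
    checked = [u for u in user_tags if u in consensus]
    if not checked:
        return 0.0
    agreed = sum(1 for u in checked if user_tags[u] == consensus[u])
    return agreed / len(checked)


# Illustrative history: three uploads already tagged by several users.
history = {
    "photo-1": ["cat", "cat", "dog", "cat"],
    "doc-7":   ["recipe", "recipe", "recipe"],
    "vid-3":   ["music", "music", "spam"],
}
consensus = majority_tags(history)
newcomer = {"photo-1": "cat", "doc-7": "recipe", "vid-3": "spam"}
print(score_new_user(newcomer, consensus))  # 2 of 3 agree with the majority
```

A threshold on this score (say, agreeing with the majority most of the time) would gate account privileges, while persistent low scorers, whether bots or mislabelers, lose access, matching both goals described above.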


If that's the way it's going to be, then I will fork MaidSafe. I'm an investor in MaidSafe to give people a way forward against the corrupt "authority" you are so obviously bowing down to.