Proof of unique human

I’m opposed to the idea of putting a price on the use of maidsafe and especially to making it invite only or exclusive. I posted about why here:

But in short, it’s because it would exclude those without initial capital to invest.

I’m a little upset that you took my word “donate” and changed it into “taxes”.
If that was not your intent, then please explain why you replied to my post.

Just to be clear, donation is voluntary, while taxes are obligatory. At no point did I advocate any kind of obligatory payment. As for the exclusion part, that was not my idea. We are discussing ways to prevent users from creating multiple accounts using a system similar to Ripple.

What is so wrong with my willingness to give away my own safecoins in order to help the network?


I think Blindsite2k is just misunderstanding the intention, or possibly the cost. We’re talking about a really small amount of money. Under a dollar. Possibly pennies. And what I was saying about exclusivity is just a momentary byproduct of this approach when it first starts running. If you need a special coin to gain access to Maidsafe, that generates buzz and gets people motivated to find them. If it’s easy to find them, then all the better.


Anyone who knows me and reads my posts will know I’m one of the biggest proponents of opening up the SAFE Network to all, including those with no resources.


I really like all the ideas here. Why not use them all?

So if you use a voice recognition account, you get a little icon saying verified. This will give you more rep; maybe apps on the network will grant more privileges, etc.

To follow on from @dyamanaka, on the idea to donate, perhaps app developers could “sponsor” people onto the network. In return for trying out a game or testing out an app, the user gets a small bit of safecoin from the developer. This could solve the captcha problem too, since playing games would be too complex for bots; and if bots could manage it, the developer would simply make the task more difficult so as to stop losing coin.

Of course, you could just pay yourself on as also suggested gaining a “paid” status.

Then you could have the basic captcha account, which nodes could evaluate in their own way.

I think giving as many options as possible and allowing the network to award good behaviour strikes a nice balance.


More doors to the same building makes it more accessible to everyone.

Regarding bots playing games,
I actually have a bot (auto clicker) that plays some games, based on scripted clicking & triggered events. AI is growing smarter every day.


Here’s a thought: instead of having one way to verify whether a user is human, why not have multiple little ways? People do lots of little things that are unique to being human. Nuances in their voices are just one example, but another way to spot them is their behaviour online. I’m sure you’ve spotted those fake Facebook accounts (or maybe you haven’t) that add you as a friend (or try to) but have no posts, are fresh accounts, with lots of friends and a hot pic to entice you in? Obvious fake bot account. A real person posts regularly and that’s how they get friends: they build relationships over time. A real person can construct a coherent thought, sentence and paragraph. Can a bot do that?

A real person can hold a conversation, a real conversation. Imagine a system where you verify by having a supervised conversation with at least one other user, just 5 or 10 minutes of your time. Enough to prove you can abstract ideas and think logically and sentiently. During the conversation you’d have 2 or 3 other participants or observers who would vote on whether you were really a human. Set criteria would be established (so they couldn’t vote you down for having the wrong opinion or being from the wrong place or something like that). Over time, more people could observe and vote on these “interviews” as they became more widely used.

Or what if one of the ways you got a vote for being “human” (or not human) was if someone monitored your social network behaviour over a period of time? You could opt into this trial period (so as not to violate your privacy or consent) and your social network behaviour would be reviewed for a period of time (say a couple of days, weeks or a month, however long it takes to establish a pattern) and then discontinued. Observers would vote on whether you were human or not. If you won the YES vote, you got a check on the human tally. Or different games and puzzles could be created, again to establish if you were human, or emotional challenges.
This could even turn into a new form of game development: proving you are human. The more tests you pass, the more votes you get to prove you’re human and unique. These votes are then tallied and hashed. The system doesn’t know what you did to prove you were unique; it simply knows you did it and that you are. It doesn’t know whether you used voice recognition, or social media, or a captcha, or a conversation, or what; it simply knows that you got the votes. Of course, people can vote that you are NOT human as well. Those votes too are hashed into the system. If you get more yes votes than no votes, you are declared human. If you get more no votes than yes votes, you are declared not human. This would also lead to people wanting to be unique and distinguish themselves from A.I. bots.

Does this sound nuts?
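The tally-and-hash idea described above could look something like this toy sketch. Everything here is invented for illustration: the function names, the vote format, and the use of SHA-256 as the hash.

```python
import hashlib

def record_vote(tally, voter_id, is_human):
    """Store a hash of each observer's vote plus the bare yes/no bit.

    Only the digest and the verdict bit are kept -- nothing about HOW
    the observer decided survives.
    """
    digest = hashlib.sha256(f"{voter_id}:{is_human}".encode()).hexdigest()
    tally.append((digest, is_human))
    return digest

def verdict(tally):
    """Declared human iff yes votes outnumber no votes."""
    yes = sum(1 for _, v in tally if v)
    no = len(tally) - yes
    return yes > no

tally = []
record_vote(tally, "observer-1", True)
record_vote(tally, "observer-2", True)
record_vote(tally, "observer-3", False)
print(verdict(tally))  # 2 yes vs 1 no -> True
```

In a real system the tally itself would live under network consensus rather than in a local list, but the yes-versus-no comparison is the whole decision rule.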

I think Blindsite2k is just misunderstanding the intention, or possibly the cost. We’re talking about a really small amount of money. Under a dollar.

I would like to point out that the majority of the world lives on less than a couple of dollars a day. What would the maidsafe system look like in say Africa, India, the Middle East or China if we charged “only a dollar” in order to join? Think about it for a moment.


The problem I see with suggestions for proving that a user is human is that they leak personal information badly.

The worst are the voice / face recognition proposals. Imagine - and this is totally made up of course - an adversary powerful enough to set up a lot of nodes on the network. An adversary so powerful, in fact, that it can also access phone conversations, social network data, tap Skype calls and the like, and run statistical speech-pattern analyses on this data.

The latest thorn in their side is a secure, decentralized network that promotes the same privacy they spend most of their resources fighting. Users are anonymous, and all the data they store on the network is client-side encrypted. What a pain! How are they going to tell who’s a freedom-hating terrorist now?

Oh, but there’s a small caveat. As it turns out, all the private stuff on the network is encrypted on clients… except, that is, for the little chunks of data that users send to verify that they’re human. Stuff like voice recordings, pictures of themselves, or directed conversations. Stuff that can be cross-referenced with other data sources super-easily. And, get this - they send it voluntarily! Now this is just great. All they have to do is allocate a few hundred thousand well-behaved nodes and let the information stream in. It’s Christmas every day.

Nah, I’m just kidding. They won’t provide well-behaved nodes to that hippie network. They’ll just infect as many as they can and let them harvest personal info silently.

Thankfully it’s all made up, right?


How are they going to tell who’s a freedom-hating terrorist now?

A: They look in a mirror.


Governments. Corporations. Maybe the Mafia… Other such entities.

I think we need to ask the fundamental question of whether it’s worth trying to keep bots off the network, versus the security concerns. Perhaps the question shouldn’t be keeping bots off the network but rather keeping them away from users and user data. There’s a lot of talk about public and private shares, but what about public and private do-not-shares? What if I want a file shared with everyone except users x, y and z? What if I want to alert the community to users I do not wish to share with or think should not be shared with (everything from “I don’t like your attitude” to “we have fundamentally different core values” to “You ripped me off in our last bitcoin trade!”)? It wouldn’t prevent anyone from sharing with them; it would simply be a list of who you weren’t sharing with and why, and this could develop into a public database. What I’m getting at here is that if we know $EvilBot is on the network, it doesn’t really matter, provided everyone ignores him and doesn’t share any files with him.

That being said, I think it’s important that users be able to create different usernames/profiles/identities. For instance, you don’t want your job identity being mixed up with your personal sex life (or at least in some cases you don’t). There’s been a lot of talk about only allowing a user to have one account on maidsafe, but I’d say that’s a really bad idea. 1. It infringes on the freedom of the user. 2. It compromises security. Eventually that name will become known one way or another, and if the user isn’t free to create a new identity, he’s stuck with a compromised digital reputation. People are prejudiced, bigoted bastards a lot of the time. 3. If your username shows up in the same places all over the place, people can track that and develop a profile on you as easily as they could with your password. If you want security, you’ll change your username regularly.


Let’s never use this as a requirement to create an account though. “Hey, you should join this new, really cool network! Just click the link I sent you - and uhm, well, you just have to pass a quick interview and you’re good to go!”. :wink:


Very good point. Nice fictional story writing too! :slight_smile: I do think that if we come back to this at some later point, the NodeManagers should only have access to hashes of the voice print, for example. But I agree, the client side cannot be trusted from the network point of view, so sending raw voice to the NodeManagers is indeed off the table, unless it could be unlinked from the MAID in some genius way (@dirvine something to break your head over after testnet3 :wink: )

I am not sure my proposal was clear enough (unlike me, eh? :-) ). The proof of human would be a collection of data to recognise a human not as an individual, just that it’s a human.

The stats would be hashed and that hash stored. It is impossible to reverse a hash, and it is not linked with an account. It is a mining proof alone, not an access mechanism (which would have all the effects people rightly worry about here). This is a pretty big area of research at the moment and answers a question I have had for many years: how do we prove an account is a human, with that human remaining completely anonymous and private?
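A minimal sketch of that hashing step, assuming the stats are first quantized into buckets on the client. The feature names are invented for illustration, and SHA-256 stands in for whatever the network would actually use.

```python
import hashlib

def proof_of_human_hash(stats: dict) -> str:
    """Hash a set of (hypothetical) behavioural statistics.

    Only this digest would be stored on the network; the raw stats
    never leave the client, and a SHA-256 digest cannot be reversed
    to recover them.
    """
    # Canonical ordering so the same stats always yield the same hash.
    canonical = "|".join(f"{k}={stats[k]}" for k in sorted(stats))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Invented feature names -- not part of any real MaidSafe API.
stats = {"typing_cadence_bucket": 7, "session_rhythm_bucket": 3}
digest = proof_of_human_hash(stats)
print(len(digest))  # 64 hex characters
```

Because the digest is detached from any account ID, holding it reveals that *some* human produced valid stats, not which one.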

This collection of the stats can be done via a consensus chain, so it is not theoretically possible, even with a 50% attack, to collect this information. It can be stored in the same way. Not simple, but it would be an amazing thing if we can pull this off. Worldwide consensus on many issues could be collected, and people could be assured there are no bots ranking info, etc. I think this offers immense opportunity. Done incorrectly, it would be disastrous though. I agree with that part for sure.


I think this might be part of a philosophical difference between you and me. I completely agree that people should be able to have multiple accounts, because inevitably something will happen that compromises an account, as on every other network. If it’s compromised on a decentralized system, chances are you’re losing it, so you’ll need to create a new one. That’s definitely an entire thread unto itself.

I believe the vast majority of people want to minimize the number of usernames they have. And I don’t think the system should cater to the minority in this particular case, especially if it means leaving open an attack vector on the system (i.e., a bot generating millions of accounts). By requiring a small fee (we’re talking the equivalent of pennies here, if the dollar example seemed too outrageous), it prevents robots from generating accounts. A single user won’t be deterred by spending a dollar of their own money to create a bunch of accounts (like you’re saying), but someone generating accounts with a bot is going to think twice before creating a million accounts for $50,000 or more. You could probably even figure out a way to have the new-account cost be some sort of equation taking into account the total number of coins, users, income distribution over the network, and whatever else might apply.
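Even a flat per-account fee shows the asymmetry being described; here is a sketch in integer cents (the function name and the 5-cent fee are illustrative, not part of any real design):

```python
def account_creation_cost_cents(n_accounts: int, fee_cents: int) -> int:
    """Total cost, in cents, of creating n accounts at a flat per-account fee."""
    return n_accounts * fee_cents

# One person making a handful of accounts barely notices a 5-cent fee...
print(account_creation_cost_cents(5, 5))          # 25 cents
# ...while a bot farm creating a million accounts faces a real bill.
print(account_creation_cost_cents(1_000_000, 5))  # 5,000,000 cents = $50,000
```

A dynamic fee, as suggested above, would replace the constant `fee_cents` with a function of network-wide quantities; the deterrent logic stays the same.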

Agreeing with your ideas here to a great extent. We should be aware though (just for clarity) that the account itself is never known or transmitted; the public-name accounts are. This still has the same issue you discuss, so this is just a wee bit of info. You can consider three independent systems in play here:

1: Login (retrieve some unknown data and decrypt)
2: MAID (these credentials are not used in comms, only in data store/get etc.)
3: MPID (public name): this is where the problem lies; this thread is talking about public names created by the user. At the moment there is no limit.

These IDs are all separate and not connected, so somebody cannot trace back to your data store/get ID (MAID) from your public name. They cannot track from one public name to another either.

I hope that helps a bit, as I say not changing the discussion, just a minor tech detail really.

This is indeed a tough one for sure. A suggestion: a verified public name for voting, ranking etc. (no messages or comments), and separate public names for ‘other stuff’? Just a thought.


I think that I know what you’re saying and I think that two different things are being discussed in this thread. You are saying that we could/should use these techniques not to prevent someone from creating multiple accounts, but to prevent someone from validating multiple accounts as “human”. Is that about right?


Yes this is it. So you can only validate a single human account.


I’d disagree with that. Is my need for a love life (romantic human) or sex life (sexual human) of less value than my need for my job (professional human) or my ability to vote (political human)? And should I have to compromise my security in order to perform any of these tasks?

Okay, this next analogy is going to sound weird, but what if there was some little magic fairy (a bot, program, subroutine, you devs work your magic) that checked whether people were human and kept a database of IF people were human but not HOW they were proven human? Basically a very reliable but forgetful fairy. So Bob wants to know if Alice is a real person, so he asks the human checker if Alice is real, and the human checker checks her list: “Oh yeah, she’s real!” “How do you know?” “No clue! I just do!” Then Alice asks the human checker, “Is Bob real?” The human checker checks her list and can’t find Bob, so she takes Bob aside and has him do a few tests, and when he passes she puts him on her list, and as soon as the last penstroke is finished she gets sudden amnesia of the entire test process. Bam! “I know you’re real, Bob! No idea how, but I do!” And off she goes to tell Alice that Bob is real.

Basically, the problem with most ways of confirming you’re human is that they require submitting lots of personal details, details that can be leaked. But the goal here isn’t to store how we know someone is human (the personal information: voice pattern, credit card number, birth certificate, speech patterns, social network behaviour, whether friends vouch for you, passing captchas, etc.), but rather that you ARE human. A binary question: yes or no. How one arrived at that conclusion is irrelevant to the question of whether you are human.

So while different people might use different means to validate their accounts, none of that personal data gets SAVED; the only thing that gets saved is the result of the experiment: “Is the user human? Yes or no?” Now, a user might have a lot of this data saved client-side (voice pattern, personal details, handwritten signature if they have a tablet, facial recognition, etc.), but none of it needs to be transmitted on the network. The entire “test” can be done client-side, the result recorded, the test procedure and methods erased (the personal details, which method was used, in short the how), and the only thing transmitted would be “yes or no”. This would ensure security for the user. All of this would be encrypted, and this particular piece of data might even have its own hash, as it’s very special and is used for multiple IDs.
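A sketch of the “only yes/no leaves the client” idea. All names here are hypothetical, and a random token stands in for whatever unlinkable identifier the network would really issue.

```python
import hashlib
import secrets

def run_humanity_checks(evidence: dict) -> bool:
    """Hypothetical client-side checks; real ones might be voice or
    behaviour analysis. Here we simply require every check to pass."""
    return all(evidence.values())

def attest(evidence: dict) -> dict:
    """Run the checks locally, then discard the evidence.

    Only a yes/no verdict and a random, unlinkable attestation id are
    ever produced for transmission -- never the evidence itself.
    """
    is_human = run_humanity_checks(evidence)
    evidence.clear()  # the 'forgetful fairy': the HOW is erased
    token = hashlib.sha256(secrets.token_bytes(32)).hexdigest()
    return {"is_human": is_human, "attestation_id": token}

msg = attest({"voice_check": True, "captcha": True})
print(msg["is_human"])       # True -- the only substantive bit sent
print("voice_check" in msg)  # False -- raw evidence never leaves
```

The message contains nothing that ties the verdict to the method used, which is exactly the binary yes/no property argued for above.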


This is the point, though. A bot would need to be Turing-test capable (and none can pass a Turing test) to do this. What I have been talking about is a system where the network can state: this is a human, I cannot tell which one, but I can tell it’s a human and they have no account. Then they can create an account.

There are many ways to ensure this is completely private. It is also possible for this to be a seal the user can use at will to prove any of his/her IDs are linked to a real human (this is the operator’s choosing, but I cannot see a problem with it). In that way, participation in community efforts can be considered, i.e. voting and ranking. I see the ability to participate in events where not everyone knows everyone as important.

In terms of communicating with others, I do not think it matters; the person you communicate with will be very good at spotting a bot. This is one thing we have over computers right now: we can spot them easily.


I should add, this is not to spot bad or good humans, only the fact that it is a unique human.


Ok, I’m going to ask a stupid question. Do we really need to know if a user is human or not? Do we really need the systems that would require it? Let’s face it, democracy is dead thanks to lobbying (and it never was that great a system anyway: tyranny of the majority and all that). What exactly do we need a ranking system for? Or perhaps a better thought: why couldn’t we create a ranking system that one could opt into, one that required you to validate that you were human? Instead of making the whole maidsafe system figure out whether you’re human, just make a ranking-system app, or a “check if you’re human” app. That way, if you want to gain reputation for an ID you can, but you don’t need to gain it for all your IDs. And a reputation app could be expanded so you could record reputation of different kinds and on different identities.