Proof of unique human


That could depend on how the ‘tree’ is represented on screen. Coupled with a time factor, it might make it hard enough to do what’s required.
Voices can be sampled too, so it doesn’t look like there’s a simple solution to this problem.


To clarify I mean ‘human to human’ real time video chat/interview.

The Turing test has already shown this could be gamed by bots in a text format, so video (or, less securely, just audio) would be required.

I’m not so sure. Even current AI technology could recognise many representations of trees: drawings, paintings, photos, icons, etc.


It occurs to me that even if we come up with a proof-of-unique-human method, we would need to make sure that any free resources granted to a proven unique human are worth less than the cost of, say, paying someone in the third world with virtually no internet access to create an account and hand over its credentials.


Another point which just occurred to me: to the extent that we are going to have DOs and DAOs on the network, it may be desirable, if not necessary (for accountability purposes), to have accounts for the agents/employees of those DOs which are separate and distinct from the agents’ personal accounts.


I was pondering this question:


Those who provide, are credited. Instead of voting, we do accounting, via safecoin.


Here’s why I think adding human verification is a bad idea. This assumes that the MAID would be verified, and that MPIDs would not. I read all the comments, but some of this is bound to have been covered already. Sorry for the lack of proof-reading - this has become a stream of meaningless words to me…

  • In General
    • A lot of effort is being put into making the network untrickable, and it’s not feasible to show that a unique-human parameter is untrickable.
    • If being a unique human becomes valuable, then unaware humans might get extorted by people looking to gain something from the network.
  • Free Space
    • There’s no way the network can compete with centralised storage in terms of free space offered.
    • People will want to use the network for the privacy, freedom etc, even if they don’t get free space.
    • If it’s an insignificant amount of free space, it won’t satisfy people or lure them in, and if it is significant, then serious farmers will use a branch that doesn’t ‘tax’ them.
    • People will require increasing amounts of space - HD cameras will see to this.
  • Multi-accounting
    • Other than the farms having downtime while farmers switch between accounts, I can’t see how this would be a bad thing. The downtime would reduce their profits too, so there’s the incentive to keep an efficient farm.
  • Biometrics
    • Fingers can be severed, people coerced, voices lost, etc. There’s no sensible way to do this in a way that works for all humans.
    • Encouraging biometrics is a big ethical problem. Even if someone comes up with a brilliant open-source biometric device, encouraging it would increase acceptance of biometrics in general, so closed-source, back-doored biometric devices would pop up. Also, I doubt it’s possible to do this in an open way without it being fakeable by bots. It would also provide justification for putting biometrics in PCs.
    • The idea of using multiple biometrics to add to a verification score is bad, because that would mean all relevant biometrics would have to be secure.
    • I’d want to be sure that I’d be unable to access my information if I were to lose my memory.
    • People with multiple personalities would be unable to use the network normally.
  • Peer-reviewed Humans
    • This would exclude people who don’t communicate with others, and people who only speak obscure languages.
    • The approach of using certified offices is way too centralised, and people would use a branch of the project instead.
  • Using Ownership Of Mobile Phones As Proof
    • This would block verification for everyone without access to a mobile, and for people like Richard Stallman who refuse to use mobiles.
    • Number allocations can be controlled by governments and corporations.
    • This would mandate non-free software, as most users of mobile phones are legally prevented from altering the baseband processor firmware - especially dangerous because baseband processors can sometimes override other CPUs.
  • Assumed Benefits Of Verification
    • I can’t think of a case where a 1-vote-per-person policy would work on the network.
    • Other open-source decentralised networks have risen without this in place.
    • Making the network exclusive would alienate some people. The exclusive aspect would be redundant anyway if it becomes as essential as the current internet is.
    • I don’t mind interacting with bots - their spam is less effective than spam created by humans.
  • Other Problems
    • People would no longer be able to plausibly deny having a MAID, because someone could force them to undergo the unique-human verification process. A similar situation could emerge with governments requiring that people log into government computers with a verified MAID.
    • Considering the above notes on biometrics, some people would opt for a master password, and losing it would mean they’d never be able to use a verified account of their own again.
    • Seals are not humans, so allowing seals to be verified is a serious flaw.
  • Potential Solutions
    • Make this an opt-in layer built around (and not merged into) the network.
    • Make contributing to the nodes’ verification efforts opt-in - if I couldn’t contribute space to the network without also verifying humans, I wouldn’t contribute space to the network.
    • Give MPIDs the option to ‘trust’ other MPIDs. If people ‘trust’ all of their friends, then it would be easy to check how far removed two MPIDs are, and to verify that none of the trust-links in between are bots. On forums, it would be possible to filter spam by only showing messages posted by MPIDs a set number of trust-links away from an admin. This would also work for ‘distrust’. I don’t think this would need to be built into the network.
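The trust-link idea above amounts to a shortest-path search over a graph of MPIDs. A minimal sketch, assuming a hypothetical `trust` mapping from each MPID to the set of MPIDs it has marked as trusted (none of this is a real network API):

```python
from collections import deque

def trust_distance(trust, start, target):
    """Shortest number of trust-links from `start` to `target` (BFS).

    `trust` maps each MPID to the set of MPIDs it trusts.
    Returns None if `target` is not reachable through trust-links.
    """
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        mpid, dist = queue.popleft()
        for friend in trust.get(mpid, ()):
            if friend == target:
                return dist + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None

# A forum could then show only posts from MPIDs within N links of an admin:
web = {"admin": {"alice"}, "alice": {"bob"}, "bob": {"mallory"}}
print(trust_distance(web, "admin", "bob"))      # 2
print(trust_distance(web, "admin", "stranger")) # None
```

Because each link was placed by a human, a short distance means every hop in between was vouched for by someone; a cutoff of N links is then exactly the spam filter described above.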


Very useful post @to7m thank you.

I see all things like this as opt-in, with each adding an orthogonal metric to a multi-part trust vector. Users, apps, DAOs etc. can optionally provide or use the vector elements they want, or combine selected elements to create new vectors, for private use or publication - risk assessment as a service, for example, to help answer: should I trust X to do Y?
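One way to picture the multi-part trust vector: each opt-in verification contributes one element, and a consumer combines only the elements it cares about with its own weights. A toy sketch - the metric names and scoring are invented here, not anything from the network:

```python
# Hypothetical trust vector: each key is one opt-in metric, scored in [0, 1].
vector = {"captcha": 1.0, "video_turing": 0.8, "web_of_trust": 0.6}

def combine(vector, weights):
    """Weighted score over only the metrics this consumer cares about.

    Metrics absent from the vector count as 0.0 (simply unverified).
    """
    total = sum(weights.values())
    return sum(weights[k] * vector.get(k, 0.0) for k in weights) / total

# A voting app might weigh video verification twice as heavily as web-of-trust:
score = combine(vector, {"video_turing": 2.0, "web_of_trust": 1.0})
print(round(score, 2))  # 0.73
```

The point of keeping the elements orthogonal is visible here: a different app can reuse the same vector with entirely different weights, without any metric being mandatory.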


In that case, each verification process could have its own list of trusted MPIDs, and that should suffice.
Example: a government could request one MPID from each citizen, and then list the MPIDs publicly (with or without real names), and people/services would then be able to determine whether the government views a particular ID as a person. Actually, that might be useful for petitions…


I’d just like to note in this discussion about “trust” that trust is variable, subjective and usually applied by a matter of degrees; it is not a binary thing, as the conversation so far seems to imply. Trust is more like colour than like having the lights switched on or off. Hey, now there’s a thought: a trust system based on colours. I mean, we can assign colours numerical values via hex codes and such, and assign those values different meanings and categories. Then combine those colours and levels graphically OR numerically. And there is an extremely wide spectrum of colours, so there’d be plenty of pigments to assign to trust categories. I know this plays more into a reputation system or whatnot, but I just had the thought now, so yeah.
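The colour idea can be made concrete by treating each trust category as one colour channel, so a combined trust “pigment” is just an RGB hex code. A toy sketch - the category-to-channel mapping is invented here purely for illustration:

```python
def trust_colour(identity=0.0, competence=0.0, honesty=0.0):
    """Pack three trust categories (each 0.0-1.0) into one hex colour.

    Toy mapping: identity -> red, competence -> green, honesty -> blue.
    """
    to_byte = lambda x: round(max(0.0, min(1.0, x)) * 255)
    return "#{:02x}{:02x}{:02x}".format(
        to_byte(identity), to_byte(competence), to_byte(honesty))

print(trust_colour(identity=1.0, honesty=0.5))  # "#ff0080"
```

Since the result is a number, it can be compared or averaged numerically, and since it is a colour, it can be shown at a glance graphically - which is exactly the dual use described above.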


New post: Authentication: FIDO Support Looks Important


Really excellent post.

As a note, it won’t even be necessary to have down time on the vault, as the user account is separate and can be switched between on the same vault. In other words, anyone can log onto the network on any “user account” on any machine running a vault.

This thread overall was relevant when considering how to keep people from gaming the network with multiple free-resource accounts. With the account creation/activation scheme which now seems the best solution, there are no free-resource accounts, so there’s no need for “proof of unique human” at a core level. (Hurray! I never liked the idea from the start.)


I think you guys are onto something with a reputation system giving proof of a unique human over time.

It’s like a system where people can fulfil basic requirements over time to earn ‘unique human’ status and gain community recognition.

Just thoughts


SlickLogin acquired by Google

No complex interactions. Just place your phone next to your laptop/tablet and you can login.

Users love it
Streamline your authentication process to boost user engagement and customer retention.

Unique patent-pending technologies and a fortified protocol model enable military-grade security.


I like the idea of different levels of opt in proof granting different privileges: captcha, voice, etc.

This would allow the use of really powerful systems like anonymous, government-ID-protected proof:

Have a trusted third party receive a minipayment (with a password as the message) to their bank account. This party takes the password and presents it to the network. Then the user submits the password with the proof request. The network queries its database and checks whether the trusted party has submitted the same password.

This way the trusted third party only knows the name of a person who has wanted to authenticate to safe network. They would not know the resulting user id that gets granted the proof. The network never knows the name of the user. So there really is no information leaking except “a person named john smith wanted to be verified on the safe network”, and this only to the third party.

This would ensure that if you had a “verified by [third party]” badge on your account, then you are most probably a unique human - as long as the trusted third party really is trusted. They could be a certificate authority, a company or a nonprofit organization (even an official entity, making it an official badge that allows voting in regional/national votes). It would be up to the developers/users to weigh how much they trust different third parties’ verifications.

For this to work there needs to be some method of adding and removing trusted third parties as time goes on. Perhaps voting is the answer to this? And since this inherently involves trust, perhaps it is not something that should go into core MaidSafe, but should be implemented at a higher level.
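The flow above can be sketched end to end. The network only ever matches passwords, so it can confirm “the trusted party vouched for whoever knows this password” without learning a name; the classes and method names here are hypothetical, not real SAFE Network APIs:

```python
import hashlib
import secrets

def digest(password):
    # Store only a hash so the network never holds the raw password.
    return hashlib.sha256(password.encode()).hexdigest()

class Network:
    def __init__(self):
        self.vouched = {}  # password digest -> third party that submitted it

    def third_party_submit(self, party, password):
        # The third party, having received the bank minipayment, forwards
        # the payment-message password (but not the payer's name).
        self.vouched[digest(password)] = party

    def claim_badge(self, user_id, password):
        # The user proves they made the payment by knowing the password;
        # the badge attaches to the anonymous user_id, not a name.
        party = self.vouched.pop(digest(password), None)
        return f"verified by {party}" if party else None

# The user picks a random password and sends it as the payment message.
password = secrets.token_hex(8)
net = Network()
net.third_party_submit("ExampleCA", password)
print(net.claim_badge("anon-mpid-42", password))  # verified by ExampleCA
print(net.claim_badge("anon-mpid-43", password))  # None (one badge per payment)
```

Note how the two halves of the information stay separated, as described above: the third party sees a bank-account name but never the MPID, and the network sees the MPID but never the name.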

  1. A user creates a private and public key and selects an e-mail address.
  2. They retain their private key and attach their public key and e-mail to a unique identity profile (containing their full name, home address, phone number, birth certificate, social security, medical insurance number, and so on).
  3. This profile is deposited online in an encrypted vault that links their e-mail address, identity profile, and public key.
  4. Different bits of this profile are then analyzed by random individuals on the network who confirm the identity.
  5. After the profile is validated by enough users it is automatically sent to an anonymous registration system.
  6. This system re-encrypts the profile using the public key and produces a new secret key and private account.
  7. This new information is then sent directly to the individual’s e-mail address.
  8. The individual combines the private, public, and secret keys to decrypt their profile, obtaining their anonymous login data and unique privileges to the network.
  9. At this point the user gains total control over their profile and can represent themselves however they wish.
  10. One might also imagine an autonomous public auditing system that crawls all encrypted profiles to search for duplicates (using a master key generated by the network itself).

Nobody other than the user will have access to the private e-mail or the private and secret keys. Nobody can manipulate the verification process or link it back to the anonymous profile. The network serves as a randomising and blinding mechanism on top of existing state-based identity registration systems. It is a low-barrier-of-entry solution. Of course, you can make this more complicated by adding all the methods discussed in this thread, but I don’t think that’s necessary. Thoughts?
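Step 10’s duplicate audit could work without decrypting anything if each profile also stores a keyed fingerprint of its identity fields: two registrations of the same person produce equal fingerprints, yet the fingerprint reveals nothing without the network’s master key. A sketch - the choice of fields and the use of HMAC are my assumptions, not part of the proposal:

```python
import hashlib
import hmac

MASTER_KEY = b"network-generated master key"  # held by the network, not users

def identity_fingerprint(profile):
    """Keyed fingerprint over the fields that must be unique per human.

    Fields are canonicalised (stripped, lower-cased, key-sorted) so that
    formatting differences cannot hide a duplicate registration.
    """
    canonical = "|".join(
        f"{k}={str(profile[k]).strip().lower()}"
        for k in sorted(("full_name", "birth_certificate", "social_security")))
    return hmac.new(MASTER_KEY, canonical.encode(), hashlib.sha256).hexdigest()

def find_duplicates(profiles):
    """Return pairs of profiles whose identity fingerprints collide."""
    seen = {}
    dupes = []
    for p in profiles:
        tag = identity_fingerprint(p)
        if tag in seen:
            dupes.append((seen[tag], p))
        seen.setdefault(tag, p)
    return dupes

a = {"full_name": "John Smith", "birth_certificate": "BC1", "social_security": "123"}
b = {"full_name": "JOHN SMITH ", "birth_certificate": "BC1", "social_security": "123"}
print(len(find_duplicates([a, b])))  # 1
```

The keyed hash is what makes the audit "autonomous": the crawler can compare fingerprints across all encrypted profiles without ever reconstructing anyone’s identity data.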


Kind of half-baked idea, but…

I could possibly see a way of doing this using some mixture of PageRank-style reputation and Self-Organizing Maps, but real humans would be required. You would start off with an undifferentiated network; then, when two people want to “log in”, another random live human is pulled in and poses a restricted set of creative challenges (‘sing this phrase’, etc.). Each of the challenged parties has a unique pubkey. If the ‘tester’ human is satisfied, he can vouch that (a) these folks are human and, more relevantly, (b) how different he thinks each human is from the other. Basically, the human interaction would change the weighting of the points/nodes on the map, and it would self-organize into clusters of nodes that are ‘more’ and ‘less’ alike. Clusters that are farther apart would have had more human tests indicating one person is different from another. These validations could be cryptographically signed in some fashion. So, eventually, you’d be able to tell that the person associated with one node/key is pretty different from another (on the map). The more discrete the clusters, the more granularity you could have with uniqueness.

A traditional SOM might not necessarily work due to the desired level of discretization, but some variation on it might.


Not sure this would fit the bill here, but interesting. Linked to your cell phone, so has some fallibility.


It used to be…

now we have de-centralised trust…and it’s definitely a binary thing…lol

Live 'person to person' video stream Turing test for 'Proof of unique human' for voting

most interesting captcha seen in a while; from