I like the idea of different levels of opt-in proof granting different privileges: captcha, voice, etc.
This would allow the use of really powerful systems, like anonymous, government-ID-protected proof:
Have a trusted third party receive a minipayment (with a password as the message) to their bank account. This party takes the password and presents it to the network. The user then submits the password with their proof request, and the network queries its database to see whether the trusted party has submitted the same password.
This way the trusted third party only knows the name of a person who wanted to authenticate to the SAFE network. They would not know the resulting user ID that gets granted the proof, and the network never knows the name of the user. So there really is no information leaking except “a person named John Smith wanted to be verified on the SAFE network”, and that only to the third party.
This would ensure that if you had a “verified by ” badge on your account, then most probably you are a unique human, as long as the trusted third party really is trusted. They could be a certificate authority, a company, or a nonprofit organization (even official entities, making it an official badge that allows voting in regional/national votes). It would be up to developers and users to weigh how much they trust different third parties’ verifications.
For this to work there needs to be some method to add and remove trusted third parties as time goes on. Perhaps voting is the answer? And since this inherently involves trust, perhaps it is not something that should go into core MaidSafe, but be implemented at a higher level.
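The password-matching step above can be sketched as a tiny registry. All names here are illustrative, and a real deployment would hash the password before it ever touches the network:

```python
import hashlib


class VerificationRegistry:
    """Toy model of the network-side database (illustrative only)."""

    def __init__(self):
        self._pending = set()  # digests deposited by trusted third parties

    def third_party_submit(self, password: str) -> None:
        # The trusted party forwards only a digest of the payment message,
        # so the network never stores the raw password.
        self._pending.add(hashlib.sha256(password.encode()).hexdigest())

    def user_claim(self, password: str) -> bool:
        # The user later presents the same password with their proof request.
        digest = hashlib.sha256(password.encode()).hexdigest()
        if digest in self._pending:
            self._pending.discard(digest)  # one-time use
            return True
        return False


registry = VerificationRegistry()
registry.third_party_submit("hunter2")
claimed = registry.user_claim("hunter2")      # first claim succeeds
replayed = registry.user_claim("hunter2")     # replay fails
```

A real system would also need to expire unclaimed deposits and rate-limit guesses, but the flow (the third party deposits a digest, the user redeems it once) is the whole idea.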
A user creates a private and public key and selects an e-mail address.
They retain their private key and attach their public key and e-mail to a unique identity profile (containing their full name, home address, phone number, birth certificate, social security, medical insurance number, and so on).
This profile is deposited online in an encrypted vault that links their e-mail address, identity profile, and public key.
Different bits of this profile are then analyzed by random individuals on the network who confirm the identity.
After the profile is validated by enough users it is automatically sent to an anonymous registration system.
This system re-encrypts the profile using the public key and produces a new secret key and private account.
This new information is then sent directly to the individual’s e-mail address.
The individual combines the private, public, and secret keys to decrypt their profile, obtaining their anonymous login data and unique privileges to the network.
At this point the user gains total control over their profile and can represent themselves however they wish.
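The layered-encryption flow above can be sketched roughly as follows. This is a toy: a deterministic keystream stands in for real public-key cryptography, and nothing here is secure as written; it only shows how combining the keys peels off both layers:

```python
import hashlib
import secrets


def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher for illustration only -- NOT secure.
    # Applying it twice with the same key restores the input.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))


# 1. The user generates a key pair (modelled here as random tokens).
private_key = secrets.token_bytes(32)
public_key = hashlib.sha256(private_key).digest()

# 2. The identity profile is deposited encrypted under the public key.
profile = b"full name, home address, phone number, ..."
vault_blob = xor_cipher(profile, public_key)

# 3. After validation, the registration system re-encrypts the profile
#    with a fresh secret key and sends the result to the user's e-mail.
secret_key = secrets.token_bytes(32)
account_blob = xor_cipher(vault_blob, secret_key)

# 4. The user combines the keys to remove both layers and recover
#    their profile and anonymous login data.
recovered = xor_cipher(xor_cipher(account_blob, secret_key), public_key)
```

The point of the two layers is that neither the vault nor the registration system alone can read the profile; only the holder of both keys can.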
One might also imagine an autonomous public auditing system that crawls all encrypted profiles to search for duplicates (using a master key generated by the network itself).
Nobody other than the user will have access to the private e-mail or the private and secret keys. Nobody can manipulate the verification process or link it back to the anonymous profile. The network serves as a randomizing and blinding mechanism on top of existing state-based identity registration systems. It is a low-barrier-to-entry solution. Of course, you could make this more complicated by adding all the methods discussed in this thread, but I don’t think it’s necessary. Thoughts?
I could possibly see a way of doing this using some mixture of PageRank-style reputation and self-organizing maps (SOMs), but real humans would be required. You would start off with an undifferentiated network; then, when two people want to “log in”, another random live human is pulled in and challenges them with some restricted set of creative challenges (“sing this phrase”, etc.). Each of the challengers has a unique pubkey.

If the “tester” human is satisfied, he can vouch that (a) these folks are human and, more relevantly, (b) how different he thinks each human is from the other. Basically, the human interaction would change the weighting of the points/nodes on the map, and it would self-organize into clusters of nodes that are more and less alike. Clusters that are farther apart would have had more human tests indicating one person is different from another. These validations could be cryptographically signed in some fashion.

So, eventually, you’d be able to tell that the person associated with one node/key is pretty different from another (on the map). The more discrete the clusters, the more granularity you could have with uniqueness.
A traditional SOM might not necessarily work due to the desired level of discretization, but some variation on it might.
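One way to picture the vouch-driven map: treat each pubkey as a point and let every human verdict nudge the two tested points apart (“different”) or together (“alike”). This toy sketch assumes a 2-D layout and made-up update rules, not an actual SOM implementation:

```python
import random


class HumanMap:
    """Toy self-organizing layout driven by signed human vouches.
    All names and the update rule are invented for illustration."""

    def __init__(self, lr: float = 0.1):
        self.pos = {}  # pubkey -> (x, y) position on the map
        self.lr = lr   # how strongly one vouch moves the points

    def add_user(self, pubkey: str) -> None:
        self.pos[pubkey] = (random.random(), random.random())

    def vouch(self, a: str, b: str, different: bool) -> None:
        # A tester's verdict pushes the pair apart or pulls them together.
        (ax, ay), (bx, by) = self.pos[a], self.pos[b]
        dx, dy = bx - ax, by - ay
        sign = 1.0 if different else -1.0
        self.pos[a] = (ax - sign * self.lr * dx, ay - sign * self.lr * dy)
        self.pos[b] = (bx + sign * self.lr * dx, by + sign * self.lr * dy)

    def distance(self, a: str, b: str) -> float:
        (ax, ay), (bx, by) = self.pos[a], self.pos[b]
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
```

After repeated “different” verdicts the distance between two keys grows, so map distance becomes a crude, accumulated record of how dissimilar testers judged the people behind the keys to be.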
David, the answer to “proof of human” in my mind is very simple, and I want you to really think about it.
You use a challenge response system with the device’s accelerometer.
Really, think about it.
The human supplies no real identifying information, but their human interaction with the device is clear as day.
I see it like this: it spits out initial voice commands to get a baseline (“put device on ground”, “hold device above head”, “hold device out”), logging arm length, height, and other basic metrics.
With the initial human data, the challenge starts issuing commands (“shake up and down”, “hold still”, “move device in a figure eight”, “move device in a circle”), displaying the desired motion on screen and recording from the accelerometer (think Wiimote).
This human challenge can go on for as long as is needed, continuously telling the human new commands until a decision is made.
Maybe AI is advanced, programmers are dedicated, and the physics equations are easily understood… but mother of god, have fun writing the program that renders the device in 3D space with perfectly fluid motion from every command to every other without delay or hiccup.
The real secret sauce is in the commands given: “shake vigorously” can give an estimate of the person’s strength, which can be compared against the “hold above head” height.
You could of course add some voice challenge (count to 10 while putting arms up).
At the very least you can verify it’s a human, not necessarily a unique one (although I’m sure there are commands you could give to figure that out), but this would dramatically reduce the millions of fake accounts generated by common scripts.
Desktops are an issue but smartphones and laptops all have accelerometers now.
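A minimal sketch of the challenge loop, assuming the device hands us a list of accelerometer magnitudes per command. The command set and thresholds are invented for illustration:

```python
import random
import statistics

# Each command maps to a predicate over the recorded magnitudes.
# Real verification would compare full 3-axis traces against motion
# templates; variance is just the simplest possible stand-in.
COMMANDS = {
    "hold still":        lambda s: statistics.pstdev(s) < 0.05,
    "shake up and down": lambda s: statistics.pstdev(s) > 0.5,
}


def challenge(read_samples) -> bool:
    """Issue randomly ordered commands; read_samples(cmd) returns the
    accelerometer magnitudes recorded while the user performs cmd."""
    cmds = random.sample(list(COMMANDS), k=2)
    return all(COMMANDS[cmd](read_samples(cmd)) for cmd in cmds)


# A "human" whose readings actually differ per command passes:
def human_reader(cmd):
    return [0.0] * 10 if cmd == "hold still" else [0.0, 2.0] * 5
```

A scripted bot that replays the same flat signal for every command fails the “shake” check, which is the basic idea: the verdict comes from matching varied motion to varied instructions, not from any identifying data.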
That means Professor Hawking could not be verified as human, and neither could anyone with disabilities such as limited arm movement.
It would also give jobs to low-paid sweatshop “employees” who respond to a bot that relays the movements back to the app trying to verify a human.
There are too many ways to fool it without even hacking, such as programming a voice-activated response system. No AI is needed, and such systems are already being developed for robotic applications.
The SAFE network does this by requiring payment for an account to persist.
I think you gloss over the really difficult problem. It is easy to get lots of people in a room going through this process (just as with generating game currency, filling in online forms, etc.). The difficult part is the “unique” bit, so if you think you can solve that part, please go on. Then you will have found something no-one here has yet. I like the idea you present, but I don’t see how you can tie those responses to particular people. I’m not saying you can’t, but I think it needs to be demonstrated, or a theory of how it could work presented that is worth exploring.
Maybe someone will pick up on the basic idea, but if you think you can solve it, don’t stop there!
The reason I mention this is that the current captcha system is being bypassed by bots sending the images/puzzles out to unsuspecting users to solve (and to dedicated solvers). Thus a bot operates as a human and the proof fails. Ever been to a site asking for a captcha to be solved for access to an article, where the captcha always says you failed (to get the person to solve many)? It’s increasing, and it saves the need for AI or other recognition systems.
The movement challenge-and-response reminded me of that, and of how easily I could rig up a mechanism to fool such a system (engineering is my expertise). Easier than that captcha-fooling setup, although a tad more expensive.
I also gather that the human proof for account creation has basically been replaced by paying a coin, so I gather it is more a quest for uniqueness now.
Personally I feel that human proof and unique-human proof should be reserved for things that really need them, like voting and other apps that want humans and not bots. There is a lot of resistance to making people prove things in order to use a system unless they feel it’s necessary. The more they have to do, the less likely they are to do it.
Uniqueness and human proof are very difficult and mostly foolable because the input has to be converted to digital. I’d say the best would be a DNA analyzer that creates a hash of a person’s DNA, so personal info is kept private. Then the problems are cost, identical twins, and people who actually have two or more DNA sequences (chimerism: the case of the mother who is not the “mother”, depending on where the DNA sample is extracted from).
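The hashing step itself is the easy part; a sketch (with the caveats above: identical twins, and a chimera sampled at the same site, would produce the same digest):

```python
import hashlib


def dna_id(sequence: str) -> str:
    """Hash a DNA sequence so the raw data never leaves the device.
    Normalization is illustrative; a real analyzer would define its own
    canonical form (which loci, which strand, error correction, etc.)."""
    normalized = "".join(sequence.upper().split())
    return hashlib.sha256(normalized.encode()).hexdigest()
```

The hard parts remain exactly the ones listed: the analyzer hardware, and the fact that one digest does not map to exactly one person.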
I just read this entire 229-post thread in one fell swoop. Disclaimer: I’m new to MaidSafe.
It certainly seems that this thread needs clarification on the “is this integrated in the network or built in apps on top of it” point. For the higher-level apps, the point is moot, since the apps will do whatever they want, including their own separate ways of verifying humans if they so desire, and we can’t and shouldn’t stop them. All of the discussion about upper-level authentication, while interesting and valid, is therefore only useful as suggestions to new app developers.
So all that really matters is if PoUH should be implemented at a network level, and if so, how?
Well… it shouldn’t. Here’s what the thread has taught me:
Biometrics are fun and interesting and, as discussed, plausible. Unfortunately, they’re also never truly unique. Sharing a thumbprint with someone is rare, but it happens. It is even less likely for retinal scans, less likely still for heartbeat rhythms, and less again for DNA. Yet none of the above is perfectly unique. (To test any such idea, always apply the limit as the number of living humans approaches infinity.) Each method also has accessibility issues (blind people, fingerless people, inconsistent heartbeats, etc.). It was also mentioned that this would promote acceptance of biometrics in general, including the other proprietary/insecure/spying methods out there.
Combination of Biometrics seems like the obvious answer to the above, and many people would say it’s “unique enough” for our purposes. It comes in two forms:
Mandatory combinations, say a heartbeat, an eye, and a fingerprint, exhibit the issues discussed about being far too complex, introducing high barriers to entry; not to mention hardware cost.
Optional combinations of biometrics, like “eye scan or voice print, whichever you want!”, solve almost all of the accessibility issues but defeat the security gained. Bad actors simply go after the most exploitable method and don’t have to worry about the rest.
In either case, biometrics are obviously circumvented by physical coercion, as usual.
Defining “human” in an operational way for the network is also necessary. If the definition is no different from “a user who provides value”, which would invalidate the need for PoUH (only bad-actor elimination would be necessary), then we have no leg to stand on. For example, take the definition “rational animal”. If a non-animal with rationality comes along, then why not give it the same resources and opportunities? If an AI had the rationality of a normal human user (such that it were indistinguishable from one), does it not deserve disk space and resources to use the network as any other user can? Or are we really only concerned with value added by “animals”, for some reason? I may have just watched Blade Runner last night, but you have to admit, defining “human” is… completely arbitrary.
Voting, which should never need to be done on whole-network scale anyway, is not meant for this low level. We can use higher level layers and apps, to vote, and even then, among mutually-verified subgroups of users. So let’s rule that out.
Joining the network seems to take three schools of thought:
Join and get paid, a.k.a. free space/resources for new users
Just join, and get/give nothing, all the way to
Pay to join, such as requiring a safecoin for an account.
Each of the extremes leads user incentives astray. We don’t want to incentivize the mass creation of accounts, so giving away free space, while generous and easy, won’t work. We can still gain the benefit of this approach, though, by simply making it easy to immediately earn a first safecoin (for example). This also has the desirable effects, such as psychological value, of the “pay to join” approach, except the barrier isn’t actually there. What we don’t want to deal with is the impossible task of charging the whole world a sensitively fair amount, not to mention that telling people they need to pay for internet 2.0 is not going to go over well.
Charging nothing and giving nothing has the pros of both extremes, with none of the cons. We can simply value the coins rather than the accounts, and hey, we can even use that invitation-network-effect idea: we simply pass around “invites” with a coin attached!
Separating personal lives into multiple “accounts” seems to be objected to, on the basis that such separation is desirable and PoUH would interfere with it. I’m arguing against PoUH, but this is not a good reason. First of all, we still want everyone to have only one account, or at least I think so. I was under the impression that you could deal with the separation-of-personal-life issue via something called Personas, which aren’t as permanent/anonymous as your actual MaidSafe account. I may have the terminology backwards, but what they’re arguing for is PoUH for the account, which should suffice for all of the Personas under it. I still think everyone should have one account, and that there should be no point in having multiple (fragmentation of your stuff! multiple authentications to remember!).
This makes other processes fall into place - accounts will tend towards coinciding 1:1 with a single human because the incentives are set up that way.
Passports are used to identify unique human beings across the world. It is safe to assume that members of society have a passport. It is also safe to assume that given this form of identification, one can prove that one is not a bot.
Of course the problem is that people do not under any circumstances want to compromise their passport information.
Is it possible to generate a unique signature using a passport number as a one-off, without compromising the safety of said number, while still retaining enough cryptographic information in the network to verify this signature? Would this be illegal, or impossible, or just not secure enough? Since a lot of sites demand identification these days, it got me curious as to why this type of solution would be infeasible.