Proof of unique human

Well, I don’t favour proof of unique human, to begin with. I see no reason to exclude bots, if they earn their way with coin like everyone else.

But this particular proposal, of a biometric device attached to a person more-or-less permanently (I realize that it would not start out that way), reminds me of science fiction stories (more than one, and a movie) whose trope is an explosive neck-collar intended to keep prisoners from escaping. With such a device the rulers can dispense with walls, turning the entire planet, even the solar system, into a prison.

4 Likes

Encrypted? So what. My data to the forum is encrypted, yet I see it represented in a form I can read, and my input is unencrypted.

The data is initially unencrypted and that is where any “hack” will start. Feed the wrong data in and you encrypt the wrong data.

Good luck with that one.

At this time, and for the foreseeable future, any electronic system reading biometrics/location/etc. can be fooled into accepting faked input. One problem is the human element, and unfortunately that is exactly what you are trying to measure.


Oh, and if your system worked perfectly, then you are being tracked for every vote you make, because you have to use that ID to vote. That works against an anonymous system.

2 Likes

Thanks for clarifying. I take your points but don’t think that embedded devices or explosive collars are more than useful symbols reminding us of these dangers.

Widespread multi-mode surveillance and pattern matching (face recognition, speech, gait, places frequented, associations, etc.) make attachment of devices unnecessary - unless we manage to hide almost all of our external profile from the world (which would be just as much a prison - it's the quandary posed by The Matrix 1, where freedom came at a cost).

I think there are good reasons to want to differentiate between human and non-human (at the present time at least), but we don't need to debate those here. This thread is about how to do so, not whether it is a good idea. So back to that… :slight_smile:

Well, with projects like DeepMind, which can now beat the best human players of Go, Google Translate, and so on, we are getting to the point where machines can pass a Turing test in limited domains. If a machine can play board games, then it can do what medical specialists do and diagnose a patient from an X-ray and other lab tests, since it is all just pattern recognition. So medical data can eventually be faked in real time to create whatever impression is desired.

Here’s a gratuitous picture of me (lol) trying on one of the proposed devices. Actually, in the movie, since it is fantasy, the bracelet eventually visits poetic justice on the villain. But in real life we don’t always get such contrived endings.

2 Likes

This has been a long and arduous thread and in part very useful. Thanks to @happybeing it’s back on track. My question is this: does the principle of ‘proof of human’ actually preclude one human who needs to use the SAFE Network for multiple purposes, such as his personal use, from creating a second account for his discretionary family trust, then a third account for his family’s self-managed superannuation fund, and then a fourth account for a corporation of which he is the company secretary? I think the SAFE Network should have the means to accommodate this scenario and I’d appreciate some feedback please.

2 Likes

A good point, but then if, say, the network could identify a single human (the network being the distinction here) but never disclose that to any human, would that then be OK? I mean, the network knowing that a human has X accounts may be OK. So it may not preclude this; however, it would mean the network needing to somehow search all accounts to discover that this user has several, and that is an issue.

I suspect there is an area of research here, and I hope it gets some attention soon. I have some early ideas, but not enough to present just yet.

4 Likes

Thanks for your quick reply David. On thinking this issue through a little deeper, I guess it’s not really an issue the SAFE Network needs to concern itself with, because SAFE doesn’t store data about a completed transaction, whereas a trustee of a superannuation fund most certainly does, and moreover needs to ensure that his personal and other business matters are well separated from an accounting and banking point of view. My train of thought was towards developing a TelelinkGlobal “master wallet” that, inter alia, incorporated an accounting package - cash book, journal, profit and loss account, depreciation schedule and balance sheet in multiple currencies - SAFE Coin, TelelinkGlobal trade credits (TCs), fiat, allocated gold etc.
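Purely to make that multi-currency bookkeeping idea concrete, here is a tiny Rust sketch of journal entries kept and balanced per currency. Every type, field and currency name is my own invention for illustration; nothing here is a real TelelinkGlobal or SAFE API.

```rust
use std::collections::HashMap;

// Illustrative currency set drawn from the post above.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum Currency {
    Safecoin,
    TradeCredit,   // TelelinkGlobal TCs
    Fiat,          // e.g. AUD
    AllocatedGold, // grams
}

#[derive(Debug)]
struct JournalEntry {
    description: String,
    currency: Currency,
    debit: f64,
    credit: f64,
}

/// Each currency must balance separately before the books are considered closed.
fn balances_by_currency(journal: &[JournalEntry]) -> HashMap<Currency, f64> {
    let mut totals = HashMap::new();
    for entry in journal {
        *totals.entry(entry.currency).or_insert(0.0) += entry.debit - entry.credit;
    }
    totals
}

fn main() {
    let journal = vec![
        JournalEntry { description: "Fund contribution".into(), currency: Currency::Safecoin, debit: 100.0, credit: 0.0 },
        JournalEntry { description: "Fund contribution".into(), currency: Currency::Safecoin, debit: 0.0, credit: 100.0 },
    ];
    println!("{:?}", balances_by_currency(&journal)); // Safecoin nets to zero
}
```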

4 Likes

If the authenticator worked like this… (unfortunately this is not really open-sourced by Google, I think).

Your trust score could already be your unique proof (to the network); this would basically only be hackable by an AI, I guess.

Ideally an app accessing the SAFE Network, should be hidden on the phone so that people can’t be coerced to login.

:stuck_out_tongue:

Creepy

But not if you’re creeping onto an autonomous network

3 Likes

This is an interesting post that we shouldn’t let get lost to time. I think a way to verify that someone is human may be to look at their account and Safecoin balance. Say they need a minimum of £10 worth of Safecoin to join the network. The network would take the balance from them for a month or longer and then return their coins to them later. This would add a cost for spammers and botters creating accounts, which might help deter them.

Or make it so the user has to have had an account for a period of time (say a week) before they’re allowed to run a node or edit data on the network. However, making legit users wait might hinder adoption.
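As a rough sketch of how those two deterrents (a refundable Safecoin deposit plus a minimum account age) might combine into one gate, here is some illustrative Rust. The constants, field names and amounts are assumptions of mine, not anything from the SAFE codebase.

```rust
// Hypothetical spam-deterrent gate: refundable deposit + minimum account age.
const MIN_DEPOSIT_NANOS: u64 = 10_000_000;       // stands in for roughly £10 of Safecoin
const LOCK_PERIOD_SECS: u64 = 30 * 24 * 3600;    // deposit held for about a month
const MIN_ACCOUNT_AGE_SECS: u64 = 7 * 24 * 3600; // one week before full rights

struct Account {
    created_at: u64,        // unix timestamp of account creation
    locked_deposit: u64,    // Safecoin currently held back by the network
    deposit_locked_at: u64, // when the deposit was taken
}

impl Account {
    /// Can this account run a node or edit data yet?
    fn has_full_rights(&self, now: u64) -> bool {
        let old_enough = now.saturating_sub(self.created_at) >= MIN_ACCOUNT_AGE_SECS;
        let staked = self.locked_deposit >= MIN_DEPOSIT_NANOS;
        old_enough && staked
    }

    /// How much of the deposit can be returned once the lock period has elapsed.
    fn refundable_amount(&self, now: u64) -> u64 {
        if now.saturating_sub(self.deposit_locked_at) >= LOCK_PERIOD_SECS {
            self.locked_deposit
        } else {
            0
        }
    }
}

fn main() {
    let account = Account { created_at: 0, locked_deposit: MIN_DEPOSIT_NANOS, deposit_locked_at: 0 };
    let one_week_later = 7 * 24 * 3600;
    println!("full rights after a week: {}", account.has_full_rights(one_week_later));
    println!("refundable so far: {}", account.refundable_amount(one_week_later));
}
```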

I’m not 100% on the best thing to do, unless we start asking for credit cards and passports, but I guess that would be a big no-no :joy:.

The latest ideas are around Online Pseudonym Parties, now renamed Pseudonym Pairs since the parties are now 1-on-1. In the overall design now, if there is a problem, people can break their pair to be verified by another pair, 2-on-1. So: a flat hierarchy and a nice experience, and if there’s a problem, a small hierarchy with mob rule.

The latest ideas for how bots are kept out of the network are mentioned in the “whitepaper” that describes the complete system, overall a simpler design than the earlier ones. I have spent the past day adding mixing to it with ring signatures.
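For anyone who hasn’t read that whitepaper, here is a minimal sketch of the pairing flow as I read the description above: a 1-on-1 pair that either verifies or is broken and escalated 2-on-1 to another pair, with a nym only issued on success. All names and types are my own illustration in Rust, not the protocol’s code.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Verification {
    Pending,   // pair formed, mutual check not done yet
    Verified,  // both sides confirmed the other is a live human
    Escalated, // pair broken, the disputed member goes to another pair (2-on-1)
}

struct Pair {
    member_a: u64, // opaque per-period participant ids
    member_b: u64,
    state: Verification,
}

impl Pair {
    fn confirm(&mut self) {
        if self.state == Verification::Pending {
            self.state = Verification::Verified;
        }
    }

    fn break_pair(&mut self) {
        // Dispute: hand the contested member to another pair for a 2-on-1 check.
        self.state = Verification::Escalated;
    }

    /// Nym tokens are only issued for a verified pairing. In the real protocol
    /// each month's token would be unlinkable to earlier ones; this placeholder is not.
    fn issue_nyms(&self, period: u32) -> Option<[(u64, u32); 2]> {
        match self.state {
            Verification::Verified => Some([(self.member_a, period), (self.member_b, period)]),
            _ => None,
        }
    }
}

fn main() {
    let mut pair = Pair { member_a: 1, member_b: 2, state: Verification::Pending };
    pair.confirm();
    println!("{:?}", pair.issue_nyms(42)); // Some(...) once verified

    let mut disputed = Pair { member_a: 3, member_b: 4, state: Verification::Pending };
    disputed.break_pair();
    println!("{:?}", disputed.issue_nyms(42)); // None: must be re-verified 2-on-1
}
```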

Ethereum is the best state technology at this time, but it is very slow, so it can only support a population of around 10,000 “nyms”. BitLattice looks like the next generation after Ethereum.

5 Likes

The POI/POH question has also received a lot of attention in the blockchain UBI scene:

Among the solutions mentioned in the above article:

https://github.com/CirclesUBI/docs/blob/master/Circles.md

and standalone systems

2 Likes

Is it possible to farm without an account? If so, have people farm to collect a small amount of Safecoin and then purchase an account with the coins they collected. The price for an account could fluctuate like the farming rate. This would incentivise people to farm and help prevent spamming.

If people want an account sooner they could purchase Safecoin on the market with fiat.
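Here is a small sketch of that “farm first, then buy an account” flow. How the account price should track the farming rate isn’t specified above, so the inverse relationship below is purely my assumption, as are all the names and numbers.

```rust
struct Network {
    farming_rate: f64, // coins earned per unit of farming work, 0.0..=1.0
}

impl Network {
    /// Hypothetical account price: assumed here to move inversely to the
    /// farming rate, so the effort needed to earn an account stays roughly level.
    fn account_price(&self) -> f64 {
        let base_price = 1.0; // illustrative base, in Safecoin
        base_price / self.farming_rate.max(0.01)
    }

    /// Try to buy an account with coins already farmed (or bought with fiat).
    fn try_open_account(&self, balance: f64) -> Result<f64, f64> {
        let price = self.account_price();
        if balance >= price {
            Ok(balance - price) // change left after the purchase
        } else {
            Err(price - balance) // shortfall still to be farmed or bought
        }
    }
}

fn main() {
    let network = Network { farming_rate: 0.25 };
    match network.try_open_account(5.0) {
        Ok(change) => println!("account opened, {change} Safecoin left"),
        Err(short) => println!("keep farming, {short} Safecoin short"),
    }
}
```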

5 Likes

It is possible to farm without an account! :wink:

14 Likes

How does proof of unique human fit into the philosophy of the Safe Network? Acting as different personas at different times seems to be part of privacy to me.

Just think about a spy who meets a source. If he acted as himself (a foreigner whom the local authorities may suspect and even monitor), he would endanger not only himself but the source as well. Instead, he’ll use a disguise and act as somebody who seemingly isn’t him.

There are certainly times when it’s necessary to have proof of someone’s real identity, or at least uniqueness, but I don’t think it’s a valid concern for the Safe Network, because it’s unlikely such features could be added without leaving the door open for potential misuse. I see it a lot like encryption and government backdoors: it sounds clever until we realize that once we weaken something, it’s weak for everyone, not just for the intended purposes.

I think the opening lines of the topic set out the premise for the discussions.

Things have moved on and the original premise may not be so relevant now. What you say is correct, and it is why I think unique human is not good for the SAFE Network specifically, even if it were possible to do with absolute certainty.

Account limiting should not be such an issue now, since the chicken-and-egg issue has many potential solutions and most if not all people will be able to open accounts even if they previously had no coins. For instance: gifting, farming beforehand, buying a coin on an exchange, or some sort of sale of preloaded Safecoin addresses; Fraser’s division makes all of the above even simpler. There is also the concept of opening an account without a coin, but not saving it until a coin is paid: you set the account up, then receive a coin from a friend to the coin address that was set up and added to the account.
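A minimal sketch of that last idea (create the account, but don’t persist it until a coin arrives at its address) might look like this; the names and the `safe://` address are hypothetical, not real SAFE client code.

```rust
#[derive(Debug, PartialEq)]
enum AccountState {
    PendingPayment, // credentials chosen, coin address issued, nothing stored yet
    Active,         // first coin received (gift, purchase, prior farming), account saved
}

struct PendingAccount {
    coin_address: String, // address a friend or an exchange can pay into
    state: AccountState,
}

impl PendingAccount {
    fn new(coin_address: &str) -> Self {
        Self { coin_address: coin_address.to_string(), state: AccountState::PendingPayment }
    }

    /// Called when a payment to some address is observed on the network.
    fn on_payment(&mut self, to_address: &str) {
        if to_address == self.coin_address && self.state == AccountState::PendingPayment {
            self.state = AccountState::Active; // only now is the account persisted
        }
    }
}

fn main() {
    let mut account = PendingAccount::new("safe://example-coin-address");
    account.on_payment("safe://example-coin-address");
    assert_eq!(account.state, AccountState::Active);
    println!("account activated: {:?}", account.state);
}
```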

The Pseudonym Pairs protocol is pseudonymous: you get a new “nym” token once a month, untraceable to your previous one. So it’s not a proof of your identity (whatever that means…), just a proof-of-unique-human.

I was going to post this in the current thread, but as @neo mentioned, it is probably best not to turn that thread into a discussion about all the possible ways of doing proof of unique human. Here are my current thoughts on the subject, which someone has maybe already covered above.

I haven’t looked into this Bright ID thing yet, but I have spent a lot of time thinking about the unique human problem, because I think it could be very useful if someone can crack it (without needing KYC or losing privacy).

One kind of solution I quite like takes some of the ideas from SOLID. I really like the idea of having your data in your control, and if you have it on a social media site, you can take it with you to any other app / website. With each social media platform we use, we build up a reputation, which takes time to build and isn’t easy for a bot to simulate. If we can build on the same reputation at every forum we use, that could in time serve as a kind of near-proof that you are a unique human. It wouldn’t be 100% provable, but if someone has a 5-year account across multiple platforms and has interacted with many other people (who maybe endorsed / rated them), you can be quite sure they are a real person. Yes, they may have multiple accounts, but if that is hard / impossible to simulate (regular captchas may help too), you can be reasonably sure they don’t have lots of accounts, since building up a high reputation takes a long time and a lot of input.

So that idea doesn’t totally solve the unique human problem, but I feel it’s as close as we can get at the moment. It is kind of like the vault ageing idea, but for a person’s profile. The higher level your profile gets, the more you are trusted to be unique.

If we can then work out a way to verify that we have one of these high-ranking accounts and, say, vote on things without revealing to anyone which profile did it, that gives us some useful tools to build ideas on.
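To illustrate the reputation part of this, here is a naive Rust sketch of aggregating account age and endorsements across platforms into a threshold check. The weights and threshold are invented purely for illustration, and this says nothing about how the “vote without revealing which profile” part would work.

```rust
struct PlatformRecord {
    account_age_days: u32,
    endorsements: u32, // ratings from other (ideally also reputable) users
}

/// Very naive score: account age is hard for a bot to fake cheaply,
/// endorsements even more so, so both contribute.
fn reputation_score(records: &[PlatformRecord]) -> u32 {
    records
        .iter()
        .map(|r| r.account_age_days / 30 + r.endorsements)
        .sum()
}

/// Whether the combined profile is strong enough to treat as a probable
/// unique human (never a proof, as noted above).
fn probably_unique_human(records: &[PlatformRecord], threshold: u32) -> bool {
    reputation_score(records) >= threshold
}

fn main() {
    let profile = vec![
        PlatformRecord { account_age_days: 5 * 365, endorsements: 120 }, // old forum account
        PlatformRecord { account_age_days: 2 * 365, endorsements: 40 },  // newer platform
    ];
    println!("score = {}", reputation_score(&profile));
    println!("probably human: {}", probably_unique_human(&profile, 100));
}
```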

Can anyone see any problems with that idea?

The thought I have is that only a portion of the human population (those using the internet) even uses social media with real accounts. A bigger portion uses a mix of real and fake accounts. But a lot don’t use social media at all. And of course I am not including those who don’t use the internet at all.

So I am thinking the problem is that if these non-social-media people want to use an application, then they cannot get an ID that would be acceptable.

What about the people who barely use social media, yet inflate the member counts of the various social media sites? Then the Chinese might have a lot of difficulty, since they are not supposed to use social media outside of China, yet these people may wish to access applications on SAFE.

In my opinion such a device (let’s call it a decency chip) is not only desirable, but may be the only alternative to a surveillance society as long as its data remains under the exclusive control of the user, unless the user breaks the law, in which case the device would flag the user to the authorities.

Any entity fitted with a decency chip (a decentity) should enjoy total privacy as long as they remain within the law.
The question then is, can we ensure the laws are themselves kept reasonable and afford enough freedom to everyone? We need an evil-proof decentralised governance system for any of this to work.

A real account is one where the person is who they portray themselves to the world to be.

Fake accounts are ones where they just want to respond to something, get something, or post crap and not be known for who they actually are. Or to flame someone, or for any number of other reasons. It’s estimated that up to half or maybe more of social media accounts are fake.

The point being that Facebook claims x millions of people use Facebook, but the number of real people could be half that, and even among those real accounts there are people with multiple accounts.

Thus using social media to rate a person is only going to work for a small portion of the human population, albeit a large number in absolute terms (??500 million).