BrightID - web of trust proof of unique human


#1

We’ve had quite a few discussions wrestling with ideas of how to prove unique humanness on the SAFE Network, because it is 1) very desirable and 2) very hard.

Take a look at BrightID, which is based on a web of trust that respects anonymity outside your circle of friends, and at first glance looks as though it could sit on top of SAFE quite easily.

What’s nice about it is that it aligns well with the fundamentals of SAFE.

Seems credible to me, but I think it needs better eyes than mine to figure out if it can do the job - particularly wrt attacks! cc @neo @mav @dirvine

BrightID white paper (Google doc - contains more links at the bottom)


#2

I haven’t time tonight, but I thought I’d point out that the original idea of proof of unique human (for account creation) was “downgraded” to not being needed for that purpose, since the spending of a coin was to be used instead, and since it’s a good idea to allow each person the option of having more than one account.


#3

I agree - we should always have the option to create more than one account. People use different identities for different reasons - professional, personal, etc. - and most folks don’t like to mix them up. The ability to create multiple user accounts should be a given. Anything less will be a no-go for user adoption.


#4

Not thought seriously about the viability of this, but at first glance I like it, because it seems reasonably close to how we (used to!) interact in real life.

I don’t think this sort of thing is necessarily against the fundamentals of the network, and could in fact strongly help to enable some of those fundamentals, but perhaps point 2) should be ‘very hard within the fundamentals of SAFE.’

Depending on what problems it is being used to solve, there is surely no reason why one identity couldn’t have multiple (but a limited number of) accounts, or at least IDs.

Practicalities notwithstanding, I’d actually be as interested in seeing something like this on the vault side as on the client side, or a connection between the two. From a purely economic point of view I think it’s going to be very difficult to ensure that provision of vaults remains decentralised. As has been seen with Bitcoin, it doesn’t take long for small economies of scale to translate into massive structural shifts.

As I see it, a technology like SAFE cannot be wholly neutral in its application, and within a society I think freedom probably has to come with at least a modicum of responsibility, upsetting as that may be to all of us! In the BrightID case I suppose that responsibility is just to participate sufficiently to be verified as human by some (any) other humans, which would perhaps exclude some on the very margins of society, but much less so than the current centralised systems of governance and economics.


#5

My reply is not directed at your post but just thoughts that arose from reading it.

Why would limiting the number of IDs that a person can use be seen as a benefit?

If I am doing hundreds of online purchases in a given period (a month, perhaps) over the SAFE Network, why would I be limited in the number of IDs I can use? I might be someone who wants to be as anonymous as possible, so I use a different one each time. Just for online purchases I could be using many thousands of IDs per year. Then I might want different IDs for each forum I visit, each interaction with others, and so on. And there are uses for different IDs that we haven’t even thought of yet. Games might use an ID for each and every object used - thousands of object types and hundreds of thousands of players.

And then, to make it even harder: IDs can be generated without SAFE - they are just a key pair.
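To illustrate that point, here is a toy sketch (not SAFE’s actual key scheme - a real system would use something like Ed25519) showing that minting an ID needs nothing from the network:

```python
import hashlib
import secrets

def new_id():
    # Toy illustration only: the private key is random bytes, and the
    # "public key" is derived from it by hashing. A real system would use
    # a proper signature scheme such as Ed25519.
    private_key = secrets.token_bytes(32)
    public_key = hashlib.sha256(private_key).hexdigest()
    return private_key, public_key

# Nothing network-side is consulted, so a person can mint as many
# distinct IDs as they like, as fast as they like.
ids = [new_id()[1] for _ in range(5)]
print(len(set(ids)))  # 5 distinct IDs, generated instantly
```

Any limit enforced at the account level therefore says nothing about how many raw key pairs a person holds.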

What do you think the limit on IDs should be?

Now to accounts. What if my young (grand)kids are made to use a different account for each game they play online? How many accounts would you limit a person to? What about a business?

I do not think we can limit either accounts or IDs, since we have no concept of the amazing applications or features the SAFE Network will bring to the world. Imagine the internet in 1995 and thinking, “let’s limit this or that, because who could ever use this many downloads, or this much of something else?”


#6

BrightID

  • To help users become verified as a unique individual.
  • To connect users to applications that offer benefits for being verified as unique.

I think this actually says it all as far as the motivation for the idea goes. (Not achievable without loss of anonymity - see below.)

Also, from reading it there seems to be a push to have this accepted by authorities, judging by some of the use cases they suggest (for example, welfare payments and voting in elections).

In some respects this is anti-anonymity, even if they suggest it allows anonymity.

For SAFE there will surely be applications that want some form of uniqueness for the accounts opened for that application. And there have been suggestions of a web of trust or similar so that SAFE apps can have confidence in trusting an ID. In those cases maybe a form of BrightID might help. Basically this is similar to the web-of-trust ideas that others have talked about here from time to time over the years.

I think that once SAFE becomes global, these forms of ID management will have the black-market problem that national IDs (and other IDs) have. How many under-18s have a fake ID (or licence) saying they’re 18 in order to get into nightclubs? Not hard to get, from what I hear.

So for the digital world, once there is enough money to be made from falsified IDs, it’ll be done at scale. If a black-market operation has thousands of operatives, then producing fake IDs with reasonably high scores will be easy enough. Even if you went further than BrightID does and had people actually verify the person, the black market could basically be verifying themselves and pumping out IDs with good scores at nearly whatever rate the market demands.

And therein lies the problem, new applications cannot really rely on any score since it could be fake, but needs to rely on the score gained in using the application.

Unless they start doing KYC verification, it’s just way too easy to fake good-standing IDs.

Also, it sounds like that may be the route they are taking to get government approval for issuing IDs usable by citizens. Mind you, I do not think governments would use them.


#7

The point was really in reply to the suggestion that there is no scope for giving a verified user more than one account or ID. The only reason I suggested any idea of limitations was that I was thinking more from the vault side, as I mentioned later. Limiting the number of vaults one could operate might be seen as beneficial, in the scenario where a vault provider is trying to game the rewards system (or attack the network) by operating many virtual machines instead of one large vault (based on the assumption that rewards tail off for larger vault sizes.)


#8

Although on the vault side there might be a benefit, somehow I doubt it can be achieved without KYC being used. And even then, that can be bypassed by illegal means.


#9

But isn’t that the whole point of the unique-human discussion? To try to work out a way to solve it without KYC and without giving up any privacy?


#10

There is already a discussion of that in its own topic, and basically it cannot be done reliably.

This topic is more about BrightID and whether it is of any use to SAFE. So I just used the outcomes of the unique-human topic.

If you want to come up with a solution to unique human in this topic, then I fear it will become a long topic without ever discussing the merits of BrightID.


#11

Yeah, if something can be undermined it’s worse than not having it in the first place.

Just to be clear though, I imagine proof of unique human is more appropriate for what I was suggesting than KYC because the idea would not so much be to root out malicious actors as such, as to limit their capability, or slow them down to such an extent that everyone else can outnumber them.

What I missed from my second post was that the idea of how proof of unique human could be beneficial was as much in response to my first point about economies of scale as to anyone being deliberately malicious. Some of the biggest problems of accumulation of power we currently have, particularly in the tech world, are simply from some companies getting ahead through accumulation of capital, whether that is in the form of knowledge, physical infrastructure or whatever else. Without wanting to sound too Marxist about it, this quickly and completely makes a mockery of any notions of freedom or free markets, and I think the traditional crypto libertarian arguments either wilfully or accidentally miss this point.

In order to maintain freedom in a healthy society, then some freedoms must be limited. Obviously taken to extremes this idea can be used to justify all kinds of authoritarian nonsense, but I hope it’s clear that’s not what I’m advocating.


#12

I fear that with the foreseeable technology KYC is basically the only thing that can hope to come close.


#13

I didn’t anticipate that there would be a big debate over limiting IDs, rather I was wanting a discussion about the approach used by BrightID and whether it can be integrated.

Maybe we should split the former discussion into its own topic @neo as it is not really about BrightID, how BrightID works, or how it might be adapted or integrated with SAFE (should anyone find a use for that).


#14

Regarding the attacks, the section “Avoiding a Sybil Attack” only describes how to avoid collisions (ie multiple identical IDs), not how to prevent multiple accounts from the same user. So they haven’t mitigated Sybil attacks. The technical competence seems pretty low, using a salt that’s “the unique name of the requesting application encrypted with the user’s private key” - encryption would be done using the public key, not the private key (applying the private key is signing, not encrypting).

They seem to intend using social groups and web-of-trust style verification to ensure only one account per user. This would probably work for false negatives (ie it will probably be very inclusive of real people since we’re all surprisingly well interconnected) but I really doubt it will work for false positives (ie Sybil attacks).

Their motive for proof of unique human is interesting. “everyone everywhere should have the benefits that come from being verified as a unique person. These benefits include: greater access to money, greater choice in government, and greater access to honest information.” This quickly becomes very philosophical so I’ll leave it to a more suitable medium. It’s a really complex topic…

I was recently in India where they’ve implemented a national identity system called Aadhaar and having heard about the problems first hand from people directly affected by it I’d say BrightID is not understanding their target market at all.


#15

Hi, all! :wave:

Founder of BrightID here. I think we have a good solution to the one-account-per-person problem, if that is indeed the problem you’re solving–you seem split on whether that’s desirable.

If anonymity is also desirable, that depends on how trustworthy the different components are. For instance, we have built a solution for Aragon.org that allows a user to link their BrightID score to an Ethereum address in a certain “context.” (The “context” would usually be a DAO, but it could also be Aragon-wide.) The public key isn’t visible on the Ethereum blockchain at all - only the score mapped to an Ethereum address in the smart contract. This bypasses the special requirements for storing public keys in a database (“Method I” in the whitepaper), which is nice, because organizations will fail at this. People will also fail each other - they will connect to other people who will out them. The catch with the blockchain method (“Method II”) is that BrightID nodes must be found which are trusted by both the app (e.g. Aragon) and the user to generate an accurate score, but the win is that there is no public key in the public record (the blockchain), so there is no chance for the user to be outed that way.

It’s interesting that @mav says that it should work well for false-negatives but not false-positives. I have the opposite feeling. The point of the beta launch (launched yesterday) in my mind is mostly to see how inclusive this solution can be.

And a tiny note: in safeguarding BrightID public keys in an app database, the salting works differently than you’re used to - I added this to the whitepaper just now (thanks for the feedback): the salt is generated by the user and sent to the app.
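A minimal sketch of what a user-supplied salt buys you, assuming the app stores only a salted hash of the public key (the function and names here are illustrative, not BrightID’s actual code - the whitepaper’s “Method I” is the authoritative description):

```python
import hashlib
import secrets

def stored_token(public_key: bytes, salt: bytes) -> str:
    # The app database keeps only this salted hash, never the raw public
    # key, so a database leak alone can't link the user across apps.
    return hashlib.sha256(salt + public_key).hexdigest()

user_pk = b"example-public-key"
salt = secrets.token_bytes(16)  # generated by the user, sent to the app

token = stored_token(user_pk, salt)

# The same key salted for a second app yields an unlinkable token.
other = stored_token(user_pk, secrets.token_bytes(16))
print(token != other)  # True
```

Because the salt comes from the user rather than being derived from key material, the app never has to handle anything it could misuse.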

And @mav could you elaborate on the problems you’d heard about Aadhaar? This is very interesting to me.


#16

@adamstallard, really pleased to see you here, and I think BrightID’s work is fascinating (rereading my prior post, it came across more critically than I intended).

One of the problems with aadhaar I heard about was that it was not uncommon to fail the identity check. The fingerprint and retina scanners worked at first, but after some time (a year or more) didn’t scan correctly. Whether this is a problem with the hardware, the biometric algorithm, or the person themselves changing is unclear, but in the end it didn’t work. Fortunately this is an aspect which can be improved with better technology, so failures should be lower in the future. WRT brightid, I think the technique for verifying identity needs to be robust, and public key crypto is probably a robust way to achieve it, so long as key management is, well, manageable.

This failure of technology for identification is itself not so bad. However, all other mechanisms for verifying identity also failed (I think these include presenting an aadhaar card, using their phone app, not totally sure). These people are now ‘not people’ according to the aadhaar system. That’s pretty confronting, especially if it happens at the pharmacy being denied medicine. I think brightid needs to be conscious that not everyone will be good at managing public keys, and not all vendors or customers will be good at recovering from failures.

Which leads into the next point: aadhaar is voluntary in theory, but in practice certain things are not practical without it (opening a bank account, some medical services, running a business, getting a SIM card). The issue I take with this is that central identity services can’t maintain the promises they start with (in this case, the promise to be voluntary) and are prone to erosion of privacy and security (whether intentional or unintentional). This is at the whim of the authority that manages the system (and the processes of vendors who utilize the identity for services), which will change over time. Hopefully brightid doesn’t face this problem, since it’s a distributed p2p system, so pushing dubious changes should not be as easy. But it does raise an important question of ‘how are changes introduced’, and what recourse people have if they don’t agree with changes to how their identity data may be used. This starts getting into governance and becomes very nuanced, so I’ll leave it for now.

Aadhaar is vulnerable to fake identities and stolen / sold identikits. It brings into question the idea of ‘value’ of identity and that’s not a rabbit hole I want to go down just now. I’m glad brightid is exploring this and even though I’ve been overall negative about proof of unique human I’m definitely interested in it as a technical and social problem (even though I don’t really see the value behind it). So please don’t take criticism as disapproval, and remember it stems from my own very strong prior biases. I think identity is a critical aspect of technology and has been poorly implemented thus far, but unique identity per person… I’m less convinced about the value of that form of identity.

There are concerns about aadhaar keeping so much information about so many people all in one place. I think it’s a valid concern, but I don’t really understand the details of the risk (for aadhaar or brightid) so it’s not a point worth expanding here. Brightid seems in a good position to manage these risks compared to most identity companies.

Hopefully the brightid beta brings some really interesting results! Thanks for working on such an interesting project.


#17

Thanks a lot for your insights into Aadhaar and identity in general.

The technique for verifying identity in BrightID is to use sybil detection algorithms to assign “scores” to users. How robust this can be is yet to be seen; it’s vulnerable to social engineering attacks, but we will have a rewards program to counteract this.

As far as (sybil) attacks where one or more people work together to create duplicate accounts, I feel we have a very good solution for this. The current system we’re using is described here and here. We have a standalone platform where anyone can download the current graph (or create their own) and simulate an attack and see how well it performs. We intend to offer bounties to anyone who can come up with a simulated attack that leads to an improvement in our anti-sybil algorithm. We also allow node operators to choose their own anti-sybil algorithm(s) to compute scores; i.e. BrightID allows for competing anti-sybil systems.
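For readers unfamiliar with graph-based sybil defence, here is a rough, self-contained sketch of seed-based trust propagation (in the spirit of SybilRank - not BrightID’s actual algorithm): trust starts at a few trusted seed nodes and diffuses along connections, so a sybil region attached by only a few “attack edges” accumulates little of it.

```python
def trust_scores(graph, seeds, rounds=10):
    # graph: dict mapping node -> list of neighbours (undirected).
    # seeds: nodes assumed honest; trust starts there and diffuses.
    score = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in graph}
    for _ in range(rounds):
        nxt = {n: 0.0 for n in graph}
        for node, nbrs in graph.items():
            share = score[node] / len(nbrs)  # split trust among neighbours
            for m in nbrs:
                nxt[m] += share
        score = nxt
    return score

# Honest region A-B-C is densely linked; sybils X, Y hang off a single
# attack edge C-X.
g = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "X"],
    "X": ["C", "Y"],
    "Y": ["X"],
}
s = trust_scores(g, seeds={"A"})
print(s["B"] > s["Y"])  # the honest node outranks the sybil
```

The simulated attacks the bounty programme invites are essentially attempts to construct graphs where trust leaks across the attack edges faster than a scoring rule like this can contain it.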

As far as key management, the method for replacing a lost/stolen key pair in BrightID is to reconnect with a few of your former connections (from before the point the key pair was lost) and have them affirm that you are the same person. The public key is then swapped in the database.
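That recovery flow can be sketched as follows (the threshold and record layout are assumptions for illustration, not BrightID’s actual values):

```python
REQUIRED_AFFIRMATIONS = 3  # assumed threshold, for illustration only

def replace_key(record, new_public_key, affirmations):
    # record: {"public_key": ..., "connections": set of peer ids}.
    # affirmations: peers vouching that the new key is the same person.
    vouchers = affirmations & record["connections"]  # only former connections count
    if len(vouchers) >= REQUIRED_AFFIRMATIONS:
        record["public_key"] = new_public_key  # old (lost/stolen) key is invalidated
        return True
    return False

record = {"public_key": "old-pk", "connections": {"p1", "p2", "p3", "p4"}}
print(replace_key(record, "new-pk", {"p1", "p2"}))        # False: too few vouchers
print(replace_key(record, "new-pk", {"p1", "p2", "p3"}))  # True: key swapped
```

The important property is that recovery relies on pre-existing social connections rather than any secret the attacker could have stolen along with the key.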

Any identification system based on biometrics is going to have problems with stolen biometrics. We wanted to provide something superior to biometrics-based identification.

With BrightID, losing the ability to be verified as a unique person, or having your identity stolen, isn’t a risk. If someone takes your ID by force or bribery, you can easily replace it by making a few connections, and the compromised ID is invalidated.

What’s at risk–if public keys are mismanaged–is that a user can be outed. Because of this, I don’t feel we can make guarantees about anonymity. All we can say is that the core BrightID system doesn’t use or store any personal data. The graph that is stored on nodes consists of public keys only. If someone is outed, that is the fault of the people or apps that person decided to trust. We’ve thought about ways to mitigate this, but haven’t found anything really satisfying yet. If you want to contribute to this discussion, please come join either our telegram or matrix group.