Reputation Systems

On the contrary.

If reputations are worth little (example: you earn it by getting “thumbs up” from anonymous sources), then what’s the point of having a reputation?

If it’s difficult to obtain and can be used for economic or societal gain - either by the user or to someone who may want to buy it - then people will be motivated to acquire it. If it’s easy to acquire and difficult to monetize a good reputation, it’s going to be worthless.

While people who disagree are waiting for the holy grail, for me the problem has already been solved by participants in the free market before cryptocurrencies even existed.

If some address owner puts 3 BTC in escrow with a trusted notary or sends it to a “burn” address (no 3rd party is involved), as far as I’m concerned he’s trustworthy up to that amount of BTC unless I can find failed trades or a bunch of incomplete trades involving that address that amount to more than 3 BTC. In both examples that account’s “reputation” can be sold for 3 BTC (assuming it’s not in debt) because that’s exactly how much its reputation is worth.
And it works for DAOs, AI and robots because it’s detached from any particular identity - it’s just an address.
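The escrow/burn idea above can be sketched as a tiny function. This is illustrative only: the function name and the idea that failed trades are tallied from some public record are my assumptions, not part of any existing system.

```python
# Hypothetical sketch of the "good behavior bond" idea: an address is
# trusted up to the amount it has provably locked up (escrowed or burned),
# minus the value of any failed or incomplete trades linked to it.

def trust_limit(bonded_amount: float, failed_trade_amounts: list[float]) -> float:
    """Return how much value this address can be trusted with."""
    return max(0.0, bonded_amount - sum(failed_trade_amounts))

# An address that burned 3 BTC and has 0.5 BTC of failed trades on record:
print(trust_limit(3.0, [0.5]))  # 2.5
```

This also makes the "reputation can be sold for 3 BTC" point concrete: the trust limit is exactly the bonded amount when the record is clean.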

@janitor your arguments don’t make sense to me, partly because you are coming at this from a different perspective.

I’m looking at natural systems and trying to understand how they work - human relationships in particular.

The Evolution of Trust

If we imagine the small communities we evolved in, how identity was fixed for life, and how loss of trust from your fellows was a threat to survival (for individuals and the group), then I think we see the key elements: a fixed, untradable identity, very closely tied to a distributed reputation. Because reputation is decentralised and everyone has a stake in it being accurate, there are strong forces against it being traded. You have to bribe everyone, and then everyone knows you bought your reputation and so it can’t be trusted as a guide to actual behavior. It just doesn’t work when everyone knows everyone, and everyone has an ongoing relationship with everyone.

In such a small community there are very strong incentives to keep those relationships healthy and functioning: survival of individual, the family group and the tribe all depend on relationships working. There is little scope for deceit, and when exposed it is likely to carry a high penalty because the cost is high to everyone hurt by the deceit: a risk to survival.

So thinking about that gives me some basic ideas about what reputation and identity are - or how they came to be - and how they work.

Size Matters

Then things scale, and soon we can’t know everyone and we don’t have ongoing relationships with many of those who we interact with. We are therefore more easily deceived, and the community being fragmented is less well positioned to handle deceit and misbehavior - fast forward to today and the world is full of opportunities to deceive, with many good chances of getting away with it unseen - and little chance that the individual, their family group, or the tribe will see the deceit as threatening their survival and penalize it heavily (even though one could argue today that this deceit is threatening the survival of many, and the structures that support life).

I think the reason we are in the current mess just might be that scale has made deceit a much more frequent occurrence, and reputation has been made very hard to judge on an individual basis - because most people we deal with, we don’t know, and many people we will deal with only once, ever. To cope, there have been many moves to centralise trust, but each also adds more opportunities to deceive, and on a grander scale.

Money itself makes deceit easier, and I argue that the ability to trade reputation is one aspect of that.

So my approach to reputation systems is to try and patch these “holes” that have come about through scale. One is the ability to shift identity, another the ability to trade reputation, etc. From here I think we can all have a go at identifying the holes and thinking of ways to patch them.

That’s my approach anyway, so I don’t find your arguments interesting or convincing.

This is the origin of our different views I think. You appear to have a wish for trading, and are arguing that it is OK, when to me it seems evident that this is one of the things that has made it possible for people to be screwed on an enormous scale. Whereas it also seems evident to me, that deceit happens far, far less where individuals have regular ongoing working relationships.

That I think points to the source of the problem: scale and the opportunities for deceit it creates because of the way it undermines our evolved mechanisms for relating, trusting, deceiving, identity and reputation.

1 Like

[quote=“janitor, post:21, topic:152”]
If reputations are worth little (example: you earn it by getting “thumbs up” from anonymous sources), then what’s the point of having a reputation? [/quote]

@janitor That is the problem that we are trying to solve. But the solution is not to allow monetary units which are falsely marked as “reputation” to be traded back and forth in a liquid market.

[quote=“janitor, post:21, topic:152”]
If it’s difficult to obtain and can be used for economic or societal gain - either by the user or to someone who may want to buy it - then people will be motivated to acquire it. [/quote]

As money, which we already have.

[quote=“janitor, post:21, topic:152”]
If it’s easy to acquire and difficult to monetize a good reputation, it’s going to be worthless.
[/quote]

As money, but that’s OK, because we already have money.

You seem to be subscribing to the belief that money results from hard work, and that we can therefore assume from the presence of money that you are hard-working, reliable, etc. But as we all know, there are other ways of acquiring money besides being hard-working and honest - namely by being deceitful, cheating people, and cutting corners. Therefore money (or anything which is freely transferable as a monetary unit) cannot be used to evaluate the qualities that I am interested in when I talk about reputation.

2 Likes

Mk, so my understanding is that you are saying that money itself solves this issue. So why bother with it?

If you think that a reputation system is a holy grail, i.e. an unattainable goal, then what do you gain from having strange monetary units which people claim are reputation, but which everyone knows are just money? Why not just have money?

2 Likes
  • Something plays the role of money because it’s (nearly) universally accepted as a substitute for labor.
  • Most people can’t survive by working for free
  • Therefore, money (financial commitment) is generally speaking a fairly good guarantee of honest behavior

I said a reputation system based on non-financial incentives isn’t going to work simply because nothing is as universally appreciated as labor (which can be expressed in units of accounting, like $10/hr).

If my misplaced trust in someone can cause me a total loss of $100, I don’t care if I’m dealing with the worst thug as long as he escrows a “good behavior bond” of $100.1 before I start conducting my business with him. So you’re right - I’m not at all interested in one’s reputation (or identity).

I am not saying that I would refuse to work with some famous coder if he asks me to pay in advance. I am saying that because of the existing solutions that work on the basis of financial incentive for honest behavior (escrow, arbiters, etc.) it is not necessary for me to know who is a famous coder and who is a scammer.

This is a very narrow view. Valid certainly, but trade is not the only thing that humans use trust and reputation for, and not the thing that motivates me to think about this.

I take a lot of things on trust. Maybe one day maths and logic will secure everything, but I don’t think equating money to trust covers that adequately. In fact, I don’t trust it! Subjective of course. :slight_smile:

2 Likes

It seems like we just need to understand that reputation can’t be boiled down to a number or score. One of my favorite aspects about going to art school was getting a final review and a discussion instead of a letter or number. :slight_smile:
Aggregations of various reputation numbers or grades are a step in the right direction… like seeing all different kinds of “scores” from various sources focusing on different aspects of interactivity. This can help basic trust situations, but the more complex an exchange becomes, the more important it is to understand the details of reputations based on previous interactions with others.
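The aggregation idea could look something like this rough sketch; the source and aspect names are made up for illustration, and the point is only that scores stay grouped by aspect rather than collapsing to one number.

```python
# Aggregate reputation "scores" from several sources, keeping each aspect
# separate instead of boiling everything down to a single figure.
from collections import defaultdict

def aggregate(ratings):
    """ratings: list of (source, aspect, score) tuples.
    Returns per-aspect average scores."""
    buckets = defaultdict(list)
    for source, aspect, score in ratings:
        buckets[aspect].append(score)
    return {aspect: sum(s) / len(s) for aspect, s in buckets.items()}

ratings = [
    ("forum", "helpfulness", 4.0),
    ("market", "delivery", 5.0),
    ("forum", "helpfulness", 2.0),
]
print(aggregate(ratings))  # {'helpfulness': 3.0, 'delivery': 5.0}
```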

3 Likes

Reputation farms will be a brisk business, it seems.

Isn’t that what is done when one is held accountable for one’s actions? Someone does something bad and they are made to pay for it, either literally or figuratively. The very word sin means debt, so indeed we do equate reputation with something someone can monetize. And doesn’t someone’s reputation get tarnished somewhat if they defend someone who is viewed as being nefarious or in the wrong, even if the person defending them believes they are worth defending? Is that not a transfer of reputation?

Although I agree the concept of buying trust kind of undermines the whole concept of trust, that is in essence what we do every time we allow someone to give us financial compensation for their misbehaviour. If someone can buy back their reputation for something they have done, then why can’t they simply buy reputation for something they want to do or will do? We buy reputation all the time in the form of certificates and diplomas, do we not?

This is essentially why I’m against the concept of a reputation system in the first place because it’s just another form of democracy and democracy is inherently flawed. It’s just another score system and currency. You might as well say that if I pay x amount you’ll trust me y amount. Real trust does not rely on scores or socially organized voting systems but happens organically between individuals over time.

Let me pose to you a scenario. If Microsoft bought a million trust points would you trust them? Probably not. What that would say rather is that they had invested in compensating the community for the damage they would inflict. And as people kept voting them down and saying they didn’t trust them they’d have to invest more and more of their money. So unless a company is ACTUALLY trusted by the community then buying trust is actually an act of proactive compensation.

This is why I think we should have a set of stats to go with the reputation system: how many votes the user or widget gets either way, how much the user might have invested in buying trust points, and how this changes over time. If someone has a gradual increase in good reputation over time, there’s a good chance they’re earning it the old-fashioned way. If however they have a bad reputation and then a sudden spike in good reputation, there’s a good chance they just bought trust points from someone.
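The spike heuristic above can be sketched in a few lines. The threshold and the convention that bad reputation is a negative score are arbitrary assumptions for illustration, not a proposed standard.

```python
# Flag accounts whose reputation jumps sharply right after a bad stretch,
# which could indicate bought trust points rather than earned reputation.

def looks_bought(history, jump_threshold=5.0):
    """history: chronological reputation scores (negative = bad rep).
    True if a sudden jump bigger than jump_threshold follows a negative score."""
    for prev, curr in zip(history, history[1:]):
        if prev < 0 and curr - prev > jump_threshold:
            return True
    return False

print(looks_bought([1, 2, 3, 4]))   # False: gradual, likely earned
print(looks_bought([-3, -4, 6]))    # True: spike right after bad rep
```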

1 Like

Well, in theory no, all we buy with money is the right to compete for those certificates and diplomas, but in practice, largely YES, which is why, to my mind, some kind of alternate structure is needed, or at least desirable, for ascertaining reputation.

But if this reputation system is actually going to be an advance over what we currently have, it needs to be non-transferable, to some extent non-monetizable, and in some way resistant to the inequities of the current monetized systems. These are indeed difficult problems, but what we are trying to achieve is a way of allowing lines of trust to be established in a way that’s both digital and organic.

Digital in the sense that it can move at digital speed and use the digital force multipliers, the infinite room effect, predictive algorithms which can be used to initiate relationships with high degrees of compatibility for the purpose at hand.
But also organic, something that our brains can accept, do well at and that can confer the kinds of benefits that we associate with organic network building.

That’s the goal, and I think as stated it’s clearly desirable. Whether it’s technically feasible is another issue, and considerably more in doubt.

But if it is possible, it will almost certainly require an absolute privacy shield as the default, so that it can’t be gamed through data mining. It will also require high-level organic decentralization features and algorithms, to allow people to interact in a way that’s as close to face-to-face organic personal networking as possible.

Until SAFE, I had never seen anything which was even trying to meet those goals, and so I never gave the issue much thought, but now, I’m at least interested in talking through the idea.

5 Likes

Agreed, this is where it starts, we have new rules now

5 Likes

Quite so.

For example, an ideal comprehensive reputation system would use trust between parties, but it would section out different types of trust, based on the strength of the relationship and the topics of the relationship. So for example, people who were friends in school, are friends at work, work together on different projects, or are family would see different sides of each other.

The problem is how to evaluate these things. If you use external relationships you have two problems. First, the verification problem: who inputs the information about who someone’s parents, siblings, lovers, children, mentors, work partners, financial advocates, clients, etc. are? Obviously you can’t trust a gov’t or a big corp to do the verification and have access to all these things. Second, the consistency problem: how do you deal with non-traditional or non-stereotypical relationships?

These two problems are almost unsolvable in a scenario where your personal data is poorly secured and unevenly “owned” (under the legal fiction that large corporations can provide you with a service and thereby become entitled to your personal data).

But if we are on the SAFE network, where the default is a pretty absolute privacy shield, and there is a system (eventually a pretty sophisticated one) allowing that private data to be shared in a controlled way by the user, who is the ONLY owner, then the reputation system can evaluate the strength and type of the relationship.

So instead of asking the government or a big corp who X’s parents, siblings, lovers, children, friends, etc. are, we ask: what type of information has X shared with Y? If X has shared social information, then we say that Y’s opinion of X is a good indication of X’s social status. (Again, we don’t ask if X and Y are “friends”, whatever that means.) If X has shared financial information with Y, we say that there is some kind of commonality of business interest, and therefore Y’s opinion of X is relevant to X’s business judgement or acumen. Again, this would depend on the type and the detail of the information.

And here is where the SAFE network’s architecture plays such an important role. Notice that you (A) never ask X for X’s reputation; A asks Y for X’s reputation. A may ask X who has access to his social contacts, his financial information, his medical data, etc., but A asks Y whether Y thinks X’s data is good or “real”. Y, who has the right to access the information, can run an analysis and tell A: yes, the data that X has given me access to is both real and complete enough that you should have this percentage of confidence in my evaluation of X. This data-manager function allows A to evaluate X without ever needing to access X’s data.

And if X hasn’t trusted enough people or uploaded enough data to the network to make an evaluation, then the result comes back as a nullity. But that’s OK, because what we want is to avoid unjustified trust decisions. That is, when we decide to trust someone because of their reputation, we want to have a high degree of confidence in that decision.
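A toy model of that flow might look like this. All names and thresholds here are made up: the point is only that Y’s data-manager function returns a verdict plus a confidence figure (or a nullity) without A ever seeing X’s data.

```python
# Y's data-manager function: judge whether what X shared with Y is real
# and complete enough to evaluate, returning None (nullity) if too sparse.

def evaluate(shared_data: dict, min_fields: int = 3):
    """shared_data: fields X shared with Y (None = missing/unverifiable).
    Returns a verdict with a confidence percentage, or None as a nullity."""
    if len(shared_data) < min_fields:
        return None  # nullity: not enough shared data to evaluate
    real = [v for v in shared_data.values() if v is not None]
    confidence = round(100 * len(real) / len(shared_data))
    return {"real_and_complete": True, "confidence_pct": confidence}

# A asks Y about X; Y runs the analysis locally over what X shared:
x_shared_with_y = {"social": "...", "financial": "...", "medical": None}
print(evaluate(x_shared_with_y))  # {'real_and_complete': True, 'confidence_pct': 67}
```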

So this is a preliminary idea. What do you all think?

2 Likes

Yes, but face-to-face, or interpersonal organic, trust isn’t always something you can quantify. It isn’t measured in units or on a scale. You might trust someone more or trust someone less, but how much more or how much less? Moreover, it isn’t always the same kind of trust. You might trust someone with your money but not tell them about your love life. You might trust someone to keep secrets but not to keep house. Someone might be great at taking care of kids but horrid at keeping business appointments, or vice versa. You might feel safe telling someone about one aspect of your life but not another. Take the classic example of a kid trusting his mother to take care of him but not trusting her with his porn stash, or telling her that he and his best friend Billy were the ones who painted the chihuahua belonging to the lady down the street a brilliant shade of glow-in-the-dark neon green. You might tell your wife every secret about your childhood but not that you’re working for the mafia (or MI6, if you prefer the James Bond angle) and had to kill someone last week. So yeah, trust is relative, subjective, and not exactly quantifiable.

I think if we were to develop a reputational system, each user would have to develop their own personal units and subjective trust categories. So you’d describe how much trust 1 unit represented, perhaps be given various people you’ve interacted with and different hypothetical scenarios, and work from there. I don’t think a standardized unit of trust would work very well, because people just don’t work like that. We might have something like trust micropayments, where once one’s relative trust unit was established it could be equated to a measure of standardized reputation or safecoin. So say you divided your trust bar into 10 and you entrusted someone with 0.1 of the bar. Then you’d take that measurement, go to a standard amount of trust, and say “OK, this guy entrusts worth 0.1”. Now if person B divided his trust into 100 and allotted say 18 units to someone for a particular act, you’d do the same thing: go to the standardized unit and say person B trusts this person worth 0.18. People could have as many categories and as many divisions as they wanted. More divisions means the less they trust people, or the more precise they want to be in their decisions. Fewer divisions means the more liberal they are with their trust, or the less control they care about having over measuring it. But I still think defining a unit would require a lot of questions being asked and a lot of scenarios being posed.
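The normalization step described above is essentially one division, sketched here with illustrative numbers taken from the examples in the text:

```python
# Convert a user's personal trust units to a standardized 0..1 figure by
# dividing allotted units by that user's own number of divisions.

def standardize(units_allotted: float, divisions: int) -> float:
    return units_allotted / divisions

print(standardize(1, 10))    # 0.1  -> "this guy entrusts worth 0.1"
print(standardize(18, 100))  # 0.18 -> person B's 18-of-100 allotment
```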

2 Likes

So I’ve been reading silently… and I think you guys should read about Bitmark.

EDIT Here is the correct github link: Home · project-bitmark/marking Wiki · GitHub

The importance of decentralization in systems like these is holy.

With very little code one can hijack a website’s database and extract “likes” to obtain a working model. For example, on Discourse-derivative sites like this one it’s sometimes:

    SELECT "user_actions".* FROM "user_actions" WHERE "user_actions"."action_type" = 1

1 Like

First of all, as a ‘reputation systems fanboy’, I’d like to say that I loved reading this discussion. I agree with a lot of stuff that’s being said in here, and also understand the complications that you guys are running into.

I started working on my theory for a peer-to-peer identity & reputation database back in January. It took me approximately 6 months to come up with a theoretical solution, and when the last piece of the puzzle finally fell in its place, it really felt like an epiphany to me.

Not long after that, I bumped into the Bitcoin dev 0.2 - 0.8 video, which left me wondering who the hell “sirius-m” was, so I googled the nickname, which led me to a developer named Martti Malmi. He had tweeted a verification of his identity from Keybase.io, to which I replied “When are we going to see truly decentralized identification?”. He then showed me a project he was working on, “Identifi”.
After looking at what he was doing with it, I realized that my theory could actually be right, and that he had already been building a prototype of this system for over a year. He had even come up with a few things I hadn’t thought of myself up till then (like “non-violent justice”, for instance).

As you can imagine, I was stoked about this from the first moment. So, from that moment on I’ve contributed a fair amount of effort to help Martti get this thing out into the world. In September I decided to start working on this project full-time, after which Martti asked me to become a co-founder of Identifi (the soon-to-be business), to handle the business development as its new CEO. It didn’t take me long to reply with a solid “yes”.

^ All of that is just to make clear that I’m not unbiased on this topic. Please feel free to break the theory that’s coming up; it would actually be much appreciated. This is already the second post in which I’m writing about Identifi, so I’ll try to summarize it and keep it brief (I’m feeling dirty for spamming this already), and then answer any questions you may have. I think that would be much more constructive than me simply typing out all of the details.

About

Identifi is basically an address book with anyone in the world in it, which makes it possible for you to rate anyone/anything in the world. By organizing identities and reputation, it becomes much easier to find out who you can trust.

The more technically advanced explanation would be: it’s a P2P identification & reputation network, a protocol, and therefore an infrastructure by itself (like Linux or Bitcoin). This way, anyone can build new products/services on top of it, run their own node, or fork it to build something completely new (try “open source decentralized credit rating agency” for example).

bitcoind fork

Identifi was originally forked from the Bitcoin daemon, so it contains Bitcoin’s crypto, command-line interface, etc., even though there is no blockchain and no need for it in this case. On Identifi we have Proof of Human as our algo.

Solving the Sybil Attack

You only add people you trust yourself, so only people that you actually trust influence your personalized network - or you can even choose to trust the people they trust, or the people those people trust; it’s a white-listing principle. This is how it solves the Sybil attack in a human web of trust.

“You can just discard any identities that are poorly connected to your real social networks and don’t have the long history of interaction that real identities usually have. When you visualize the trust graph, Sybil nodes show up as separate swarms that have few links to real nodes. If there’s a real identity that links to lots of Sybil nodes, you can just down-vote it.” – Martti
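A minimal sketch of that white-listing principle: trust propagates only through explicitly added links, out to a chosen depth, so disconnected Sybil swarms are simply never reached. This is not Identifi’s actual algorithm, just an illustration of the principle with a made-up graph.

```python
# Breadth-first walk over explicit trust links, up to max_depth hops.
from collections import deque

def trusted_set(graph: dict, me: str, max_depth: int = 2) -> set:
    """graph maps each person to the people they explicitly trust."""
    seen, queue = {me}, deque([(me, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for friend in graph.get(node, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, depth + 1))
    return seen - {me}

graph = {
    "alice": ["bob"],
    "bob": ["carol"],
    "carol": ["dave"],        # 3 hops away: outside alice's horizon
    "sybil1": ["sybil2"],     # disconnected swarm: never reached
}
print(sorted(trusted_set(graph, "alice")))  # ['bob', 'carol']
```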

Privacy & Security

Last but not least, it doesn’t look at your sensitive data, since there’s no need for it. Since it doesn’t look at it, it also doesn’t store it.

Summary

Privacy-friendliness, security, decentralization, less need for authorities to tell us whom to trust, and more freedom for anyone +1

For additional information, there’s our GitHub or Alpha version.

Feel free to ask me any question, I’d love to explain the details, and/or listen to any of your feedback!

2 Likes

This is something that has disciplinary origins. In gaming systems the reputation system was/is mainly useful for flagging people no one would want to game with. An equivalent is leaderboard leaders based on points for competition. These are marginal uses. Where reputation systems are useful is for organizations, businesses and products, and they can be used in transparency efforts.

What follows are some thoughts on why general reputation systems for people aren’t a good idea. It summarizes some aspects of other threads, but it belongs in this thread to help avoid posting yet another thread on the same subject, and because some of us think it’s not just a bad idea but a very dangerous idea.

To think people are going to walk around with a sponsored phony number glowing on their head and pay higher costs for everything so some shills can ascend - no way. I can pay a bunch of people to crank up my reputation - so even if objectivity were helpful in this regard (it’s actually incredibly hurtful), it’s not even objective. This is the worst of can’t-opt-out Facebook and American credit agencies rolled into one. When applied to human beings generally it’s the most vile bit of nonsense imaginable. It would be involuntary (a lock-out principle) and really opposite in spirit to anonymity, as it wants to twist people with sunk cost. You might as well tattoo yellow stars on people’s foreheads along with the measurements of their noses and the pitch angle. This would inevitably be another enclosure scam; despite claiming it’s not anti-privacy, it is exactly that - nothing could be more invasive yet more superficial. It comes from the same nasty labeling impulse as the social voodoo of IQ and books like “The Life Not Worth Living,” advocating for euthanizing disabled people.

It’s like trying to weld coffee cup handles to people’s skulls to make it easier for a scamming elite to attach leashes. They’re trying to measure and spy on every other aspect of your life - how great for them when they can claim to sum up your worth with one stupid number you can’t get rid of, and claim that number is based on the power of consensus. It’s a people bar-coding wet dream; it’s basically a way to take inventory of people to subject them to exploitation through enclosure. It’s also a kind of condoning of bigotry and racism. Black people will automatically have a harder time if it becomes obvious that they are black, and such bigotry will be self-reinforcing. You will get better deals based on how much of a shallow suck-up you are. Typical retorts in support resemble: if not general reputation systems, then how would I know if my children are talking to psychopaths?

Now the censors will come on here and delete this thread because it doesn’t have the right tone and it violates the OP’s ability to propagandize and to censor. To me this stuff is coming from the same shallow impulse as Nazi propaganda, and it’s meant also to fuel spam markets like: pay us to increase your rep score. It’s a system meant to disempower people and profit from the fear it creates.

@Warren, was your comment a reply to mine?

@Tim Not aimed at you or your post. Same con sentiment from me since the rep threads started appearing, or since I became aware of them. I find the notion deeply disturbing. I’ve seen it come up in fiction in a trade context, but even in the fiction examples it was used among entities or organizations, vice people. Too much room for it to be gossip, distract from discourse, and make things about people and their attributes vice common ground and positive outcomes. I do see potential for reputation/marking systems, like “bitmark” above; I just think we need to be very careful. States are thinking they can use these systems - it’s as if they would have people wearing all their parking tickets, etc.

2 Likes

Thanks for clearing that up @Warren.

Regarding your previous comment, the lengthy one: that kind of “Down and Out in the Magic Kingdom” scenario is exactly the kind of thing we would also like to prevent from ever happening. This is why we’re collaborating with Arjen Kamphuis (Dutch hacktivist who helped get voting computers banned here in the Netherlands), Bits of Freedom (the Dutch version of the EFF, a civil liberties org), and Jaap-Henk Hoepman (an assistant professor working on privacy). They’re helping us publish a full analysis of all the privacy aspects that come along with such a system, and an in-depth description of the model we’ve chosen. Hopefully that will clear things up. Still, any thoughts or concerns are always welcome, since we’re probably all on the same page here (edit: …and also if we’re not, of course).

Reputation systems are here to stay, whether we like it or not. This is why I prefer to see a well thought out decentralized (pref. distributed) one that’s actually privacy-friendly, secure, and one that lets the user decide to either create and manage a public/private profile, or even both (since you can actually have 2 valid IDs this way).

The other option would probably be that centralized parties such as banks won’t only print and manage our money, but will also manage our identities and, inherently, our reputations (which already happens to a certain extent, but could be worse).

1 Like