Reputation Systems

OK, so my understanding is that you are saying that money itself solves this issue. So why bother with it?

If you think that a reputation system is a holy grail, i.e. an unattainable goal, then what do you gain from having strange monetary units which people claim are reputation, but which everyone knows are just money? Why not just have money?

2 Likes
  • Something plays the role of money because it’s (nearly) universally accepted as substitute for labor.
  • Most people can’t survive by working for free
  • Therefore, money (financial commitment) is generally speaking a fairly good guarantee of honest behavior

I said a reputation system based on non-financial incentives isn’t going to work simply because nothing is as universally appreciated as labor (which can be expressed in units of accounting, like $10/hr).

If my misplaced trust in someone can cause me a total loss of $100, I don’t care if I’m dealing with the worst thug as long as he escrows a “good behavior bond” of $100.1 before I start conducting my business with him. So you’re right - I’m not at all interested in one’s reputation (or identity).

I am not saying that I would refuse to work with some famous coder if he asks me to pay in advance. I am saying that because of the existing solutions that work on the basis of financial incentive for honest behavior (escrow, arbiters, etc.) it is not necessary for me to know who is a famous coder and who is a scammer.

This is a very narrow view. Valid certainly, but trade is not the only thing that humans use trust and reputation for, and not the thing that motivates me to think about this.

I take a lot of things on trust. Maybe one day maths and logic will secure everything, but I don’t think equating money to trust covers that adequately. In fact, I don’t trust it! Subjective of course. :slight_smile:

2 Likes

It seems like we just need to understand that reputation can’t be boiled down to a number or score. One of my favorite aspects about going to art school was getting a final review and a discussion instead of a letter or number. :slight_smile:
Aggregating various reputation numbers or grades is a step in the right direction… like seeing all different kinds of “scores” from various sources focusing on different aspects of interactivity. This can help in basic trust situations, but the more complex an exchange becomes, the more important it is to understand the details of reputations based on previous interactions with others.
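To make the aggregation idea concrete, here is a minimal sketch in Python. Everything in it (the `aggregate` function, the source and aspect names, the weights) is a hypothetical illustration, not any existing system’s API:

```python
# Hypothetical sketch: combine per-aspect reputation scores from several
# sources into per-aspect weighted averages, deliberately NOT collapsing
# everything into a single global number.
from collections import defaultdict

def aggregate(ratings):
    """ratings: iterable of (source, aspect, score, weight) tuples.
    Returns {aspect: weighted average score}, so each facet of
    interactivity keeps its own score."""
    totals, weights = defaultdict(float), defaultdict(float)
    for _source, aspect, score, weight in ratings:
        totals[aspect] += score * weight
        weights[aspect] += weight
    return {a: totals[a] / weights[a] for a in totals}

ratings = [
    ("marketplace", "delivery",  4.5, 2.0),
    ("forum",       "expertise", 3.0, 1.0),
    ("marketplace", "delivery",  5.0, 1.0),
]
print(aggregate(ratings))  # delivery averages to about 4.67, expertise to 3.0
```

Keeping the aspects separate is the point: a reader can still see that the “delivery” score comes from a different context than the “expertise” one.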

3 Likes

Reputation farms will be a brisk business, it seems.

Isn’t that what is done when one is held accountable for one’s actions? Someone does something bad and they are made to pay for it, either literally or figuratively. The very word sin means debt, so indeed we do treat reputation as something someone can monetize. And doesn’t someone’s reputation get tarnished somewhat if they defend someone who is viewed as nefarious or in the wrong, even if the defender believes they are worth defending? Is that not a transfer of reputation?

Although I agree the concept of buying trust kind of undermines the whole concept of trust, that is in essence what we do every time we allow someone to give us financial compensation for their misbehaviour. If someone can buy back their reputation for something they have done, then why can’t they simply buy reputation for something they want to do or will do? We buy reputation all the time in the form of certificates and diplomas, do we not?

This is essentially why I’m against the concept of a reputation system in the first place because it’s just another form of democracy and democracy is inherently flawed. It’s just another score system and currency. You might as well say that if I pay x amount you’ll trust me y amount. Real trust does not rely on scores or socially organized voting systems but happens organically between individuals over time.

Let me pose a scenario to you. If Microsoft bought a million trust points, would you trust them? Probably not. What that would say, rather, is that they had invested in compensating the community for the damage they would inflict. And as people kept voting them down and saying they didn’t trust them, they’d have to invest more and more of their money. So unless a company is ACTUALLY trusted by the community, buying trust is really an act of proactive compensation.

This is why I think we should have a set of stats to go with the reputation system: how many votes the user or widget gets either way, how much the user might have invested in buying trust points, and how this changes over time. If someone has a gradual increase in good reputation over time, there’s a good chance they’re earning it the old-fashioned way. If they have a bad reputation and then a sudden spike in good reputation, there’s a good chance they just bought trust points from someone.
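The “sudden spike” heuristic above can be sketched in a few lines. The function name and the threshold value are purely illustrative assumptions:

```python
def flag_spikes(history, threshold=2.0):
    """history: a user's reputation scores in chronological order.
    Returns the indices where the score jumped by more than `threshold`
    since the previous reading -- a crude hint that trust points may
    have been bought rather than earned gradually."""
    return [i for i in range(1, len(history))
            if history[i] - history[i - 1] > threshold]

print(flag_spikes([1, 2, 3, 4, 5]))  # [] -- gradual, earned the old-fashioned way
print(flag_spikes([1, 1, 2, 9, 9]))  # [3] -- sudden jump worth inspecting
```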

1 Like

Well, in theory no: all we buy with money is the right to compete for those certificates and diplomas. But in practice, largely YES, which is why, to my mind, some kind of alternative structure is needed, or at least desirable, for ascertaining reputation.

But if this reputation system is actually going to be an advance over what we currently have, it needs to be non-transferable, to some extent non-monetizable, and in some way resistant to the inequities of the current monetized systems. These are indeed difficult problems, but what we are trying to achieve is a way of allowing lines of trust to be established in a way that’s both digital and organic.

Digital in the sense that it can move at digital speed and use the digital force multipliers: the infinite-room effect, and predictive algorithms which can be used to initiate relationships with high degrees of compatibility for the purpose at hand.
But also organic, something that our brains can accept, do well at and that can confer the kinds of benefits that we associate with organic network building.

That’s the goal, and I think as stated it’s clearly desirable. Whether it’s technically feasible is another issue, and considerably more in doubt.

But if it is possible, it will almost certainly require an absolute privacy shield as the default so that it can’t be gamed through data mining. It will also require high-level organic decentralization features and algorithms, to allow people to interact in a way that’s as close as possible to face-to-face organic personal networking.

Until SAFE, I had never seen anything which was even trying to meet those goals, and so I never gave the issue much thought, but now, I’m at least interested in talking through the idea.

5 Likes

Agreed, this is where it starts, we have new rules now

5 Likes

Quite so.

For example, an ideal comprehensive reputation system would use trust between parties, but it would section out different types of trust, based on the strength of the relationship and the topics of the relationship. So for example, people who were friends in school, are friends at work, work together on different projects, or are family would see different sides of each other.

The problem is how to evaluate these things. If you use external relationships you have two problems. First, the verification problem: who inputs the information about who someone’s parents, siblings, lovers, children, mentors, work partners, financial advocates, clients, etc. are? Obviously you can’t trust a government or a big corporation to do the verification and have access to all these things. Second, the consistency problem: how do you deal with non-traditional or non-stereotypical relationships?

These two problems are almost unsolvable in a scenario where your personal data is poorly secured and unevenly “owned” (under the legal fiction that large corporations can provide you with a service and thereby become entitled to your personal data).

But if we are on the SAFE network and the default is a pretty absolute privacy shield, and a system (eventually a pretty sophisticated system) allowing that private data to be shared in a controlled way by the user, who is the ONLY owner, then the reputation system can evaluate the strength and type of the relationship.

So instead of asking the government or a big corp who X’s parents, siblings, lovers, children, friends, etc. are, we ask: what type of information has X shared with Y? If X has shared social information, then we say that Y’s opinion of X is a good indication of X’s social standing. (Again, we don’t ask if X and Y are “friends”, whatever that means.) If X has shared financial information with Y, we say that there is some kind of commonality of business interest, and therefore Y’s opinion of X is relevant to X’s business judgement or acumen. Again, this would depend on the type and the detail of the information.

And here is where the SAFE network’s architecture plays such an important role. You (A) never ask X for X’s reputation; A asks Y for X’s reputation. A may ask X who has access to his social contacts, his financial information, his medical data, etc., but A asks Y whether Y thinks X’s data is good or “real”. Y, who has the right to access the information, can run an analysis and tell A: “Yes, the data that X has given me access to is both real and complete enough that you should have this percentage of confidence in my evaluation of X.” This data-manager function allows A to evaluate X without ever needing to access X’s data.

And if X hasn’t trusted enough people or uploaded enough data to the network to make an evaluation possible, then the result comes back as a nullity. But that’s OK, because what we want is to avoid unjustified trust decisions. That is, when we decide to trust someone because of their reputation, we want to have a high degree of confidence in that decision.
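A rough sketch of that data-manager function, in Python. The class name, the fields, and the completeness heuristic are all hypothetical illustrations of the idea, not SAFE’s actual API:

```python
# Y evaluates X on A's behalf; A never sees X's underlying data.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataManager:                       # plays the role of Y
    name: str
    shared_with_me: dict = field(default_factory=dict)  # subject -> data types shared

    def evaluate(self, subject: str) -> Optional[dict]:
        data = self.shared_with_me.get(subject)
        if not data:
            return None                  # nullity: no basis for an evaluation
        completeness = min(1.0, len(data) / 5)  # crude proxy for detail/coverage
        return {"opinion": "positive", "confidence": completeness}

y = DataManager("Y", shared_with_me={"X": ["social", "financial", "medical"]})
print(y.evaluate("X"))  # a confidence-weighted opinion, without exposing X's data
print(y.evaluate("Z"))  # None -- Z hasn't shared enough for any evaluation
```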

So this is a preliminary idea. What do you all think?

2 Likes

Yes, but face-to-face, or interpersonal organic, trust isn’t always something you can quantify. It isn’t measured in units or on a scale. You might trust someone more or trust someone less, but how much more or how much less? Moreover, it isn’t always the same kind of trust. You might trust someone with your money but not tell them about your love life. You might trust someone to keep secrets but not to keep house. Someone might be great at taking care of kids but horrid at keeping business appointments, or vice versa. You might feel safe telling someone about one aspect of your life but not another. Take the classic example of a kid trusting his mother to take care of him but not trusting her with his porn stash, or not telling her that he and his best friend Billy were the ones that painted the chihuahua belonging to the lady down the street a brilliant shade of glow-in-the-dark neon green. You might tell your wife every secret about your childhood but not that you’re working for the mafia (or MI6, if you prefer the James Bond angle) and had to kill someone last week. So yeah, trust is relative, subjective, and not exactly quantifiable.

I think if we were to develop a reputation system, each user would have to develop their own personal units and subjective trust categories. So you’d describe how much trust 1 unit represented, perhaps be given various people you’ve interacted with and different hypothetical scenarios, and work from there. I don’t think a standardized unit of trust would work very well because people just don’t work like that. We might have something like trust micropayments where, once one’s relative trust unit was established, it could be equated to a measure of standardized reputation or safecoin. So say you divided your trust bar into 10 and you entrusted someone with 1 bar unit. Then you’d take that measurement, go to a standard amount of trust, and say “OK, this guy entrusts worth 0.1”. Now if person B divided his trust into 100 and allotted say 18 units to someone for a particular act, you’d do the same thing: you’d go to the standardized unit and say person B trusts this person worth 0.18. People could have as many categories and as many divisions as they wanted. The more divisions a person has, the less they trust people, or the more precise they want to be in their decisions. The fewer divisions, the more liberal they are with their trust, or the less control they care about having over measuring it. But I still think defining a unit would require a lot of questions being asked and a lot of scenarios being posed.
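The normalization step described here is simple to sketch; the function is a hypothetical illustration and the example numbers just restate the two cases above:

```python
def normalize(units_given, total_divisions):
    """Convert a personal trust allotment into a standardized 0..1 unit,
    so allotments from people with differently divided trust bars
    become comparable."""
    return units_given / total_divisions

print(normalize(1, 10))    # 0.1  -- one unit of a 10-division trust bar
print(normalize(18, 100))  # 0.18 -- person B's 18 units of a 100-division bar
```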

2 Likes

So i’ve been reading silently… And I think you guys should read about Bitmark.

EDIT Here is the correct github link: Home · project-bitmark/marking Wiki · GitHub

The importance of decentralization in systems like these cannot be overstated.

With very little code one can hijack a website’s database and extract “likes” to obtain a working model. For example, in Discourse-derived sites like this one it’s sometimes as simple as SELECT "user_actions".* FROM "user_actions" WHERE "user_actions"."action_type" = 1

1 Like

First of all, as a ‘reputation systems fanboy’, I’d like to say that I loved reading this discussion. I agree with a lot of stuff that’s being said in here, and also understand the complications that you guys are running into.

I started working on my theory for a peer-to-peer identity & reputation database back in January. It took me approximately 6 months to come up with a theoretical solution, and when the last piece of the puzzle finally fell in its place, it really felt like an epiphany to me.

Not long after that, I bumped into the Bitcoin dev 0.2 - 0.8 video, which left me wondering who the hell “sirius-m” was, so I googled the nickname, which led me to a developer named Martti Malmi. He had tweeted a verification of his identity from Keybase.io, to which I replied “When are we going to see truly decentralized identification?”. He then showed me a project he was working on, “Identifi”.
After looking at what he was doing with it, I realized that my theory could actually be right, and that he had already been building the prototype of this system for over a year. He had even come up with a few things I hadn’t thought of myself up till then (like “non-violent justice”, for instance).

As you can imagine, I was stoked about this from the first moment. So, from that moment on I’ve contributed a fair amount of effort to help Martti get this thing out into the world. In September I decided to start working on this project full-time, after which Martti asked me to become a co-founder of Identifi (the soon-to-be business) and handle the business development as its new CEO. It didn’t take me long to reply with a solid “yes”.

^ All of that is just to give you notice that I am anything but unbiased on this topic. Please feel free to break the theory that’s coming up; it would actually be much appreciated. This is already the second post in which I’m writing about Identifi, so I’ll try to summarize it and keep it brief (I’m feeling dirty for spamming this already), and then answer any questions you may have. I think that would be much more constructive than me simply typing out all of the details.

About

Identifi is basically an address book with anyone in the world in it, which makes it possible for you to basically rate anyone/anything in the world. By organizing identities and reputation, it becomes much easier to find out who you can trust.

The more technically advanced explanation would be: it’s a P2P identification & reputation network, a protocol, and therefore an infrastructure by itself (like Linux or Bitcoin). This way, anyone can build new products/services on top of it, run their own node, or fork it to build something completely new (try “open source decentralized credit rating agency” for example).

bitcoind fork

Identifi was originally forked from the Bitcoin daemon, so it contains Bitcoin’s crypto, command-line interface, etc., even though there is no blockchain and no need for it in this case. On Identifi we have Proof of Human as our algo.

Solving the Sybil Attack

You only add people you trust yourself, so only people that you actually trust influence your personalized network; you can even choose to trust the people they trust, or the people those people trust. It’s a white-listing principle, and this is how it solves the Sybil attack within the human Web of Trust.

“You can just discard any identities that are poorly connected to your real social networks and don’t have the long history of interaction that real identities usually have. When you visualize the trust graph, Sybil nodes show up as separate swarms that have few links to real nodes. If there’s a real identity that links to lots of sybil nodes, you can just down-vote it.” – Martti
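That white-listing idea can be sketched as a bounded walk over the trust graph; the data layout and hop limit here are illustrative assumptions, not Identifi’s implementation:

```python
# Start from identities you trust directly and expand a limited number of
# hops; a sybil swarm with no links from real nodes is simply never reached.
def reachable(trust_edges, start, max_hops=2):
    """trust_edges: dict mapping an identity to the set of identities it
    trusts. Returns every identity within max_hops of `start`."""
    frontier, seen = {start}, {start}
    for _ in range(max_hops):
        frontier = {t for node in frontier
                    for t in trust_edges.get(node, set())} - seen
        seen |= frontier
    return seen

graph = {
    "me":     {"alice", "bob"},
    "alice":  {"carol"},
    "sybil1": {"sybil2"},   # a separate swarm, unlinked from real nodes
    "sybil2": {"sybil1"},
}
print(sorted(reachable(graph, "me")))  # ['alice', 'bob', 'carol', 'me'] -- no sybils
```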

Privacy & Security

Last but not least, it doesn’t look at your sensitive data, since there’s no need for it. Since it doesn’t look at it, it also doesn’t store it.

Summary

Privacy-friendliness, security, decentralization, lesser need for authorities to tell us whom to trust, and more freedom for anyone +1

For additional information, there’s our GitHub or Alpha version.

Feel free to ask me any question, I’d love to explain the details, and/or listen to any of your feedback!

2 Likes

This is something that has disciplinary origins. In gaming systems the reputation system was/is mainly useful for identifying people no one would want to game with. An equivalent is leader boards based on points for competition. These are marginal uses. Where reputation systems are useful is for organizations, businesses and products, and they can be used in transparency efforts.

What follows are some thoughts on why general reputation systems for people aren’t a good idea. It summarizes some aspects of other threads, but it belongs in this thread to help avoid posting yet another thread on the same subject, and because some of us think it’s not just a bad idea but a very dangerous idea.

To think people are going to walk around with a sponsored phony number glowing on their heads and pay higher costs for everything so some shills can ascend: no way. I can pay a bunch of people to crank up my reputation, so even if objectivity were helpful in this regard (it’s actually incredibly hurtful), it’s not even objective. This is the worst of can’t-opt-out Facebook and an American credit agency rolled into one. When applied to human beings generally it’s the most vile bit of nonsense imaginable. It would be involuntary (a lock-out principle) and really opposite in spirit to anonymity, as it wants to twist people with sunk cost. You might as well tattoo yellow stars on people’s foreheads, along with the measurements and pitch angles of their noses. This would inevitably be another enclosure scam; despite claiming it’s not anti-privacy, it is exactly that. Nothing could be more invasive yet more superficial. It comes from the same nasty labeling impulse as the social voodoo of IQ and books like “The Life Not Worth Living,” advocating for euthanizing disabled people.

It’s like trying to weld coffee-cup handles to people’s skulls to make it easier for a scamming elite to attach leashes. They’re trying to measure and spy on every other aspect of your life; how great for them when they can claim to sum up your worth with one stupid number you can’t get rid of, and claim that number is based on the power of consensus. It’s a people-barcoding wet dream; it’s basically a way to take inventory of people to subject them to exploitation through enclosure. It’s also a kind of condoning of bigotry and racism. Black people will automatically have a harder time if it becomes obvious that they are black, and such bigotry will be self-reinforcing. You will get better deals based on how much of a shallow suck-up you are. Typical retorts in support resemble: if not general reputation systems, then how would I know if my children are talking to psychopaths?

Now the censors will come on here and delete this thread because it doesn’t have the right tone and it violates the OP’s ability to propagandize and to censor. To me this stuff is coming from the same shallow impulse as Nazi propaganda, and it’s also meant to fuel spam markets like “pay us to increase your rep score”. It’s a system meant to disempower people and profit from the fear it creates.

@Warren, was your comment a reply to mine?

@Tim Not aimed at you or your post. Same con sentiment from me since the rep threads started appearing, or since I became aware of them. I find the notion deeply disturbing. I’ve seen it come up in fiction in a trade context, but even in the fiction examples it was used among entities or organizations, vice people. Too much room for it to become gossip, distract from discourse, and make things about people and their attributes vice common ground and positive outcomes. I do see potential for reputation/marking systems, like “bitmark” above; I just think we need to be very careful. States are thinking they can use these systems; it’s as if they would have people wearing all their parking tickets, etc.

2 Likes

Thanks for clearing that up @Warren.

Regarding your previous comment, the lengthy one: that kind of “Down and Out in the Magic Kingdom” scenario is exactly the kind of thing we would also like to prevent from ever happening. This is why we’re collaborating with Arjen Kamphuis (Dutch hacktivist who helped get voting computers banned here in the Netherlands), Bits of Freedom (the Dutch version of the EFF, a civil-liberties org), and Jaap-Henk Hoepman (assistant professor working on privacy). They’re helping us publish a full analysis of all the privacy aspects that come along with such a system, and an in-depth description of the model we’ve chosen. Hopefully that will clear things up. Still, any thoughts or concerns are always welcome, since we’re probably all on the same page here (edit: …and also if we’re not, of course).

Reputation systems are here to stay, whether we like it or not. This is why I prefer to see a well thought out decentralized (pref. distributed) one that’s actually privacy-friendly, secure, and one that lets the user decide to either create and manage a public/private profile, or even both (since you can actually have 2 valid IDs this way).

The other option would probably be that centralized parties such as banks won’t only print and manage our money, but also our identities with inherently our reputations (which already happens to a certain extent, but could be worse).

1 Like

@Tim That sounds right to me. Under centralization people tend to have several email accounts which act as quasi-identities to an extent, but sunk cost and less-than-perfect transferability have them favor accounts which become default identities. It looks like ProjectSAFE will support at least two accounts, or an account and an anonymous mode.

As it seems you were alluding to, firms under centralization have been mining privacy and will seek to mine identity. People’s lives would become even more about managing appearances and the outside, and even less about discovery. I can see industry in the US in particular trying to drive identity the way it tries to drive credit, where people are encouraged to take on debt to have better access to debt, as if cultivating an addiction. I think we need to build tech that locks them out of this potential.

I want these predatory firms out of business; their model, their mentality, their world view is corrupt and unsustainable and needs to come to an end. But more than that, these are powerful people who have abused power and need to cease to have that power and influence. We need systems that achieve that result. They’ve been about one message, always the same message over and over: money is power. That signal needs to cease.

2 Likes

Exactly. Locking them out of that potential, like Bitcoin locks bankers out of printing more coins; self-regulation through distributed open-source infrastructures could actually be the only good alternative to these kinds of centralized organizations (whether government and/or corporate).

Money has become power because money is a goal in itself; it hasn’t been just the means to make trade possible for a long time.

I’ve always said that Bitcoin is a big step in the right direction, but not the perfect currency, exactly for that reason. Once we have the right reputation system though, we could go for a maybe even better model. One where the currency is the tool to facilitate trade, instead of the goal by itself.

2 Likes

That better model is an epiphany. Could the SAFE system be substituted in place of the blockchain? As I was reading that, I suspected that part of what will truly accommodate proof of unique human, or a practical stand-in, is systems that also provide true anonymity. It’s almost as if there might be some kind of balance. At the same time I tend to always think of identity in terms of locks and enclosure. An electronic lock on our front door might couple a biometric to a combination. But above in the thread are the cautions about biometrics. There is also a third possibility, pseudo-anonymity, which may have a place where it provides plausible deniability. In terms of our current world, though, I’d rank true anonymity the most important, even as it can create other problems like untraceable bribery in voting.

The following link talks about chipping and cashless banking. We chip you, make it illegal to remove, go to a cashless system, and we get push-button exile. It also talks about why some think this would be a good idea.

Aaron Russo Death Be Not Proud