Bad people using SAFE network (a group dynamics perspective)

I think you are getting close to selling me on the filtering idea. Couple of things though…

A filter just puts a wall between me and the bad actors. It doesn’t reduce how many of them are on the network. Are there any consequences if they are actually a large chunk of the network, as long as they just stay in their own area? I feel like that might still give us the wrong image (even if, after trying it out, you quickly realize how easy it is to just avoid them)… as @piluso was saying, it could hurt us to be perceived as a criminal network by the outsiders we want to sell this thing to.

Secondly, I’m not so sure about the assumption that you will be similar to your friends/family in terms of what you want to privately view online. What if you have a weird uncle who likes to browse erotic goat pictures? I do think there needs to be a starting point for sure, rather than just ‘naked’ searches against a basically random assortment of options to see what you like. Maybe something like: “Would you like to answer a few questions about your browsing habits to help us give you more relevant results?” Then if you opt in, it’s a 20-question survey devised by some clever psychologists to capture how answers on those questions correlate with what people want to view online.

2 Likes

I think the most important goal is the creation of robust categorization mechanics.
If a user wants porn, that is not a problem.
The problem is that porn can turn up in inappropriate places, for example inside a discussion about quantum physics.
The same applies to spam.
Users should be able to see what they want and not see what they do not want.

3 Likes

Yeah, very interesting, like asking ‘how big is the network’ vs ‘how big is the network for me’.

A lot of what I describe about filtering can be derived from eigenmorality (it’s long and dense but a very good read). There are lots of ways to explain the ‘solution’ to various filtering scenarios but I think reading that article will hopefully give a pretty good idea of what I’m getting at. Happy to elaborate further if you like. It talks a lot about web search.

It may not be the right approach in the end but it’s a handy thing to keep in the mental toolkit anyhow.

4 Likes

If we can somehow get a ‘bully group’ on the network that actively hunts, tracks, and lists bad content and actors, we could try to separate them from normal forums, and as a bonus they’d try to crack into the network to find out which accounts own which IDs.

(That’s a good thing: if they succeed, MaidSafe can patch it.)

However, I fear that will only lead to more attention, and we’d want as little as possible.
Either way, I agree that we as a community must seed the place and steer it in the right direction, even though we still have to figure out a feasible way to do so.

3 Likes

I sort of tried to go this way with some comments a while back. It didn’t go well; not well thought out, apparently.

I like @mav’s idea.

I wonder if there’s a way for an autonomous network AI to just boot those pricks off our network?

1 Like

Haven’t read everything, apologies if I missed this point.

I see the danger @andyypants highlights but I don’t think it’s as significant as we might think in terms of discouraging participation.

The reason is that SAFE is a brand to us because we’re geeks, here because we see particular characteristics in it. But we over-focus on those.

To most people, SAFE won’t be the thing they encounter, because it is like the internet: a platform. Most people think the web is the internet, or even that Facebook or Google is, because that’s what they use every day when they go online.

People don’t say ‘I’m not touching the web, the internet, Facebook, or Google because they host criminals, terrorists, or child porn’, although all of them do, in various measures.

The dark web is so labelled because it really only offers one thing: a place to do stuff that most people don’t need, are probably uncomfortable with, and that is often illegal. Since that’s all it offers, that’s pretty much what the name means. It isn’t really a thing either, but a collective term for anything that is primarily for that kind of activity. SAFE won’t be only for that, or even mainly for that, though like other ‘platforms’ some ‘dark’ stuff will go on.

The fear is that it will be dominated by such users but I doubt that.

IMO most people won’t come to SAFE Network for what we call the fundamentals, but for the services that are enabled by them. I think they will come for a range of services everyone can understand as valuable regardless of the fundamentals.

Take Syncer as an example, since I’ve just been working on it. Install it and you would have a local drive that is automatically backed up to the web, works as fast as your hard drive but is unlimited in size, and from which you can retrieve every version of every file you’ve ever saved to it. (David has envisioned this for a long time, BTW; Syncer just looks like it might be a way to get a pretty impressive implementation of this kind of thing going quickly.)

There will be many ‘ordinary’ apps of this kind: irresistible features for everyday use that have nothing to do with uncomfortable stuff. So I think most people won’t even think of it as SAFE Network, just another thing they get by going online.

8 Likes

Reality is what reality is.

In a network that is permissionless, perpetual, and effectively censorship-proof, there will be some bad actors and reprehensible content, as the network is a reflection of society.

I hope/believe that over time, as freedom and prosperity grow, people will become more enlightened, and there will be relatively less of such “bad content” because there are relatively fewer “bad people”. But that is a long-term, multi-generational goal, and SAFE Network must deal with the here and now.

But as I said in the other thread about youtube/filtering, we cannot stick our head in the sand. There will be both good and bad content, depending on one’s personal or societal definitions, and we must face that and provide (or at least encourage) tools for people to have an enjoyable experience by filtering out content they find highly objectionable.

Basically, I see this as one part messaging, and the other part technology to empower individuals and parents, schools, etc to have a “safe” view of SAFE Network.

Messaging: The SAFE Network is a tool/infrastructure. “Bad” content is regularly sent over phone lines and internet cables. Crimes are regularly committed in cars/trucks. We do not monitor every phone call or inspect every automobile trip. Sometimes the bad must be taken with the good, because the good is so very useful to society, or has the potential to be. The SAFE Network lets everything in, but empowers you to view/see only the content you wish to. Then expound on all the SAFE Network benefits, etc, etc.

Technology:

  1. Granular rating criteria. Provide a framework whereby rating criteria can be applied to each piece of content (must make sense for the content-type) and a slick interface for people to rate things. Content-type could be as simple as mime-type, to begin with at least. A few examples of possible granular rating criteria: quality, grammar, obscenity, profanity, sexuality, violence, racism, humor, agreement, nsfw, child-friendly, etc, etc.

  2. A mechanism to reward rating new/unfiltered content as an act that helps the network. It’s debatable whether this is needed, as people may choose to do this on their own, or non-profit orgs or governments could sponsor it. Also, if the network provides an incentive/reward, how does it prevent people from rating badly/wrongly and getting rewarded for it? Interesting to think about.

  3. A way to define/extend/edit criteria for particular content-types. This is possibly a “political” area, so some care needs to be taken with the change-control process.

  4. Provide a filtering system whereby people can easily share and customize filters, including filtering out new/unrated content by default if desired.

Both the rating and filtering tech would ideally be baked into the SAFE API and available for every SAFE App to use.
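
To make the idea concrete, here is a minimal sketch of what rating records and per-criterion filter rules might look like. Everything here is hypothetical (the names, the types, the 1…5 scale, averaging as the aggregation): nothing like this exists in the SAFE API today, it’s just one way the pieces could fit together.

```rust
use std::collections::HashMap;

/// One rating of one piece of content against one criterion.
/// Scores are assumed to run 1..=5 (see the profanity example later in the thread).
struct Rating {
    content_id: String, // e.g. the address of the content being rated
    criterion: String,  // e.g. "profanity", "violence", "quality"
    score: u8,          // assumed 1..=5
    rater: String,      // public identity of whoever rated it
}

/// A per-criterion filter rule: show content only if its aggregated
/// score falls inside the allowed range. `None` bounds mean "no limit".
struct FilterRule {
    criterion: String,
    min: Option<u8>,
    max: Option<u8>,
}

/// Average the ratings per criterion and test them against the user's
/// filter rules. Criteria with no ratings pass by default here; a real
/// system would make that behaviour configurable (see point 4 above).
fn passes(ratings: &[Rating], rules: &[FilterRule]) -> bool {
    let mut sums: HashMap<&str, (u32, u32)> = HashMap::new();
    for r in ratings {
        let e = sums.entry(r.criterion.as_str()).or_insert((0, 0));
        e.0 += r.score as u32; // running total of scores
        e.1 += 1;              // count of ratings
    }
    rules.iter().all(|rule| match sums.get(rule.criterion.as_str()) {
        None => true, // unrated for this criterion: let it through
        Some((sum, n)) => {
            let avg = (sum / n) as u8;
            rule.min.map_or(true, |m| avg >= m) && rule.max.map_or(true, |m| avg <= m)
        }
    })
}
```

The key design point is that `passes` runs entirely on the client, against the user’s own rules: the ratings are shared data, but the filtering decision never is.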

7 Likes

I’ve been thinking about this sort of thing quite a bit in relation to developing a search app.

If we’re serious about a decentralised network, then search must be decentralised too. What brought me round to this was actually these very worries.

I don’t want to be responsible as an app developer for linking to harmful and abusive material, but nor do I think that I, or any other individual, organisation, company or government should be responsible for making decisions about what is censored on a network level. I think that tension is what Facebook and Twitter are struggling with at the moment, and which Google has managed to compromise on in a way that has not offended anybody too much, but which, I personally believe, is causing a lot of hidden problems.

At the moment I’ve actually come round to preferring a model where search might be based purely on connections to contacts and trusted websites, as @mav seems to hint at above, rather than searching for everything and then filtering it.

The logical conclusion of this would probably be that the network would be like a giant social network (or perhaps that’s just the true meaning of a network), rather than a resource in the sky where we expect to find all that we need and desire. As someone who likes Wikipedia and hates social networks, that’s something I’m struggling with at the moment, but it’s an interesting thought experiment that I think might be worth pursuing.

3 Likes

Haha,

Once again you make a good case for the exact opposite of what I’m saying, @danda!

The idea of having ratings baked into the network API is really interesting, but it would certainly put a slightly different spin on the way people see the network (not that that’s necessarily a bad thing).

2 Likes

search might be based purely on connections to contacts and trusted websites

Sounds like a web of trust model.

In my experience, these are too computationally and memory/data expensive. Consider that just six degrees connects everyone in the world.

Also, if we look at just first-degree trusted connections (e.g. family, friends), how much content have they actually looked at and “approved” somehow? (And isn’t the approving act itself a rating?) Now consider all the rest of the content in the network that they’ve never seen or heard of. It is all essentially unrated as far as you are concerned.

That is solvable by traversing enough connections, but the amount of data quickly becomes huge. Plus the entire network graph of social connections for the web of trust is a potentially huge privacy problem in itself.
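
To put rough numbers on the traversal cost: with an average of d trusted contacts per person, the number of profiles a client might have to fetch grows roughly as d^k with traversal depth k. A back-of-envelope sketch (the 150 is an assumed Dunbar-ish contact count; real graphs overlap heavily, but the trend holds):

```rust
// Growth of the reachable set in a web of trust, ignoring overlap.
fn main() {
    let d: u64 = 150; // assumed average number of trusted contacts
    for k in 1..=4u32 {
        println!("depth {}: ~{} profiles to fetch", k, d.pow(k));
    }
    // depth 3 is already ~3.4 million; depth 4 is ~506 million.
}
```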

Lots to solve there.

3 Likes

Yeah, certainly agree on all those issues, and I spent a while arguing very similarly on a search app thread somewhere.

I think the key might be in finding ways for sites to group themselves together into indexes (reducing the amount of traversing), and then ways in which those indexes can interface with each other and can be traversed in a purposeful fashion (rough sketch at the end of this post).

Nothing against ratings as part of this, and they could make it a lot easier; the difficulty, as you mentioned, is getting ordinary people to actually rate things.
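
Here’s roughly what I have in mind for the index idea; all the names are made up and the ownership model is deliberately simplified. Each index vouches for some sites and links to a few peer indexes, and a search walks those links to a bounded depth rather than crawling everything:

```rust
use std::collections::HashSet;

/// An index that vouches for some sites and links to peer indexes.
struct Index {
    name: String,
    sites: Vec<String>, // site addresses this index vouches for
    peers: Vec<Index>,  // other indexes this one links to (owned here for simplicity)
}

/// Collect sites matching `term`, following peer links at most `depth` hops
/// and never visiting the same index twice.
fn search(index: &Index, term: &str, depth: u32, seen: &mut HashSet<String>) -> Vec<String> {
    if !seen.insert(index.name.clone()) {
        return Vec::new(); // already visited this index
    }
    let mut hits: Vec<String> = index
        .sites
        .iter()
        .filter(|s| s.contains(term))
        .cloned()
        .collect();
    if depth > 0 {
        for peer in &index.peers {
            hits.extend(search(peer, term, depth - 1, seen));
        }
    }
    hits
}
```

The bounded depth plus the “no revisits” set is what would keep this cheaper than a full web-of-trust traversal.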

2 Likes

This thread is worrisome, especially the concept of “bully groups” roaming around and trying to deplatform and identify specific individuals. If that’s actually going to be possible, the project is dead in the water, I think. If people can pick and choose what kind of data they store because it’s all labeled/tagged, the project is also dead in the water. On the “clearnet” I’ve seen plenty of people deplatformed by hysterical mobs or individual sour people. If deplatforming is possible, people will try to do it for ideological or “bad” reasons.

If a service is useful, image doesn’t matter. People will use it, including corporations and governments. What anybody’s mom is comfortable using is irrelevant, because I bet most older people will never be willing to transition. At the very least it’s not worth forcing a Google-approved type of experience. It’s scary that anybody here would support that.

Whitelisting the services you see would be far, far, far better than the ability to blacklist.

7 Likes

I tend to agree with you. I think giving power to groups of people to filter is a somewhat scary proposition. Even if you subscribe to filter groups to accomplish this, it brings into question who these groups are and what their agenda is. Part of the problem we have now is that everyone has started to get locked into their own echo chambers. I think the prospect of a more open and free discourse could help assuage some of the current societal issues.

On the other hand, I do have similar concerns to others here about things like CP. There is a line where things go from morally gray to objectively wrong. There is a difference between free speech and actual harm. I’m not sure where that line is, and that is a large part of the problem, but the network can’t have stuff like CP floating around freely on it and succeed. It just can’t, and nor should we accept that as a reality. Perhaps there should be a non-profit specifically for tagging universally illegal things like CP, human trafficking, snuff films, or anything that causes direct (non-speech) harm to people.

4 Likes

I’m pretty sure that such non-profits would spring up naturally. As developers, I feel like our role is simply to provide them with the underlying tools (granular rating infrastructure) to facilitate their efforts.

2 Likes

I got interested in SAFE a couple of years ago and back then I don’t remember seeing as much of this sentiment. What appealed to me was the idea of everything being allowed and staying forever. If that’s not the case, there’s just no point in my opinion. If neo-nazis can’t trust it, I can’t either. If neo-nazis can trust it, that feels very safe to me. I don’t carry any abstract notion of my moral superiority that would make me more immune to censorship. Most people do though. I’m tired of being controlled by the whims of the majority. SAFE network must be safe even from its own creators.

This is the attitude I remember seeing back then. Nobody should be able to pick and choose what can be on the network and accessed by those who want to. Nobody should get to decide where that line is.

9 Likes

@horace I’m with you. My view is that: anyone can put/publish. Anyone can rate. Anyone can filter.

Now, with a rating/filtering system a danger could arise that certain filter templates become somehow “mandatory”. Or people could attempt to game the ratings themselves.

But I think that something is better than nothing, and at least having the infrastructure in place to rate/filter provides a counter-argument to the “SAFE Network is just for bad people” narrative. Keep in mind also that an individual can set the filters to view EVERYTHING, apply a single filter, or reverse the filter logic. I.e., for a given criterion, say profanity, on a scale of 1…5 where 1 is contains-no-profanity: Jan’s filter could be set to profanity <= 1, meaning only show me content without any profanity. Bob’s filter could be set to >= 3, meaning only show me content with moderate to extreme profanity. And Jim’s could be *, meaning show me everything, including unrated content.
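
Written out as code, just to show how little machinery this needs (the enum and the unrated-content policy are my own assumptions, not anything specified for the network):

```rust
/// One user's filter on the hypothetical "profanity" criterion
/// (scale 1..=5, where 1 = contains-no-profanity).
enum ProfanityFilter {
    AtMost(u8),  // Jan: <= 1, only squeaky-clean content
    AtLeast(u8), // Bob: >= 3, only moderate-to-extreme profanity
    Everything,  // Jim: *, everything, including unrated content
}

/// Decide whether to show a piece of content given its (optional) rating.
fn show(filter: &ProfanityFilter, profanity_score: Option<u8>) -> bool {
    match (filter, profanity_score) {
        (ProfanityFilter::Everything, _) => true,
        (_, None) => false, // assumed policy: unrated stays hidden unless you opt in
        (ProfanityFilter::AtMost(max), Some(s)) => s <= *max,
        (ProfanityFilter::AtLeast(min), Some(s)) => s >= *min,
    }
}

fn main() {
    let mildly_rude = Some(2u8);
    assert!(!show(&ProfanityFilter::AtMost(1), mildly_rude));  // Jan skips it
    assert!(!show(&ProfanityFilter::AtLeast(3), mildly_rude)); // too tame for Bob
    assert!(show(&ProfanityFilter::Everything, None));         // Jim sees everything
}
```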

4 Likes

How would that be enforced, though? Would everyone be required to rate what they publish? Would groups go around rating every single thing (probably not even feasible)? What scares me most is the idea of the autonomous network choosing what can and can’t stay, especially based on human input.

If people want a walled garden, the SAFE network could accommodate that. Anything within a subnetwork/service could be monitored, just like how websites work now, or how AOL did. Things would have to be published into that service to appear in it. Within the larger network, though, I think no such thing should exist, especially with how data is stored. If data can be labeled and those labels are recognized by the network, wouldn’t data be deletable? Or would people be able to choose not to store anything above a certain “profanity level”?

2 Likes

Enforced? It wouldn’t be. No force.

Would everyone be required to rate what they publish?

No, though it’s an interesting idea to enable/encourage self-rating at time of publication. It can’t really be prevented anyway.

Would groups go around rating every single thing

Possibly. Various types of rating gaming would be tried.

What scares me most is the idea of the autonomous network choosing what can and can’t stay, especially based on human input.

I’ve tried to be as clear as possible that this is not about what can stay, only about empowering individuals to include/exclude things they may not wish to view, and/or to get an idea of community opinion about a given piece of content before viewing it. Think of this forum if bodies of comments could be selectively hidden based on (a) your personal filtering preferences operating on (b) the community’s ratings. But you could override the filter at any time. And it is not the forum software hiding content for everyone, but rather your personal agent software doing it on your behalf, and unfiltering as you see fit.

wouldn’t data be deletable?

ImmutableData cannot be deleted, period. What I am describing is an opt-in rating/filtering system on top of that. In theory, such a system could be implemented by any third party.

4 Likes

This is what I was hoping we could get at in this discussion. Not so much how we can make sure there is no bad content (or maybe content I think is bad but you think is good), but how we can avoid the narrative of being the deepest, darkest web and thus attracting lots of bad people. Like if you research Tor, you quickly get informed that it is chock-full of criminals. Then maybe you say “I am not that kind of person, so I won’t use it.” Thus there is a feedback loop that leads to a higher and higher concentration of bad people in the solution.

1 Like

I agree that everyone needs to be able to trust it, and that censorship (by the network itself) is not a viable or good solution. As far as I’m aware, the developers are not creating a system that can discriminate like that. I also don’t think anyone here is calling for that, or at least I hope not.

There is a difference between what is allowed on the network (everything) and what would be an acceptable starting point for your average user (not everything). Having your average new user do a network search and suddenly find themselves in a bunch of illegal activity would not be ideal. If the network really is the safe place we all hope it is, those peddling illegal content would most likely want to label/tag their own stuff anyway, as keeping it out of the mainstream is in their best interests as well, and would help others find the content they were looking for.

2 Likes