Bad people using SAFE network (a group dynamics perspective)

Yeah, certainly agree on all those issues, and I spent a while arguing very similarly on a search app thread somewhere.

I think the key might be in finding ways for sites to group themselves together into indexes (reducing the amount of traversing), and then ways in which those indexes can interface with each other and can be traversed in a purposeful fashion.

Nothing against ratings as part of this, and they could make it a lot easier; the difficulty, as you mentioned, is getting ordinary people to actually rate things.

2 Likes

This thread is worrisome, especially the concept of "bully groups" roaming around and trying to deplatform and identify specific individuals. If that's actually going to be possible, the project is dead in the water, I think. If people can pick and choose what kind of data they store because it's all labeled/tagged, the project is also dead in the water. On the "clearnet" I've seen plenty of people deplatformed by hysterical mobs or individual sour people. If deplatforming is possible, people will try to do it for ideological or "bad" reasons.

If a service is useful, image doesn't matter. People will use it, including corporations and governments. What anybody's mom is comfortable using is irrelevant, because I bet most older people will never be willing to transition. At the very least, it's not worth forcing a Google-approved type of experience. It's scary that anybody here would support that.

Whitelisting the services you see would be far, far, far better than the ability to blacklist.

7 Likes

I tend to agree with you. I think giving power to groups of people to filter is a somewhat scary proposition. Even if you subscribe to filter groups to accomplish this, it brings into question who these groups are and what is their agenda. Part of the problem we have now is everyone has started to get locked into their own echo chambers. I think having the proposition of a more open and free discourse could help assuage some of the current societal issues.

On the other hand, I do have similar concerns to others here about things like CP. There is a line where things go from morally gray to objectively wrong. There is a difference between free speech and actual harm. I'm not sure where that line is, and that is a large part of the problem, but the network can't have stuff like CP free-floating around on it and succeed. It just can't, nor should we accept that as a reality. Perhaps there should be a non-profit specifically for tagging universally illegal things like CP, human trafficking, snuff films, or anything that causes direct (non-speech) harm to people.

4 Likes

I'm pretty sure that such non-profits would spring up naturally. As developers, I feel like our role is simply to provide them with the underlying tools (granular rating infrastructure) to facilitate their efforts.

2 Likes

I got interested in SAFE a couple of years ago and back then I don't remember seeing as much of this sentiment. What appealed to me was the idea of everything being allowed and staying forever. If that's not the case, there's just no point in my opinion. If neo-nazis can't trust it, I can't either. If neo-nazis can trust it, that feels very safe to me. I don't carry any abstract notion of my moral superiority that would make me more immune to censorship. Most people do though. I'm tired of being controlled by the whims of the majority. The SAFE network must be safe even from its own creators.

This is the attitude I remember seeing back then. Nobody should be able to pick and choose what can be on the network and accessed by those who want to. Nobody should get to decide where that line is.

9 Likes

@horace I'm with you. My view is: anyone can put/publish. Anyone can rate. Anyone can filter.

Now, with a rating/filtering system, a danger could arise that certain filter templates become somehow "mandatory". Or people could attempt to game the ratings themselves.

But I think that something is better than nothing, and at least having the infrastructure in place to rate/filter provides a counter-argument to the "SAFE Network is just for bad people" narrative. Keep in mind also that an individual can set the filters to view EVERYTHING for all filters, or a single filter, or can reverse the filter logic. i.e., for a given criterion, say profanity, on a scale of 1..5 where 1 is contains-no-profanity: Jan's filter could be set to profanity <= 1, meaning only show me content without any profanity. Whereas Bob's filter could be set to >= 3, meaning only show me content with moderate to extreme profanity, and Jim's could be *, meaning show me everything, including unrated content.
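As a rough illustration, the Jan/Bob/Jim settings above could be modeled as simple predicates over per-content ratings. Everything here (the `make_filter` helper and the rule format) is a hypothetical sketch, not anything the network actually provides:

```python
# Hypothetical sketch of per-user filter predicates over rating criteria.
# A rule maps a criterion to an allowed range; "*" means show everything,
# including unrated content.

def make_filter(rules):
    """rules: {criterion: (op, threshold)} or {criterion: "*"}."""
    def allows(ratings):
        for criterion, rule in rules.items():
            if rule == "*":
                continue  # wildcard: this criterion never hides anything
            value = ratings.get(criterion)
            if value is None:
                return False  # unrated content is hidden unless wildcarded
            op, threshold = rule
            if op == "<=" and not value <= threshold:
                return False
            if op == ">=" and not value >= threshold:
                return False
        return True
    return allows

jan = make_filter({"profanity": ("<=", 1)})  # only profanity-free content
bob = make_filter({"profanity": (">=", 3)})  # only moderate-to-extreme
jim = make_filter({"profanity": "*"})        # everything, rated or not

post = {"profanity": 4}  # community rated this content as quite profane
```

Note that the choice to hide unrated content for non-wildcard filters is itself a policy decision; one could just as reasonably default to showing it.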

4 Likes

How would that be enforced, though? Would everyone be required to rate what they publish? Would groups go around rating every single thing (probably not even feasible)? What scares me most is the idea of the autonomous network choosing what can and can't stay, especially based on human input.

If people want a walled garden, the SAFE network could accommodate that. Anything within a subnetwork/service could be monitored, just like how websites work now, or like AOL. Things would have to be published to get into that service. Within the larger network, though, I think any such thing shouldn't exist, especially with how data is stored. If data can be labeled and those labels are recognized by the network, wouldn't data be deletable? Or would people be able to choose not to store anything above a certain "profanity level"?

2 Likes

enforced? it wouldn't be. no force.

Would everyone be required to rate what they publish?

no, though it's an interesting idea to enable/encourage self-rating at time of publication. Can't really be prevented anyway.

Would groups go around rating every single thing

possibly. various types of rating gaming would be tried.

What scares me most is the idea of the autonomous network choosing what can and can't stay, especially based on human input.

I've tried to be as clear as possible that this is not about what can stay, only about empowering individuals to include/exclude things they may not wish to view, and/or to get an idea of community opinion about a given piece of content before viewing it. Think of this forum if bodies of comments could be selectively hidden based on (a) your personal filtering preferences operating on (b) the community's ratings. But you could override the filter at any time. And it is not the forum software hiding content for everyone, but rather your personal agent software doing it on your behalf, and unfiltering as you see fit.
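A rough sketch of what such a personal agent might do. All names and the rating scheme here are invented for illustration; this is not actual forum or SAFE code:

```python
# Hypothetical client-side agent: the forum serves all comments; the user's
# own software decides which bodies to collapse, and the user can always
# override per comment to see the original text.

def render(comments, community_ratings, threshold, overrides=frozenset()):
    """Collapse comment bodies whose community rating exceeds the user's
    threshold, unless the user has explicitly overridden that comment."""
    out = []
    for cid, body in comments:
        score = community_ratings.get(cid, 0)  # unrated comments score 0
        if score > threshold and cid not in overrides:
            out.append((cid, "[hidden - click to show]"))
        else:
            out.append((cid, body))
    return out

comments = [(1, "hello"), (2, "@#$%!"), (3, "nice post")]
ratings = {2: 5}  # the community rated comment 2 as highly objectionable

view = render(comments, ratings, threshold=3)              # comment 2 hidden
unhidden = render(comments, ratings, threshold=3, overrides={2})  # user opts in
```

The key property is that `render` runs on the reader's side: the same comment list is delivered to everyone, and only the local view changes.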

wouldn't data be deletable?

ImmutableData cannot be deleted, period. What I am describing is an opt-in rating/filtering system on top of that. In theory, such a system could be implemented by any third party.

4 Likes

This is what I was hoping we could get at in this discussion. Not so much "how can we make sure there is no bad content" (or maybe content I think is bad but you think is good), but "how can we avoid the narrative of being the deepest, darkest web" and thus attracting lots of bad people. For example, if you research Tor, you quickly get informed that it is chock-full of criminals. Then maybe you say, "I am not that kind of person, so I won't use it." Thus there is a feedback loop that leads to a higher and higher concentration of bad people in the solution.

1 Like

I agree that everyone needs to be able to trust it, and that censorship (by the network itself) is not a viable or good solution. As far as I'm aware, the developers are not creating a system that can discriminate like that. I also don't think anyone here is calling for that, or at least I hope not.

There is a difference between what is allowed on the network (everything) and what would be an acceptable starting point for your average user (not everything). Having your average new user do a network search and suddenly find themselves in a bunch of illegal activity would not be ideal. If the network really is the safe place we all hope it is, those peddling illegal activity would most likely want to label/tag their own stuff anyway, as keeping it out of the mainstream is in their best interests as well, and would help others find the content they were looking for.

2 Likes

yes, I really agree with this. As long as the starting point is not getting pushed what I will call "questionable content", I think we might be OK. Then when my mom uses it for the first time, her head doesn't explode. But at the same time, if you want to have an area to plot the overthrow of the local government… well, you could get to that, but it would take a slight bit of work. So the default of not putting in that effort is a happy, friendly place.

So I envision filter templates. A template would be a list of filter settings across all available rating criteria.

Let's say that I tweak and customize my filters exactly to my liking. I could then export those filters as a template for others to use if they wish, and those others could further customize and share, etc.

OK, so given this generalized facility, it's easy to see how default filters could be created for each new account (SafeID) using one of these templates. Or even, when creating a SafeID, one could perhaps specify a FilterID (SafeUrl) to start with, if one doesn't want the default.
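The export-and-reuse flow above might look something like this toy sketch. The `export_template`/`new_account` helpers and the default template contents are invented for illustration; nothing here is a real SAFE API:

```python
# Hypothetical filter-template flow: tweak personal filters, export them as
# a shareable template, and apply a template as the default for a new account.

DEFAULT_TEMPLATE = {"profanity": ("<=", 2), "violence": ("<=", 2)}

def export_template(filters):
    """Freeze a user's current filter settings into a shareable template."""
    return dict(filters)

def new_account(filter_template=None):
    """Create an account; start from a shared template if one is given,
    otherwise fall back to the network-wide default."""
    template = filter_template if filter_template is not None else DEFAULT_TEMPLATE
    return {"filters": dict(template)}  # copy, so later tweaks stay personal

mine = {"profanity": "*", "language": ("<=", 1)}
shared = export_template(mine)       # publish for others to adopt
acct = new_account(shared)           # a friend starts from my template
acct["filters"]["violence"] = ("<=", 3)  # ...then customizes further
```

Because accounts copy the template rather than referencing it, later edits by either party don't silently change anyone else's filters; a live-referencing design (subscribe to a FilterID) would be the other obvious choice.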

3 Likes

Agreed. As was stated, it is basically like a modern ad blocker or Pi-hole. The problem becomes: how do you educate non-tech-savvy people about what this all means, and present them with a reasonable starting point, without also favoring some kind of politicized or overbearing censorship? Are we going to rely on friends and family to present new users with a "safe" invite link which incorporates their filters when the new users accept the invite?

I'd be happy to start a non-profit to attempt to find and label/tag what I would consider "universal illegal activity" without any political bent, but how do people find and differentiate between different filters when signing up?

3 Likes

That's a pretty cool idea, actually. Filtering by referral.

It incorporates some web-of-trust flavor into the filtering, at least at the very beginning.

If I were designing such a system, I would consider including spoken language as a rating/label criterion, as this could also help set up a filtered view of the network in the user's own language.

Keep in mind of course that anyone can create new Safe identities at any time, like reddit throwaway accounts. Possibly each new ID could use the FilterID from the most recently used prior ID, unless otherwise specified. Just brainstorming here.

edit: the thing I especially like about this idea is that it helps limit the power of defaults. Software defaults can be extremely powerful because most people tend not to change them. But if a few power users do, they will likely also be the most active in sharing referral links, etc. So it gets a wider variety of filters out there in the wild.
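The throwaway-ID inheritance brainstorm above could be sketched like this. The `IdentityManager` class and the `safe://` filter URLs are hypothetical, purely to make the rule concrete:

```python
# Hypothetical identity manager: each new throwaway ID reuses the FilterID
# of the most recently created prior ID, unless the caller specifies one.

class IdentityManager:
    def __init__(self, default_filter_id):
        self.default_filter_id = default_filter_id
        self.identities = []  # ordered: last entry is the most recent ID

    def new_identity(self, name, filter_id=None):
        if filter_id is None:
            if self.identities:
                filter_id = self.identities[-1]["filter_id"]  # inherit
            else:
                filter_id = self.default_filter_id  # very first identity
        ident = {"name": name, "filter_id": filter_id}
        self.identities.append(ident)
        return ident

mgr = IdentityManager(default_filter_id="safe://filters/default")
a = mgr.new_identity("main", filter_id="safe://filters/friend-referral")
b = mgr.new_identity("throwaway1")  # inherits the referral filter from "main"
c = mgr.new_identity("research", filter_id="safe://filters/unfiltered")
d = mgr.new_identity("throwaway2")  # now inherits from "research"
```

One consequence worth noting: inheritance from the most recent ID means an explicitly unfiltered identity changes what later throwaways start with, which may or may not be the desired behavior.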

3 Likes

just something I want to throw out there… I was watching this video about ancient cave art. How did these people, without filters, make sure that their cave was good content that people wanted to keep going back to and adding more good content to?

Of course, maybe some of it was offensive to others. It's hard to say that some of those symbols whose meaning we have lost don't mean "screw you, goat lover!" I tend to think they don't, though. The reason being, they didn't censor it later by rubbing it out or something (or maybe they did, but so thoroughly that there is not even evidence of the censoring). They left it there basically forever as an immutable record.

One theory I have is that when it's truly a community collaborative effort, that kind of compels the next people to make positive content and not vandalize it. Filters can't achieve this. If anything, they encourage people to think "whatever, I have my own cave here and I am gonna paint swastikas on it." That's why I believe the most important thing is not filtering in retrospect. The most important thing is creating that positive momentum at the start.

2 Likes

Timing is powerful…

3 Likes

My biggest concern is still the current inflexible NRS system in combination with bad actors.
If nazis claim safe://annefrank, and pedophiles claim safe://mickeymouse…
They will own these popular sites forever, and there is nothing you can do.
It would cause such big reputational damage. No company or organisation wants to be associated with a network that facilitates this.

You can have the best security, but one stupid 14-year-old idiot can take your network down (figuratively).
You can't stop these people; you have to deal with them. But there should be a way to drive them away from popular NRS names to less popular places on the network.

There are a lot of discussions about alternative, more flexible NRS systems, so I hope there are at least some plans to look into these before we launch :roll_eyes:

1 Like

Interesting thread I've not had time to read fully, but it spawns a few reactions… perhaps repeats of what I've noted before…

There is an edge, but any delay in what is visible can give whatever defences arise time to act.

It's to be expected that the network in its raw form might be a few steps away from normal access.

A key advantage SAFE has is that media is set against a single location; so there is much less work, once it is resolved that something is out of bounds for a certain interest, in deciding whether that is available to the user.

Done right, the network is the base… and the association is with the applications.

The compound of defences is always stronger.

Differences of opinion can help support a consensus that drives out the minority… there's good and bad in that.

But if the network is just the core content for sites, and the header is set in a context more like real-world DNS registrations, then that is stopped. I'm not arguing for limits, but different use cases are likely to raise different interests; so, taking that as an example, if the CLI can work within the context of ye olde cgi-bin, then normal unSAFE websites could make use of the data storage and also run more flexible applications, along with a stop on anonymous registration of names… DNS hogging is a rather unnatural problem that reflects humans valuing what is familiar and simple… xors don't have the same appeal. Still, perhaps we just learn that names are not important… or don't have NRS at all… but what is a better option than domains as names? Thumbprints are as hard to remember as xors… could we do a numbers-only NRS?

1 Like

I've just skimmed this thread so may have missed it, but this issue is nuanced and difficult. So @andyypants' mom doesn't want to see boobs and he doesn't want to see goats (I feel he complains too much), and so they should easily be able to block that. Agreed.

Then, as mentioned, there's the illegal stuff, by which I mean illegal everywhere, not just in China. This area is more difficult, and is where I think something programmatic will need to be brought in (although I know this is very vague). I do agree that we need to set off on a positive footing rather than becoming swamped with illegal activity early on.

What worries me most is the propensity for directed hate speech. The difference between a network and a road or a postal system (using the "pedos drive cars" argument) is that roads and telephone lines and postal systems are mainly one-to-one, or one-to-a-few, or a-few-to-a-few. A network is many-to-many. I'm reminded of the Rwandan genocide in 1994, which was enabled largely through a radio network (few-to-many). Before it all kicked off, most people didn't know if they were Tutsi or Hutu, or if they did, it didn't matter. By the time it was over, a quarter of the population were dead.

Countering large scale propaganda is going to be our biggest battle I think.

5 Likes

I think you misunderstood what I meant by that: they wouldn't be able to deplatform anyone. The most they could do is disrupt it by spamming, which will surely be fixed in time (like, we need a solution for that anyway for the reverse; we can't have spammers post bad content on safetube or whatever).

In the meantime it would give more desirable communities time to get on the network and get more and more people to use it, so it won't turn out like some even darker onion network in people's minds.

2 Likes