How does SAFE stop censorship?

How exactly does the SAFE Network stop or reduce censorship?

Looking for ideas, thoughts and general discussion: I am writing a new article on this topic.

Here is a good starter topic: Things That Would Not Have Happened On Safe

7 Likes

Quite a few ways I think, so I probably won’t get them all:

  • all traffic is encrypted, so it is very hard to filter based on content
  • no servers or central DNS, so there is no way to block a server/website (e.g. Wikipedia) with filters, DDoS etc.
  • traffic is routed anonymously, i.e. the client IP is scrubbed at the first hop (rough sketch below), and after that who knows where it goes or what is being sent or received
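
To make that third bullet a bit more concrete, here is a rough illustration only (the Message type and function are invented for this post, not the real routing code) of what “scrubbing the client IP at the first hop” means:

```rust
// Illustrative sketch only: the first relay node drops the client's address
// before forwarding, so nodes deeper in the route only ever see the previous
// hop, never the originator.
use std::net::IpAddr;

struct Message {
    payload: Vec<u8>,          // already encrypted end to end by the client
    source_ip: Option<IpAddr>, // known only to the first hop
}

/// What the first hop does before passing the message on.
fn scrub_at_first_hop(mut msg: Message) -> Message {
    msg.source_ip = None;
    msg
}

fn main() {
    let from_client = Message {
        payload: vec![0xde, 0xad, 0xbe, 0xef],
        source_ip: Some("203.0.113.7".parse().unwrap()),
    };
    let forwarded = scrub_at_first_hop(from_client);
    // Every hop after the first sees no client address at all.
    assert!(forwarded.source_ip.is_none());
    println!("forwarded {} encrypted bytes", forwarded.payload.len());
}
```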

Anyone think of other features?

10 Likes

So, in terms of this: say I create a user-generated news website / social network but I don’t like group XYZ. On the current clearnet I can just block their posts, topics, discussions, comments and replies; I can shut down, block or delete their accounts; and I can alter my website’s algorithm so that the content produced and shared by group XYZ is not easily found. How does this differ on SAFE?

You can’t do any of that unless you can identify the individuals and go after them, i.e. physically identify and control the person creating the content, then make them take the content down, or identify their devices, hack them and take over their accounts.

Although, if I understood him correctly, @dirvine has recently said it won’t be possible to take down content completely, because old versions will always be accessible.

4 Likes

The website/APP runs on the person’s computer so technically they can bypass any typical censorship at the APP level.

Although in theory, if you create a website then you have already created the MDs where the data is stored, and thus you could delete stuff since you are the owner of those MDs. This is potentially how some forums will be written, so that the forum owner can actually delete posts, since the owner of an MD can delete its entries.

Now the anti-censorship features include what @happybeing said, plus:

  • immutable files cannot be deleted, thus no censorship by deleting files
  • immutable files can be read by anyone at any time, so no censorship by blocking users from accessing certain files
  • APPs will be written by someone to act as a nanny and filter content at the APP level (i.e. on the user’s computer, not on the network)
  • MDs can be censored by the owner of the data or the MD, unless the permissions were set on creation to prevent this (rough sketch below)
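
To make that ownership/permission point concrete, a minimal sketch; the types are invented stand-ins for the real Mutable Data / Immutable Data APIs, assuming permissions really are fixed at creation:

```rust
// Sketch only: the owner of an MD can delete entries, but only if the Delete
// permission was granted when the MD was created; immutable data has no
// delete operation at all.
use std::collections::BTreeMap;

#[derive(PartialEq)]
enum Permission { Insert, Update, Delete }

struct MutableData {
    owner: String,
    owner_permissions: Vec<Permission>, // fixed when the MD is created
    entries: BTreeMap<String, Vec<u8>>,
}

impl MutableData {
    /// Deleting an entry only succeeds for the owner, and only if the MD was
    /// created with the Delete permission. Otherwise the content stays put.
    fn delete_entry(&mut self, requester: &str, key: &str) -> Result<(), &'static str> {
        if requester != self.owner {
            return Err("only the owner can delete");
        }
        if !self.owner_permissions.contains(&Permission::Delete) {
            return Err("Delete permission was not granted at creation");
        }
        self.entries.remove(key).map(|_| ()).ok_or("no such entry")
    }
}

/// Immutable data, by contrast, exposes no delete at all once stored.
struct ImmutableData {
    content: Vec<u8>, // addressed by its hash; anyone can read it, nobody can remove it
}

fn main() {
    let mut forum_post_md = MutableData {
        owner: "forum_owner".into(),
        owner_permissions: vec![Permission::Insert, Permission::Update], // no Delete
        entries: BTreeMap::from([("post-1".into(), b"hello".to_vec())]),
    };
    // The forum owner cannot censor this MD: Delete was never granted.
    assert!(forum_post_md.delete_entry("forum_owner", "post-1").is_err());

    // Immutable data simply has nothing like delete_entry to call.
    let _snapshot = ImmutableData { content: b"published forever".to_vec() };
}
```
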
3 Likes

I’m still waiting for it to appear in the API, @neo, but a few days ago I understood @dirvine to say that all MD versions remain accessible, and that it was just an omission from the API that we can’t access them now. Which I believe means even MDs, and therefore websites or any data at all, can’t be deleted by the owner or anyone else.

See:

4 Likes

That is going to represent a lot of space to keep all these versions. A database can have (relatively) rapidly changing data. I was under the understanding that versions would only be kept if you set that in the MD. In other words, some MDs would allow versioning and some would not.

If every MD had all versions kept then there would be no anonymity with SAFEcoin, since the transaction history would be kept for all SAFE coins. Just trawl back through the versions.

Actually this is worse than a blockchain, since you are keeping a history for all MD data and not just ledger data as a blockchain does. Typically data is kept in MDs because it is expected to change, whereas many files are not expected to change.

Or was it that, if the versioning bit is set, then the API will be able to get at all the versions?

4 Likes

So-called deplatforming could still happen to some degree, but it will be different from what we have today. Today there are some primary social data silos (Facebook, Twitter, YouTube, etc.), and if you get the boot, it becomes hard for you to communicate with others, or at least to communicate with a large audience.
On SAFE there aren’t such data silos. So if, for example, Patter decides it doesn’t want people to see nasty words, maybe Patter users could no longer see your posts, but it is very easy for you and your audience to switch over to NotPatter without losing any data. The data you generate is always under your control, and users can choose what apps they want to use to help them create new data and visualize existing data on the network.

3 Likes

Like the many-worlds interpretation of data? So you can’t erase even if you wanted to? No statelessness?

1 Like

But there could be search silos. If Google (for one example) built a search system for the Safe Network, this would be a way to censor websites…

IMO we need to have an open-source search system (pluggable algorithms?) for the Safe Network to prevent this sort of censorship.

In homage to the once-great Google (and not to be outdone), I suggest it be named “Graham-search” lol…

2 Likes

He said that an API was missing, but this doesn’t mean that all MD versions will be accessible.

He also said this:

A new RFC to definitively specify appendable only data is required.

This RFC can be about a new kind of object.

But I think that MD could be reused to implement this feature at the app level without any core modifications, by interpreting each entry as a different version of the same object (the version number would be stored in the entry key). This solution would even ensure that the history has not been tampered with, simply by checking that all entry keys are successive numbers and that all entry versions are 0. The limitation is that only 1000 versions per object would be allowed, but this is good enough for many purposes.
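
As a rough, in-memory sketch of that scheme (the types and the 1000-entry limit here are my own simplified stand-ins, not the real SAFE structures): each new version is appended as a fresh entry keyed by the next version number, and the integrity check is just that the keys run 0..n and no entry was ever updated in place.

```rust
// Sketch of using an MD's entries as an append-only version history.
use std::collections::BTreeMap;

const MAX_ENTRIES: u64 = 1000; // MD entry limit mentioned above

struct Entry {
    value: Vec<u8>,
    entry_version: u64, // stays 0 if the entry was never overwritten in place
}

#[derive(Default)]
struct MutableData {
    entries: BTreeMap<u64, Entry>, // key = version number of the object
}

impl MutableData {
    /// Append a new version of the object as a fresh entry keyed by the next
    /// version number. Existing entries are never updated, so entry_version
    /// remains 0 throughout.
    fn append_version(&mut self, value: Vec<u8>) -> Result<u64, &'static str> {
        let next = self.entries.len() as u64;
        if next >= MAX_ENTRIES {
            return Err("version limit reached (1000 entries per MD)");
        }
        self.entries.insert(next, Entry { value, entry_version: 0 });
        Ok(next)
    }

    /// History is untampered if the keys are the successive numbers 0..n and
    /// every entry_version is still 0 (nothing was ever rewritten in place).
    fn verify_history(&self) -> bool {
        self.entries
            .iter()
            .enumerate()
            .all(|(i, (key, entry))| *key == i as u64 && entry.entry_version == 0)
    }
}

fn main() {
    let mut object = MutableData::default();
    object.append_version(b"v0: first draft".to_vec()).unwrap();
    object.append_version(b"v1: edited draft".to_vec()).unwrap();
    assert!(object.verify_history());
    println!("versions stored: {}", object.entries.len());
}
```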

4 Likes

I agree. The reason for the RFC, though, is to cement this and remove all other bits from the MD data type. Also to ensure the ramifications and benefits are clearly articulated technically, so that the marketing team can then get the message out clearly and without ambiguity as well. It is a simple thing with very large consequences and will need a lot of mass-market education, or whatever that means.

The fact that data will be stored forever is pretty big, but it does not mean private stuff will somehow become public. If people make mistakes then they will need to “own it”, and of course they can change their mind etc., but the illusion that we delete Internet data will vanish with this network. I like that, but it is a big message that we devs will need a ton of help explaining. I can just imagine the arguments here; they will be epic, but necessary.

It will lead to being able to use advanced tools, which may or may not already exist, to allow people to mask out some data, such as data they do not want to see. These filters will hopefully be client-based AI filters. This will be one example of the thousands of debates that will rage, but a well-handled debate is probably some of the best marketing we will have. If we accept that marketing is getting the message out and explaining the vision, then we will all be in good shape.

tl;dr This will be one of the first messages to say: here is your freedom and here is the responsibility that goes along with it. Simple, honest and undeniable.

6 Likes

Thanks.

Some questions on these points so I can wrap my monkey brain around it as I am not a programmer or network engineer.

What do you all mean by MD? Mutable data? Or Management Domain? Or something else?

If these app nannies or forum owners decide to, can’t they just delete user content, given that they manage the app/website and its infrastructure to a point?

2 Likes

This. It’s a data object of up to 1MB that can have multiple fields in it. And it seems version copies will be a limited thing. Maybe just each field being the latest version, where versions are kept.

My thoughts on this are that it would be like the Firefox addon “Adblock”, which uses lists to know what to block. The APP could be a custom client or an addon to the SAFE browser, and would use one or more lists to block data objects/fields. The lists would be maintained by different people/groups.
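
Something along these lines, running purely on the client (the address type and list format are made up for the example; none of this is a real SAFE API):

```rust
// Client-side blocklist filter sketch: the network still serves everything,
// the client simply chooses not to display items found on a subscribed list.
use std::collections::HashSet;

type ContentAddress = [u8; 32]; // e.g. the address of a data object

struct ClientFilter {
    blocked: HashSet<ContentAddress>, // union of all subscribed lists
}

impl ClientFilter {
    fn new(lists: &[Vec<ContentAddress>]) -> Self {
        let blocked = lists.iter().flatten().copied().collect();
        ClientFilter { blocked }
    }

    /// Keep only the items whose address is not on any subscribed blocklist.
    fn visible<'a>(&self, items: &'a [(ContentAddress, String)]) -> Vec<&'a String> {
        items
            .iter()
            .filter(|(addr, _)| !self.blocked.contains(addr))
            .map(|(_, body)| body)
            .collect()
    }
}

fn main() {
    let some_groups_list: Vec<ContentAddress> = vec![[0u8; 32]];
    let filter = ClientFilter::new(&[some_groups_list]);
    let items = vec![
        ([0u8; 32], "hidden by this client".to_string()),
        ([1u8; 32], "shown as normal".to_string()),
    ];
    // Another client with no lists (or different lists) sees everything.
    println!("{} of {} items shown", filter.visible(&items).len(), items.len());
}
```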

2 Likes

Maybe we are not on the same wavelength.

I am suggesting that blocking content will be natural on this network just as on any other (although it is less prone to censorship), because how else do you stop spam, scams and, in general, stuff you don’t want on your site if you are running a user-generated content site?

2 Likes

A spammer will have to pay for each and every mail they send out. So it becomes expensive for spammers, whereas on the normal internet it costs them virtually nothing to send each email.

If you don’t want something on your site then don’t put it there. You are in charge of your site’s content. If you allow others to place content on your site via comments or an advert system, then you need a system to approve the comments and only display the approved ones. You could even have your APP allow users to vote comments up/down, and when the vote goes below a certain amount your APP never shows that comment. For an advert system you have to trust that system.
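
For what it’s worth, that up/down-vote idea could be as simple as this client-side sketch (the types and threshold are invented for the example; hidden comments still exist on the network and another APP could show them):

```rust
// Sketch: the APP declines to render comments whose score falls below a
// threshold; nothing is deleted from the network.
struct Comment {
    author: String,
    body: String,
    up_votes: i64,
    down_votes: i64,
}

const HIDE_BELOW: i64 = -5; // app-chosen threshold, purely client-side

fn shown(comments: &[Comment]) -> Vec<&Comment> {
    comments
        .iter()
        .filter(|c| c.up_votes - c.down_votes >= HIDE_BELOW)
        .collect()
}

fn main() {
    let comments = vec![
        Comment { author: "alice".into(), body: "useful reply".into(), up_votes: 3, down_votes: 0 },
        Comment { author: "spammer".into(), body: "buy my coin".into(), up_votes: 0, down_votes: 12 },
    ];
    // Only the first comment is rendered; the second still exists on the
    // network and a different APP could choose to show it.
    println!("{} of {} comments shown", shown(&comments).len(), comments.len());
}
```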

That is the point: in SAFE the user owns their comments, and the best you can do is have your site/APP do filtering based on certain criteria. But if someone else modifies your site/APP into a new site/APP, then they could have all the comments show up.

For the users themselves, they could do as I said above and have an adblock (or malware-blocker) style of APP or client and do their own filtering.

4 Likes

@goindeep I would describe ‘stopping spam on your own website’ etc as content management rather than censorship.

In the former, yes, I can change what appears on my page at safe://blogtastic: change it, edit out things I no longer want, stop linking to certain comments posted by others, etc. I do control what people see in that way, but that’s not necessarily censorship unless, by doing so, I can stop others accessing the comments I no longer show on my website. If the content still exists somewhere, and someone with a link to it can still view it, then it is still accessible.

So, can I erase something (or can somebody else block access to it) so that nobody with a direct link can see content I don’t want them to see? That’s what I take as censorship, and this is where we need some clarification.

From what @neo and David have said, it looks like people (website owners, publishers etc.) will be able to decide whether content they publish will always be available or not, by choosing to set/unset a flag at the point it is published, which will determine whether or not previous versions will be accessible (forever). Sort of like deciding whether or not it gets into the Wayback Machine.

However, they won’t be able to prevent anything they publish from being preserved by somebody else, who can grab it and store it in a SAFE wayback machine; once there, it is there forever, with no way to take it down or censor it except at the app level (e.g. by adding a filter to a popular search engine). But that just lowers its profile: it can’t be removed, and people will always be able to find it and share links to it that can’t be blocked.
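
A rough sketch of that “SAFE wayback machine” idea; both helper functions here are hypothetical stand-ins rather than any real client API:

```rust
// Sketch: anyone can fetch published content and re-store their own copy as
// immutable data, which nobody (including the original publisher) can later
// remove. fetch_published and put_immutable are stand-ins for this example.

/// Stand-in for fetching the current contents of a published site/file.
fn fetch_published(_url: &str) -> Vec<u8> {
    b"<html>snapshot of safe://blogtastic</html>".to_vec()
}

/// Stand-in for storing data immutably; the returned address is derived from
/// the content, so the same bytes always map to the same address.
fn put_immutable(content: &[u8]) -> u64 {
    // toy content hash, just for the sketch
    content
        .iter()
        .fold(1469598103934665603u64, |h, b| (h ^ *b as u64).wrapping_mul(1099511628211))
}

fn main() {
    let snapshot = fetch_published("safe://blogtastic");
    let archive_address = put_immutable(&snapshot);
    // The archiver can now share this address; the original site owner has
    // no way to delete or alter what is stored there.
    println!("archived at immutable address {:#x}", archive_address);
}
```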

8 Likes

I’m linking to this thread where this was discussed earlier:

You are saying “it looks like”. I’ve probably missed something, but it’s still very much unclear to me what the decision will be or even where it’s leaning. Maybe this question of “Forever” deserves its own thread.

2 Likes

That question will be one of the very things that define us: a fundamental and honest approach to life moving forward. Digital info deletion and rewriting history are both wrong; the former is misinformed and the latter is dangerous. I am 100% sure we can do better, be honest and wake up the masses with this message and of course the rest of the network fundamentals.

14 Likes

I’m starting to get confused, so please set me straight. Are we talking about the persistence of an MD or ImD? MD versions? @neo’s temp data flag idea?

I thought persistent versioning was what ImD was for… KISS? If an MD is immortalized, isn’t it just an ImD at that point?

4 Likes