Update 16 June, 2022

Well, really, the Internet was never finished either. Fortunately Safe doesn’t need to be finished before people can start using it, just certain modules working properly, and that is what is being worked on now. (Not telling you anything new here, LOL.)

What I saw over in the comnets is that the test net worked until the known DB bug killed it. So once the new DB is built into the system, it should be much more stable over the long term.

6 Likes

The problem is you can end up with a project like “Duke Nukem Forever”, or even another Theranos.

The latter is probably a somewhat harsh comparison, but sometimes things don’t pan out and the ideas behind a project are far too lofty for the available technology.

Hopefully this is not the case, but the possibility is still there that we never see a fully working network.

Crossing fingers this is not the case, but I would still not be surprised…

Out of 10 startups, 9 fail; the odds are definitely not in our favor.

1 Like

I should have made it clearer that I was actually referring to ANY project to which the Pareto “rule” is applied, not specifically THIS project: once you have done the first 80%, the remaining 20% can itself be subject to the 80-20 split, and so on asymptotically.
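As an aside, that asymptotic split is easy to quantify: after n rounds of finishing 80% of whatever remains, a fraction of 0.2^n of the original work is still outstanding. A minimal sketch, purely illustrative and not tied to any real project plan:

```python
# Illustrative only: fraction of work remaining after repeatedly
# completing 80% of whatever is left (the recursive Pareto split).
def remaining_after(rounds: int) -> float:
    """Fraction of the original work still outstanding."""
    return 0.2 ** rounds

for n in range(1, 5):
    print(f"after round {n}: {remaining_after(n):.2%} remaining")
# round 1 leaves 20%, round 2 leaves 4%, round 3 leaves 0.8%, ...
```

The remainder shrinks fast but never quite reaches zero, which is why a project can feel “almost done” indefinitely.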

I have faith David and the team will deliver for us all to use the SAFE network for our own purposes, lofty or mundane. And I believe we are getting very close now.

4 Likes

I know we have been going around this topic for years, but there has to be some moderation of Safe or it will just become a haven for criminals. Take the issue of ‘free speech’. Peter Hain, the veteran anti-apartheid and gay rights campaigner, always maintained a good line on this (and remember, at one time his sexuality was considered a crime). You should be able to say whatever you like as long as it:
(1) doesn’t threaten violence against someone
(2) doesn’t encourage or incite violence
(3) isn’t an outright lie.

Even this is quite problematic (think Ukraine: some of us may fall under (2) at the moment). Ultimately Safe will probably need a moderation board, just like Twitter, FB and the rest, that operates under broad rules and has some sort of ‘report’ function.

I know this idea will generate howls of ‘no’ but ultimately a lot of us just want a safe place to exist, work and play for us and our children, outside the control of corporates and (certainly some) governments.

Maybe Safe needs zones with different levels of moderation and control: a green zone that is fully controlled, like a well-policed city, right out to a ‘badlands’ red zone where anything goes, each zone being subject to a separate set of rules and controls?

2 Likes

I would think this applies to application writers building these features into their apps. The network itself is what needs this regulation, and really only for “horrific files”.

Way too difficult for the core code to identify the tiny differences in language needed to even start moderating this.

5 Likes

You can get that through filtering what you can retrieve, rather than censoring what is stored.

I think the bigger concern would be what others can publish and see. Given that must apply to everyone (not just those with a filter), it’s a much more prickly topic.

5 Likes

Filtering is going to require classification of content. That might work as self-classification, with the option for the viewer to ‘report’ if the self-classification is taking the proverbial … Traditionally that hasn’t worked very well, e.g. when Princess Diana died, various porn sites changed their tagging to come up in searches about her…

Surely censorship is going to have the same problem, but with much higher consequences for getting it wrong.

1 Like

The problem is, with the way national and international legislation is going, it is increasingly likely that you either get into some sort of content moderation or end up part of the darknet and marginalised. Which would be a real shame after all the work that has gone into Safe!

The problem is that there isn’t a way to do content filtering in the core without defeating the network. It can be done in higher layers, while full access to everything remains ensured for those who want it.

This seems doable and much better than what exists now, or in the near future where politicians (even in the UK, for example) want to decide what gets removed. If all they can do is tell businesses which ‘host’ content to apply certain block lists, the content isn’t actually removed, it’s just hidden for users of those applications. So users can switch, or more likely use applications which give them control of the filtering (choosing, creating and sharing block lists, tagging, etc.).
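That client-side model can be as simple as the application skipping content whose tags match a filter the user has chosen. A hypothetical sketch; none of these names are real Safe Network APIs, and the tag scheme is assumed:

```python
# Hypothetical app-layer filtering: content carries self-declared tags,
# and each user picks which tags to hide. The content itself stays on
# the network untouched; the client simply doesn't render it.
def visible(content_tags: set[str], hidden_tags: set[str]) -> bool:
    """True if none of the content's tags are in the user's filter."""
    return not (content_tags & hidden_tags)

my_filter = {"violence", "adult"}          # chosen (or shared) by the user
print(visible({"news", "politics"}, my_filter))   # shown
print(visible({"news", "violence"}, my_filter))   # hidden for this user only
```

A user who disagrees with a list can simply stop applying it; nothing is deleted from the network, which is the key difference from server-side removal.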

Safe Network can avoid marginalisation by providing a better model, one that can’t be constrained by targeting centralised content but which gives users and communities control over what they access.

3 Likes

Thank you for the heavy work, team MaidSafe! I’ve added the translations to the first post :dragon:


Privacy. Security. Freedom

8 Likes

Who decides what can be deleted?
Who has the capacity and resources to scrutinise the information?

If censorship is introduced into the core of the network this is what will happen:

- The foundation will receive a huge list (datamaps or chunks) from the various “authorities” of material that does not comply with the laws.
- The foundation will have no choice but to activate deletion protocols for this information, otherwise it faces possible criminal charges. No information will be verified.
- The nodes have neither the time nor the resources to verify anything and will proceed, out of self-interest and to avoid problems, to accept this deletion.

And so ends a beautiful dream of an uncensorable network.

5 Likes

There’s nothing to stop the network from being forked. For sure it will happen.

4 Likes

If Safe is to be without moderation/censorship then it should support the creation of zones where a set of accepted norms apply. So, if Safe as a whole is left to be whatever anyone wants it to be, then defined and easily found/recognisable zones/groupings can be set up that ring-fence an area (users and sites) and allow them to control what is allowed in from the rest. A bit like fortified villages and towns in a lawless countryside. While I understand the purist aim to be free of censorship, in reality very few people have no limits, no red lines, on this topic. I do not want to see, under any circumstances, accidentally or otherwise, (ads for) sex-trafficked children, religious executions, misogynist violent porn, etc. The majority of potential users will be in the same boat.

1 Like

This is for the applications that others will be writing on top of the Safe network.

Let me reword what you are saying in Internet speak.

If The Internet is to have Sites without moderation/censorship then it should support the creation of zones where a set of accepted norms apply.

If Safe Sites/Applications are to be without moderation/censorship then it should support the creation of zones where a set of accepted norms apply.

The Safe network is replacing the Internet and the storage layers of servers. It is not itself providing the services to which moderation is applicable.

The regulation being discussed for the Swiss foundation relates to storing, on the storage layer, files that are identified internationally as horrific files. They are identified by a list of hashes of the known bad files, and this has no relation to moderation etc.
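In other words, the check being described is a plain hash lookup, not any interpretation of content. A rough sketch of the idea (the list format and names are assumptions, not the actual mechanism; real-world schemes of this kind often use perceptual rather than cryptographic hashes):

```python
# Rough sketch of hash-list screening: a stored chunk is compared
# against hashes of known bad files supplied by authorities.
# No analysis of the content itself is involved.
import hashlib

KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example bad file").hexdigest(),  # placeholder entry
}

def is_listed(chunk: bytes) -> bool:
    """True if the chunk's hash appears on the supplied list."""
    return hashlib.sha256(chunk).hexdigest() in KNOWN_BAD_HASHES

print(is_listed(b"example bad file"))   # matches the list
print(is_listed(b"any other file"))     # does not match
```

Because the match is exact, this can only catch files already on the list; it is incapable of the language-level judgement that moderation requires, which is the point being made above.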

Moderation is applicable to the application layer or site layer of the Internet and Safe network.

Thus this is up to the individual application writers to implement, and not the Safe network developers.

8 Likes

The app layer is where those zones will exist, for certain. Hopefully nowhere else. Censors are the bane of free speech, without which we are slaves. I hope you don’t want to promote slavery here.

1 Like