Dealing with horrific content or something

Yep, all the lists in the world use the hash of the whole file.

Chunk-level lists would require someone to have the original file, self-encrypt it, and then distribute the resulting list.

Seeing as the lists are based on whole-file hashes, the obvious place to determine whether a file is on a list is wherever the file exists prior to self-encryption. That also allows the network core to remain faithful to its goals.
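A minimal sketch of what that client-side check could look like (the function names and blocklist format here are hypothetical illustrations, not Safe Network APIs): hash the whole file before self-encryption, and refuse the upload if that hash appears on a list.

```python
import hashlib

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large files need no extra memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def upload_allowed(path: str, blocklist: set[str]) -> bool:
    """Client-side check: the whole file (and so its whole-file hash)
    only exists here, before self-encryption, so this is the only layer
    where a list of whole-file hashes can be consulted at all."""
    return sha256_file(path) not in blocklist
```

The same check applies symmetrically at download, once the client has reassembled the plaintext; nothing in the node layer ever sees the whole-file hash.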

Do the work where it is logical to do it, and do not mix your layers: a golden rule. That is why Internet Explorer, for most if not all of its life, was a malware writer's dream: the operating-system layers were mixed with IE, giving IE access to low-level OS internals. The same applies to networking layers.


It’s more than that, really. Governments are looking at outright bans of some tech too, so any contributor, and any org like MaidSafe, can get into trouble. What we are investigating is what we can do to ensure the network survives government-level attacks on freedom. It’s not simple in any way, as we need to deal with humans.

I am glad this discussion is happening, as we all need to learn more about:

  1. What’s technically possible
  2. What’s politically likely

For 1, I do have faith that the network will never, on its own, ban any content, but node operators may wish to ensure they do not fall foul of crazy laws. This discussion, though, is valid and valuable.


It is still a client-level issue. The node level is the wrong layer to be doing it at.

So simply state that the client software can implement any lists that are required. That also solves the problem of creating the list in the first place.

The list the International Police give you contains hashes of the original files, not hashes of self-encrypted chunks. Someone would have to obtain the original files, self-encrypt them, and then create a new list to distribute. Who would you expect to do that? But even then it would be silly, since the proper place is the client where the file is being uploaded or downloaded. At that point the whole file is available, so its hash can be computed and checked against the list.
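To illustrate why a list of whole-file hashes is useless against stored chunks, here is a toy stand-in for self-encryption (the real Safe Network self-encryption scheme is different; this merely splits the file and XOR-obfuscates each chunk to show that the hashes diverge). None of the chunk hashes the network sees matches the whole-file hash that appears on the list.

```python
import hashlib

def toy_self_encrypt(data: bytes, chunk_size: int = 4) -> list[bytes]:
    """Toy stand-in for self-encryption: split into chunks and obfuscate
    each chunk with a key derived from its own plaintext. (Illustrative
    only; it just shows that what the network stores hashes to different
    values than the original file.)"""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    out = []
    for c in chunks:
        key = hashlib.sha256(c).digest()
        out.append(bytes(b ^ k for b, k in zip(c, key)))
    return out

data = b"example file contents"
file_hash = hashlib.sha256(data).hexdigest()  # what a whole-file list contains
chunk_hashes = [hashlib.sha256(c).hexdigest() for c in toy_self_encrypt(data)]
# Nodes only ever see chunk hashes; the listed whole-file hash never appears.
assert file_hash not in chunk_hashes
```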


Nobody is suggesting that.

Every tech has people behind it. For Safe, I am keen to remove those humans.

That will not happen; personally, I can guarantee that I won’t.

There is an unfounded assumption here that we at MaidSafe have decided to censor data. That is not, and will not be, the case, ever.

What we are doing here is facing up to and understanding the challenges, not for MaidSafe alone, but for this very network and its goals of privacy, security, and freedom for all. So don’t read this as MaidSafe having decided to censor, or as MaidSafe supporting censorship.

Bravery is not shoving your head in the sand; it means looking at the hard stuff. I hope everyone can see that MaidSafe and I have a strong vision that has never faltered.

When we launch this, so can anyone; and if there were a censor list, I believe others would launch without it, and many MaidSafe staff would too. The assumption that we are gung-ho censor freaks is very much a misunderstanding.


I think this makes sense. What we need to consider, though, is whether it is enough. It’s kinda like having bad data on your PC that you never look at. I am not sure. I try to reason via logic, but governments don’t do that, and hackers and scammers certainly don’t.


@dirvine Well, you promote it as: the file cannot be uploaded, since it will be caught; and if a file is only discovered recently, no one can download it even if it was already uploaded. It exists as encrypted chunks, so no one can ever grab it.

Yeah, I know, but it’s in the presentation, isn’t it?


Oh, and you can ignore the "a hacker will do it" argument, since that applies at whatever level it’s done, whatever amount of effort goes into enforcing it. Hackers bypass everything.


That’s something that needs to be solved then.


It’s all a mess. There is a strong belief that some org can decide what is good or bad, and we all know that’s not true. So there are many (most) projects involved with data having to be aware of this. That includes the Matrix/Signal/Filecoin folks and more. It seems that ignoring the problem is less and less defensible.

If a DAO could work (I fear it cannot), then it would not matter. Even if it did, governments would just ban any use of Safe code and probably treat folk as enablers of child porn or terrorist tech.

So the very tech could be targeted unless we at least show that we have considered these concerns as fully as we can.

My biggest worry, though, is involving humans, as they do seem to lead to corruption.


Well, if I were MaidSafe, I would then focus only on coin transfers and private data. Leave the public part to web3 projects (they can store their public data as private data on Safe). Or maybe some anonymous community can create a public-data layer on top of MaidSafe’s Safe Network.

The positive point is that the project scope becomes smaller for MaidSafe. So maybe we have a working product this year or next…

If we need to think of all kinds of solutions for this public-data problem, we will again need a lot of extra time and resources… I guess we will never launch then.


Implementing any filtering at the client level would be an acceptable response, in my opinion. The authorities need to be told that the client software is a major portion of the whole network and that the network cannot operate without it.

It’s not MaidSafe’s fault if people hack their client software, since they can also hack their node software. So hacking is a null argument.


My guess is that outright bans are most likely.

The node operators will then have to take the same risks the developers have taken.


If the network is forked and the alternative network(s) become the home of detestable media that actually hurts people (particularly women and children) in the real world, that just increases the value, usability, and SAFEty of the original network. Personally, I would not use or economically support any such forks, and I think most people would not do so either.


OK, I promise I’ll shut up after this, but PLEASE don’t let yourself think of this primarily in terms of CSAM. That’s the example that people who want to ban things always lead with, but it is a bad, atypical example, and you’ll get in trouble if you treat it as the paradigm.

It’s a bad example precisely because there’s a very strong consensus it shouldn’t be there, a relatively strong consensus about what it is, and various semi-independent banning agencies in that space are relatively credible. I’m not saying that I believe NCMEC or IWF or whoever should actually get the kind of power they have. Even if they were absolutely perfect and incorruptible, that would be too much trust. But they do have a fair amount of credibility, especially compared to some of the more questionable government agencies of the world. Insofar as it can be determined, they don’t seem to have deliberately abused their power very much. Most people are willing to give them a fair amount of trust.

Nonetheless, even in that space, there are strong political forces that want to expand definitions. I believe that the desire to expand is a major, if unacknowledged, reason for people pushing to change the very name from the relatively unambiguous “child porn” to the potentially boundless “CSAM”. If I remember right, the UK has a relatively broad definition. There are political forces, primarily in the US, who are trying to brand any child-directed media that acknowledge the existence of LGBTQSMNOP people as “sexual grooming”, and you’d better bet that, if they got any mainstream traction, they would try to get that material included in various definitions of CSAM.

So even that tidy consensus is subject to evaporating at any time.

As soon as you go beyond "CSAM", even as far as "terrorist content", let alone copyright, military secrets, drug information, hate speech, Moral Corruption™, Winnie the Pooh, Gollum, or whatever, the consensus doesn’t exist to begin with. The international network of institutions isn’t there. The government interests change, including various governments coming into direct conflict. But the pressure is, or will be, or could be, even greater in some places. If you don’t have an answer for those harder issues, you probably don’t have an answer that can hold together for the long term.

You’re trying to build a network with a long life. You can’t assume that an approach designed for one relatively easy case (CP/CSAM), on easy mode (at a time when there’s very broad consensus on what it is and what should be done with it), with an easy response (the answer is always an absolute ban), will help you much in the long term.

I also still don’t believe it will work even for that case, mind you…


You are currently using internet protocols that have also been used by pedophiles, murderers, etc. The SAFE network is the same thing, but with more privacy and decentralised, censorship-proof servers.


If the forked network allows true freedom of speech, censorship-free videos, social media, etc., eventually everyone will move over there due to the restrictions of the first network.


Because there isn’t a better alternative. If I could choose to use an internet as described by @JimCollinson, I would.

Not everyone. You’re ascribing your crypto-techno anarchistic views to the world at large, which has largely demonstrated that it mostly does not agree with you. The average person will not agree to those views for the same reason that anarchy is not self-sustaining.


I thought we were building Safe Access for EVERYONE? If we are building Safe Access for a select few, we are creating something just as bad as the original web.


The network should be agnostic / non-political / totally neutral.


For the children who may not be kidnapped, raped, and killed, this would be Safe Access for EVERYONE. There will always be competing interests, like the right to own guns vs the right not to be slaughtered in a mass murder. Maturity is required to critically assess and solve for competing interests.

There is no such thing as non-political. Every choice makes a political statement.