Dealing with horrific content or something

I think the initial reaction here is good. It was my reaction also, but Jim and Heather are doing an absolute ton of research here. ANY banned content would require SIGNIFICANT input from many authorities, plus acceptance by node operators.

We are not talking here of the USA saying block Snowden’s papers or kill WikiLeaks. This could not be further from the suggestions being made. The data in question is data at the extremes of depravity, identified and agreed as abhorrent by a really significant set of authorities and open source projects, as well as node operators.

So don’t think of this as censorship in some lazy “person X says kill content” sense. The proposals are nowhere near that.

I am still on the fence on all of this, but I am listening and trying to wrap my head around it all. I have never seen child sex images or beheading videos and never want to. I never want anyone to, but I am a strong believer in education and evolution being the real way we get beyond that. However, life is changing and regulators are swarming: even in the UK they are looking for nominated names to jail for failing to at least try to manage that kind of content.

So don’t think government-controlled censorship here. It is not that.

6 Likes

Who are these authorities and how do they reach agreement?

My initial reaction to this is shock, hopefully better understanding is all that is required.

Right now I am picturing some kind of governing body that decides good from bad.

4 Likes

I’m not worried about being “banned” by or from some network. I’m worried about being captured, tortured, and killed, because somebody at some point in time thinks what I write or store is “horrendous”. I’m sure Edward Snowden and especially Julian Assange aren’t worried about being “banned” either.

I’m not interested in “plausible deniability” in some courtroom. I want mathematical impossibility of identifying me and my communication/information.

There are many things I wish I could unsee. Images from US torture facilities are just one example. I would want community managed filters that I could and surely would voluntarily apply for myself, but that has nothing to do with the core SAFE principle.

I also have nothing against regular police work where they capture offenders committing violent crimes in real life. Looking at or posting sickening pictures is an entirely different issue.

Please take it off the table.

10 Likes

From a purely practical standpoint, I think the idea of providing a mechanism to bypass the immutability of public data in extremely rare, provable cases is a good one. The world seems to be driven by perception vs. facts these days. If the foundation takes a hardline approach, it will invite controversy and attack based on extreme examples. If there is a way to give just a very little in an open and non-negotiable way, I think it gives cover to not come after the project. It will feel safe and wholesome for all and hopefully keep the world’s homeland defense departments at bay. The key will be a mechanism that is not easy or arbitrary, and also cannot be expanded beyond what was initially intended. It sounds like that’s the plan (or one potential plan) and I think it is prudent.

The same goes for money laundering. As was mentioned above, there could be mechanisms facilitated at the edges of the network for KYC where it interfaces with the fiat world. Again, if the foundation is open and positive about helping those limited efforts, it can hopefully avoid any push-back on full economic freedom inside the network. These are good issues to get in front of and lead the solutions on, I think.

EDIT: I do also agree w/ Sascha that there should NOT be a mechanism to break anonymity to allow governments to go after people. Either data storage is anonymous or it isn’t. Taking something down is different than revealing the specific nodes hosting said data, etc. In that case there is tremendous legal liability and the project dies. This may be a tough nut to crack.

5 Likes
  1. Who is “node operations”? How many actual people have to participate in that decision? Do they get to see the material they’re banning or not banning?

  2. Presumably at least part of the point of this whole thing is to placate “authorities”, primarily governments, which have actual power over you, or actual power over node operators… because they would otherwise decide to wield that power in a destructive way. If it’s that hard to ban things, what makes you think that they would actually be satisfied?

    Governments (and especially courts) tend to want to say “take this down”, not “take this down unless you can’t get consensus from a bunch of other people”. I mean, that’s what their whole perception of sovereignty is really about. It would be a mistake to think in terms of the “due care” framework of the EU’s anti-CSAM proposal, or even the UK online safety bill. Other governments, or later legislation, may be even less flexible.

    The “consensus” approach could easily create a worst-of-both-worlds system, where you’ve seriously compromised the network’s goals, but still not gotten those governments to actually leave the network alone.

On edit: A question to think about: which governments actually do have power over you or over node operators? Lots of them will claim jurisdiction, but which ones will be able to enforce it? Does that depend on the subject matter? Is it something you can actively change?

5 Likes

I haven’t read through the whole thread, so maybe it was mentioned already.

Privacy is the basis of the SN, the reason why it is being created. Just don’t make any concession on privacy.

If Switzerland requires the SN to control its users, then Switzerland is just the wrong choice.

For sure there are more and better jurisdictions than Switzerland that could harbor the foundation.

5 Likes

I’m not worried about being “banned” by or from some network. I’m worried about being captured, tortured, and killed, because somebody at some point in time thinks what I write or store is “horrendous”. I’m sure Edward Snowden and especially Julian Assange aren’t worried about being “banned” either.

I’m not interested in “plausible deniability” in some courtroom. I want mathematical impossibility of identifying me and my communication/information.

That’s an issue anyway though. You upload a bad chunk, it happens to go to a snitch node, that node tells its sponsoring government you uploaded the chunk and gives them your IP…

1 Like

I don’t think they need to in order to try to disallow content such as CSAM. Just ban the material without compromising privacy.

But they must at least try to comply or else face ending up in jail.

2 Likes

But that would be a whole new ‘animal’ that has to be created: some sort of SN artificial intelligence that scans content, transactions, etc.

I think it is too complex and will push launch even further into the future.

When a company develops a new technology, it is not responsible for how its technology will be used.

Switzerland is a non-match with all these regulatory requirements. What about El Salvador?

One thing to learn from this is that MaidSafe the company needs to become separate from the code and the network. Like, it can launch with whatever restrictions, but all the code and development has to be transferred to pseudonymous developers on the network, so no-one can be held liable. Otherwise, at some point in the future, someone in government will hurt the network or go after the developers.

10 Likes

I don’t know much and don’t wish to know much about CSAM, but I believe a lot of the older stuff is already indexed.

So it’s just opt-in filtering, as was mentioned up the thread.

Crawlers can search public shared data, looking for it and flagging it.

Does not need to compromise privacy.

1 Like

Sigh. After 8 years of following SAFE, this update has made me actually sick to my stomach. This sudden desire to fellate the state sickens me. I’m off to get drunk and look at other projects that have made better recent non-technical Yoko Ono hires. End of a dream.

8 Likes

Questions 1 and 2 were answered well. The answer to question 3 missed the mark entirely. :-1:

Yes, that’s fine. But that can be done through public outreach and education about the network via the foundation. Any talk of embedding technology or global consensus for censorship at the network level is a farce and fork-worthy. The network is supposed to be a separate autonomous entity. The real problem here is that it appears you are willing to accept a premise that has you assume the liability of all the world’s problems, when you should not be doing that. No other corporation does that. Any concerns raised by authorities about any kind of evil or abhorrent content can be directed at client endpoints residing in their own jurisdiction.

Indeed. Any network that allows nodes to identify, flag, or decipher the contents of specific public or private chunks has failed from the start.

7 Likes

Node operators in this context means anyone running a node—storing data for the Network on their machine.

In this proposal, no, they cannot see or decipher the content. I’ll try and paint more of a picture of how such a proposal would work.

An organisation, such as the National Crime Agency in the UK, updates a list of image hashes of content they’ve identified as CSAM. This list is stored on the Network in a way that it can be referenced by nodes, but not used as a reverse lookup (for obvious reasons).

A node is asked to store a chunk that appears on this list. But the node operator, and the wider community, may not trust this government agency; they may worry that it’s not just CSAM they are putting on this list, but other content that the government wants censored or suppressed. They’d be rightly wary.

So it is corroborated: not just by another national agency, but also flagged by the community, and perhaps cross-referenced against another decentralised project’s shared list.

If the node is given a chunk that is flagged and corroborated by, say, 3 of 5 of these lists, then it is permitted to drop the chunk without penalty on node age.

It’s also up to the node operator which lists, and the teams/agencies behind them, they trust. If they have doubts about the NCA, they can choose not to act on it and rely only on OSS or community-derived lists, for example, or choose to just go on serving anyway. It’s up to them. We can’t force them not to.

And of course many nodes must make these decisions on a single chunk before it is taken down, and these chunks are, through the nature of the Network, geographically dispersed, so not under any one jurisdiction.

It is in this way that agencies must foster trust, through transparency in their policies and operations, should they hope to tackle things like this in the decentralised realm. Because it’s transnational, and cooperative by its very nature.
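To make the corroboration rule above concrete, here is a minimal Python sketch of the decision a node operator might make. Everything here is an illustrative assumption, not anything from the actual proposal: the function name, the use of SHA-256 over the chunk, and the 3-of-5 threshold are all hypothetical.

```python
import hashlib

def may_drop_chunk(chunk: bytes, trusted_lists: list[set[str]],
                   threshold: int = 3) -> bool:
    """Return True only if the chunk's hash appears on at least
    `threshold` of the hash lists this operator has chosen to trust."""
    digest = hashlib.sha256(chunk).hexdigest()
    matches = sum(1 for hash_list in trusted_lists if digest in hash_list)
    return matches >= threshold

# Hypothetical lists: two national agencies, a community list, an OSS
# project's list, and another decentralised network's shared list.
flagged = hashlib.sha256(b"flagged chunk").hexdigest()
lists = [{flagged}, {flagged}, {flagged}, set(), set()]

print(may_drop_chunk(b"flagged chunk", lists))   # 3 of 5 lists agree -> True
print(may_drop_chunk(b"ordinary chunk", lists))  # no list flags it -> False
```

Note that the operator picks `trusted_lists` themselves, which is the whole point: an agency that abuses its list simply stops being counted toward the threshold.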

4 Likes

Any censorship has to be only at the individual level, as an add-on. If you live in the UK, add a UK blacklist if you want to comply with local law. If you live in China, add a China blacklist if you want… And none of this has to be made by MaidSafe; anyone else can do it.
With low priority, before the start of the network.

2 Likes

How can nodes be made to abide by these rules?

If the network can be truly autonomous then the laws of a nation or group of nations or of the future global government (I mean it’s going that way right?) … those laws should be upon the individual who uploads the data … not on an autonomous network. In the same manner that bitcoin can’t be held responsible for the nature of itself.

edit: also it seems that any consensus mechanism to evaluate all data is really going to add a lot to the cost of uploading data - is it not? This may really nerf the network.

There is also a user workaround here - simply encrypt your public data and share the keys publicly. A browser plugin could be made that would pull the keys from some repo and then auto-decrypt data on the fly as it’s downloaded. In this manner, the nodes won’t be able to evaluate any public data.

So as far as I can see, this system would be broken from the start, but would add a lot of overhead to the network.

I suppose nodes could also track those public keys, but it’s a war of attrition here and there are more users than nodes… it seems like it’s just going to nerf the network overall to me.
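The workaround described above is easy to sketch. This toy Python example (the XOR “cipher” is purely illustrative and NOT secure — a real plugin would use something like AES-GCM) shows why any hash-based flagging fails once public data is encrypted with a published key:

```python
import hashlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric "cipher" for illustration only -- NOT secure.
    # Applying it twice with the same key recovers the original data.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

data = b"some public content"
key = bytes(range(1, 17))     # the key would be published alongside the link

blob = xor_cipher(data, key)  # this is what actually gets uploaded as chunks

# The uploaded data no longer hashes to anything a blocklist could match.
print(hashlib.sha256(blob).hexdigest() != hashlib.sha256(data).hexdigest())

# Anyone holding the published key recovers the content on the fly.
print(xor_cipher(blob, key) == data)
```

Both checks print `True`: the nodes see only `blob`, whose hash matches no list, while any client with the public key reads the original content.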

1 Like

They cannot. Hence the reason that agencies, monitoring orgs, and communities need to operate in a transparent, cooperative, and consensual manner to maintain trust of the node operators and the Network community at large.

Again, they can do this through garnering international support for societal norms that uphold human rights and don’t attempt to subvert the Network. Otherwise the node operators won’t stay onboard.

And at the same time, node operators should be cognisant of the potential reputational attacks on the Network and the ecosystem (even a form of Sybil) that failing to address the most harmful content could invite… which could also collapse it.

2 Likes

The guys with the biggest guns and the least conscience take power over anybody. “Stop what you’re doing, or people will start dying. It doesn’t matter who.”

See e.g. Los Zetas vs. Anonymous.

The whole point of SAFE was that taking things down would be impossible, no matter how convincing some government/oligarch/cartel or human rights organization is.

9 Likes

When you think you’ve got all angles covered, it might turn out to be none of the above but something else entirely. Namely: a lack of popcorn while watching the video being live-streamed.

2 Likes