The Difference Between Can't and Won't

Ethereum has already done it. It’s called smart contracts. It’s a bit inconvenient how they are upgraded, but it’s certainly doable. I am very interested in finding out if maid can do this somehow too.

Smart contracts, or autonomous custom code, are planned for SAFE but not even scoped yet, so they won’t be worked on until after beta, AFAIK. So the same functionality (or better, I would expect) should be available. I don’t think it is necessarily needed for this use case, but it may be.

3 Likes

I’d just like to add to this that the US asserts jurisdiction over the entire world (with a few exceptions). If any US user is able to access the site, you’re treated as being under US law. At least that’s how it’s been practiced so far. Examples: the Freedom Hosting case, Megaupload, Liberty Reserve.

Going back to the main question:

I would assume such questions are raised only if you are dealing with illegal stuff, and you set the tone by giving the hypothetical case of violating the DMCA as your primary example.

If the matter is illegal, then none of the three scenarios will exonerate you. You are still responsible in all three cases; the only thing you are doing is looking for technical subterfuges to create plausible deniability.

For some reason you seem to have the misconception that the first two options you describe (losing a key, having a multisig) somehow make it more “legal”. No, they don’t.

So then the crux here is the technical ways of proving attribution.

2 Likes

@piluso Yes, the way I see it, if you can prove that you no longer control the website, you’ll no longer be legally responsible. This is similar to how tokens on Ethereum work: the code is uploaded to the blockchain and can no longer be controlled by the creator. This way, the creator of the smart contract avoids a life sentence in jail for operating a money laundering scheme. It’s a way to run a money laundering operation completely legally (all cryptos are money laundering systems in the eyes of the law because they ignore KYC/AML laws, but there is no owner, so there is nobody to punish).

We have to realize that the definition of illegal will change with time, so things that are not seen as problematic today will be in the future. This could go as far as making any speech criticizing the government illegal. It’s important that maid websites can withstand this, and to avoid a fragmented SAFE net crippled by censorship, it would help to have the ability to legally operate websites where laws are ignored, similar to how cryptocurrencies can do so.

3 Likes

I am sorry, but this is wrong on so many levels.
And, again, if the SafeNetwork is going to fulfill its design goals, you won’t need anything fancy to be anonymous.

1 Like

Sorry, but this is just wrong. And I’m not gonna say why.

1 Like

This will be a mostly untested arena. The reason I say this is that, as you know, APPs (sites & non-site APPs) run differently, and therein lies the issue. The crux lies in who runs the APP:

  • The APP is not run by the person who wrote the APP - very important.
    • Think of simple programs that allow, say, viewing of content, but with no content supplied by the program writer. Can you name one such simple program, ever written, whose writer/owner has been charged with a crime? I am not talking about criminal intent/activity actually written into the program, but simple viewing programs that allow people to keep collections of their own content.
  • The APP is run by the person viewing content - no criminal activity here, except in countries that have a form of thought crime where one can be charged for viewing certain material online.
  • The APP is run by the user uploading their content to their own files on the network and writing a record, all of which they own (not owned by anyone else).
  • The program, when run by another person, is not hosted anywhere on anyone else’s server or personal computer, yet it can access the links written (and owned) by other people.
  • Unless the person who wrote the program writes their own data, no data on the “site” is owned or controlled by the person who wrote it. The program’s author has no control over the data unless they write that function into the program, like a secondary indexing system.

In my opinion this is a new concept, where the writer of a program that runs on an interconnected network does not actually run the program; it is not a site hosted by them (remember a SafeNetwork site is not like a clearnet site, but is an application, which is a program). The site is entirely run/executed by the person using the site.
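
A minimal sketch of that model (hypothetical types only, not the real SAFE API): the author ships nothing but code, and every record the app writes is owned by the key of the user who runs it.

```ts
// Hypothetical sketch - not the real SAFE API. It illustrates the claim above:
// the author ships only code, and every record the app writes is owned by the
// key of the user who runs it.

type PublicKey = string;

interface OwnedRecord {
  owner: PublicKey;  // the key of whoever ran the app and wrote the record
  payload: string;   // e.g. a link to content that same user uploaded
}

// Stand-in for the network: a shared store addressed by name.
const network = new Map<string, OwnedRecord>();

// The "site" is just this function, executed on the user's own machine.
function publishLink(runnerKey: PublicKey, address: string, link: string): void {
  // The program's author never appears here: ownership is the runner's key,
  // so only the runner controls (and answers for) the record.
  network.set(address, { owner: runnerKey, payload: link });
}

// Two different people running the same app each own their own records.
publishLink("alice-pk", "safe://index/alice-video", "safe://videos/abc123");
publishLink("bob-pk", "safe://index/bob-video", "safe://videos/def456");

console.log(network.get("safe://index/alice-video")?.owner); // "alice-pk"
```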

Because of this I think new legal precedent needs to be set concerning the writer of a site/program being used by others for criminal activity. In my limited opinion I would think they are safe unless a case can be made that it was intentionally made for criminal activity. The closest parallel is the BitTorrent program (not tracker sites): its maintainers are not charged with anything, and yet BitTorrent basically does what you are talking about. It’s the sites hosting tracker/seed information that get in trouble, not the program writer.

And of course the person who uploads/links content breaching the laws is entirely responsible for their actions and really cannot be certain that they cannot be found out.

6 Likes

Neo has made a list of some of the possible ways.

As the creator of that site you have to move the main responsibility to the other users: those who upload illegal content or who only keep a link to that content. Similar to a torrent site hosting no illegal content itself.

And if the authors are to get some fee back, you just need to monetize the site or let happy users pay small tips directly to the authors.

1 Like

SAFE Network can and probably will change the world, but the world will be able to adapt to it. I also think the concept of “website” will be irrelevant, and that Dapps won’t be critically needed for locating information either:

  1. A globally distributed SAFE Network will operate autonomously, disregarding any local or even global laws (just like the Internet basically allows anyone to connect to anyone else).

  2. Uploads will be anonymous and storage permanent and final. Unless the uploader leaves clues in the data or upload patterns, there is no way to prove non-compliance.

  3. Access to the addresses of uploaded information could be made permanent as well by incorporating addresses into “The Great Index of SAFE Network” that will undoubtedly come into existence and make all public information permanently searchable.
    This index does not need to be a Dapp owned by anyone. It can be a permanent graph of references which anyone can traverse, starting from a number of easy-to-remember starting addresses (a rough sketch follows below).
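
Here is a rough sketch of that reference-graph idea (every type, name and address below is hypothetical, not an actual SAFE API): immutable index nodes point at data and at further index nodes, and anyone can walk the graph from a few well-known starting addresses, with no owner needed because nothing can change after it is written.

```ts
// Hypothetical sketch of a permanent, ownerless index graph.

interface IndexNode {
  title: string;
  dataLinks: string[];   // addresses of immutable public data
  childLinks: string[];  // addresses of further index nodes
}

// Stand-in for permanently stored, immutable nodes on the network.
const stored = new Map<string, IndexNode>([
  ["safe://index/root", { title: "root", dataLinks: [], childLinks: ["safe://index/music", "safe://index/papers"] }],
  ["safe://index/music", { title: "music", dataLinks: ["safe://data/song1"], childLinks: [] }],
  ["safe://index/papers", { title: "papers", dataLinks: ["safe://data/paper1"], childLinks: [] }],
]);

// Anyone can traverse from an easy-to-remember address; the graph has no owner,
// and the walk only ever reads data.
function collectData(start: string, seen = new Set<string>()): string[] {
  if (seen.has(start)) return [];
  seen.add(start);
  const node = stored.get(start);
  if (!node) return [];
  return [...node.dataLinks, ...node.childLinks.flatMap((c) => collectData(c, seen))];
}

console.log(collectData("safe://index/root")); // [ "safe://data/song1", "safe://data/paper1" ]
```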

  • The right to be forgotten won’t exist - maybe that’s fine as society should replace it with the right to be forgiven, and modernize outdated approaches such as social security numbers for life.

  • There won’t be a way to reverse disclosure of information - laws may make it illegal to search for and access specific information instead.

  • Privacy will worsen - privacy will need to be protected by preventing data collection in the first place and creating secure private environments.

Governments might try to shut down all vaults by making it illegal to run them, which would be difficult to enforce on a global scale, or try to make it illegal to access the network using unauthorized browsers. But when people feel that accessing certain information is worth it, even at great personal risk, SAFE Network will be there to allow them to access it. This is a check on governments that has proven valuable and rightful throughout history.

(I will note here that there will be many other more relevant benefits of course that have nothing to do with law, etc., but that are out of the scope of this topic)

So, yes, there will be a TBD way to put information out there anonymously, and keep it anonymously accessible forever, without needing Dapps.

7 Likes

Good replies. The non-site app idea is certainly fascinating, but I have some questions about how that would work. To avoid confusion I will refer to a non-site app as an app and a site app as a site/website.

I have only programmed for the website part of SAFE, so I’m not as familiar with apps. But from what I gather, an app is personal in the sense that you upload content that you control (basically to a site that you own); you can then share that link (site) with other users, who can do the same with you. There is no centralized meeting place, no organized website curating content, comments, likes, search results, etc.

I can certainly see the appeal of a p2p solution, but I’m unsure how you could build, let’s say a youtube clone this way. How would this work?

@drirmbda you mention in point 3 that there could be a large index. That index has to be accessed somehow and someone has to control the site where this index is located right? If not, how would that work exactly?

@neo If I understood you correctly (I find it a bit confusing that you mixed apps and sites into just “app”, but I’ll use the definitions from the beginning of my post), the part of your argument about a site not being run by the person who created it doesn’t make sense to me, because the person who has ownership is the one who “runs it”, or rather, the one responsible. Similarly, the BitTorrent analogy fails because the creator of BitTorrent does not have any control over it.

The user will upload content that they own; if it’s submitted to a site, the owner of that site is responsible for moderating it. While the content itself cannot be purged, links to it can be edited out. I can’t see how, just because it’s hosted by the network, the owner doesn’t have any responsibility.

If we’re merely talking about an app here, I agree.

1 Like

Basically, the core access to the SafeNetwork is through APIs, and we have helper programs to do that, which at this stage include the “client”, the “browser” and the “CLI” (command line interface).

Now, programs can be written that call the APIs directly or through the “client”, and these are what people usually refer to as APPs.

But on SAFE, any JavaScript running in the browser can also be referred to as an APP, as many already have.

Technically, even a simple HTML document in the browser could loosely be called an APP, because the browser is the APP that displays what the page references.

Now, a simple HTML document is not normally referred to as an APP, and we will not call it an APP on the SAFE network either, for hopefully obvious reasons, so it does not come into what you are discussing above.

But what does apply is APPs running in the browser, running via the CLI, or running as native programs making API calls.

So, as with a BitTorrent program, it is the person running the program (APP) who is technically responsible for what they do with it.

Now, how someone creates a YouTube replacement on the SafeNetwork will determine the legal liability they hold for the content that is uploaded with the program (the ownership difference is sketched after the list below).

  1. If the program ONLY uploads the video and updates an index somewhere, then the following applies:
  • All data is owned by the person running the program.
  • The author of the program has no liability since they did nothing. AKA the BitTorrent situation.
  • Basically this is all any good SAFE program will do anyhow.
  2. The author of the program has pre-purchased ADs in order to create indexes they can manipulate later on. E.g. the person uploading a video also writes (via the program) records into those ADs owned by the author.
  • Then there is liability for the use of those ADs owned by the author.
  • The extent of liability will depend on what those ADs are used for.
    • If for indexing the videos and other things, then the law will be broken if the indexing is not updated to exclude content violating the law.
  3. The author of the program runs a secondary program that takes the data written by the users of the YouTube program and creates whatever (e.g. indexes).
  • The author is then liable for all the data the secondary program writes.
  • AKA BitTorrent (that’s OK) plus the author also running a BitTorrent tracker site - and the latter has been deemed liable under the law, with liability falling on the person(s) running the tracker site.
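
To make the ownership difference between scenarios 1 and 2 concrete, here is a hedged sketch (hypothetical types, not the real SAFE data types): what matters in the argument above is whose key controls the container the entries live in.

```ts
// Hypothetical sketch of who controls an index container.

type Key = string;

interface AppendOnlyIndex {
  owner: Key;                               // only this key may prune entries
  entries: { author: Key; link: string }[];
}

function append(index: AppendOnlyIndex, author: Key, link: string): void {
  index.entries.push({ author, link });
}

function prune(index: AppendOnlyIndex, caller: Key, link: string): void {
  if (caller !== index.owner) throw new Error("only the owner can edit this index");
  index.entries = index.entries.filter((e) => e.link !== link);
}

// Scenario 1: the uploader owns the index; the program author never appears.
const userIndex: AppendOnlyIndex = { owner: "uploader-pk", entries: [] };
append(userIndex, "uploader-pk", "safe://videos/abc");

// Scenario 2: the program author pre-created the index, so the author's key is
// the only one that can remove an entry that another user appended.
const authorIndex: AppendOnlyIndex = { owner: "author-pk", entries: [] };
append(authorIndex, "uploader-pk", "safe://videos/abc");
prune(authorIndex, "author-pk", "safe://videos/abc"); // the author can moderate
```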

The issue is who owns the data, and on SAFE the normal thing is that the one running the program (APP) owns the data.

Sorry, but that is the way things are on the SAFE network. Sites are ONLY run by the user, since they are run on the user’s own computer.

If no one is “accessing the site” (using terminology of the current internet) on the SAFE network then the site only exists as data spread across the network.

3 Likes

What does “index somewhere” mean? Doesn’t this index have to be owned by someone? The owner of the index would have to remove the copyrighted video from the index don’t they?

And the liability is for the author of the program right?

Well yes, technically users “run” the website: they download the HTML, CSS, images and JavaScript and then run it locally in their own browser. How is that different from clearnet? It isn’t. These files are also transferred on clearnet and run in users’ own browsers.

The only difference is that the owner has outsourced hosting to a decentralized platform rather than a centralized host. He still remains the owner, and is legally responsible for what occurs on his platform, because he is in control.

The distinction you could make is that there is no server-side code on maid, while there could be, and usually is, on clearnet. I don’t believe that is a significant point, because the data transfer itself is illegal; the server processing is not relevant. If a clearnet host sent you an HTML file and an accompanying image file with copyrighted material, it would be in trouble, because it has the ability to modify and/or stop serving those files. Similarly on maid, the owner, the person who uploaded those HTML files, has the ability to stop serving them under his site name, and should do so. He cannot erase the version history, but the current latest version, the one that people see when they visit his site name, is the one that must be moderated. If he doesn’t do it, he is legally responsible. That’s how I see it.
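
A minimal sketch of that idea, using hypothetical structures rather than the real SAFE versioning/NRS API: the version history is append-only, visitors resolve the latest version, and only the owner’s key can publish a new version that drops a link.

```ts
// Hypothetical sketch of an append-only version history with owner-only updates.

type Key = string;

interface Site {
  owner: Key;
  versions: string[][]; // each version is the list of links the site shows
}

function latest(site: Site): string[] {
  return site.versions[site.versions.length - 1] ?? [];
}

function publishVersion(site: Site, caller: Key, links: string[]): void {
  if (caller !== site.owner) throw new Error("only the owner can publish");
  site.versions.push(links); // older versions stay in the history forever
}

const site: Site = { owner: "owner-pk", versions: [["safe://ok", "safe://infringing"]] };

// Moderation here means publishing a new latest version without the offending link.
publishVersion(site, "owner-pk", latest(site).filter((l) => l !== "safe://infringing"));

console.log(latest(site));         // [ "safe://ok" ]
console.log(site.versions.length); // 2 - the old version is still in the history
```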

1 Like

Means some sort of index.

For instance, a user wants people to be able to search their channel, so they create an index AD under an upper AD, and both the entry in the upper AD and their index AD are owned by the user.

Thus the user is liable and responsible for their uploads and index entries.
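
Roughly, the structure being described (hypothetical types, not the real SAFE ADs) looks like this: a shared upper index whose individual entries are each owned by the user who appended them, each pointing at a channel index that the same user also owns.

```ts
// Hypothetical sketch of per-entry ownership in a shared upper index.

type Key = string;

interface UpperEntry {
  owner: Key;      // the user who appended this entry owns it
  channel: string; // address of that user's own channel index
}

interface ChannelIndex {
  owner: Key;
  videoLinks: string[];
}

const upperIndex: UpperEntry[] = [];
const channels = new Map<string, ChannelIndex>();

// A user registers their channel: they own both the channel index and the
// entry they add to the upper index, so they answer for both.
function registerChannel(user: Key, address: string): void {
  channels.set(address, { owner: user, videoLinks: [] });
  upperIndex.push({ owner: user, channel: address });
}

registerChannel("uploader-pk", "safe://channels/uploader");
channels.get("safe://channels/uploader")!.videoLinks.push("safe://videos/abc");

// Everything reachable from the upper index traces back to the uploader's key.
console.log(upperIndex[0].owner); // "uploader-pk"
```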

It depends on how the legal system sees a user writing an entry into an AD owned by another (e.g. the program author). For instance, how does the law deal with someone who supplies a cork notice board when someone else pins a URL to illegal material on it? This is the same thing.

So I do not know, and I suspect it would come down to the legal jurisdiction and/or the ability of a lawyer to explain the difference between owning the notice board and being the person who pins a link to illegal material on it. Who knows, maybe a lawyer does.

Not just technically, but in actuality. All traffic is generated by the user, all code is run by the user, all data is accessed by the user, all data is stored by the user. The user is the ONLY entity, and I mean the ONLY entity, that does anything for that session. There is no server in the background running site code, no server backend storing the data, so there is no one else responsible for the data read/stored by that user running the site.

How can I be more clear? The author had no say, no control, and did nothing when the user stored their data. The author owned or controlled no infrastructure, no server, no database storage.

1 Like

Now that I have more time I want to explain why I said that OP was wrong on so many levels.
From the legal perspective, the author of either a smart contract or an illegal Dapp will still be held responsible for the illegal app.
OP’s interpretation of the law is absolutely wrong: just because something can’t be technically attributed to you doesn’t mean you aren’t a criminal under the law. Under such circumstances, all three scenarios (losing the key, collective administration and anonymous administration) would require the same level of opsec to slither under the radar.

The moment someone testifies that you authored it but deleted your key to cover your tracks, it can make your life doubly difficult, as it can be interpreted as destruction of evidence. Even if you think you are safe because the direct evidence linking you to the illegal dapp has disappeared, witnesses count as direct evidence.

The CFTC has already decided on this, and they will prosecute authors of illegal smart contracts, so saying that you are essentially “running a money laundering operation completely legally” on a dapp is very wrong.

Cryptos aren’t money laundering operations, in the same way that cash isn’t a money laundering operation.
They can be used as an instrument for money laundering, but they are not one in themselves.
You are not thinking this through clearly.

The only way to have a legal operation is if the design of the dapp does not overtly promote illegal purposes.
Going back to your example of DMCA violations:
You create an open app to “share home videos”, instead of creating an app to “pirate videos”. Then you transfer that responsibility to the user; it is up to the user to misuse the purpose of the site.
And as the design of the network makes deletion of public data impossible, it is out of your hands as the author of the dapp.

And yet you still need opsec, because if word spreads that the intention of such an apparently vanilla project was actually to facilitate DMCA violations on purpose, you are again in trouble.
Proving intention is hard, but if you confessed it to a friend or a partner, you now have a witness.

2 Likes

Let’s use your cork board analogy; I like that one. We’re in a town where a man named John has set up a gigantic cork board, with big letters telling people they can pin up pictures for the town to see. There is one caveat: this is a magic cork board. People can put any pictures they like on the board, whenever they like, but nobody except John himself can remove pictures from the cork board.

Days, weeks and months fly by, and the cork board fills up nicely with beautiful art, drawings, stunning landscape imagery, portraits and other fine images. People are enjoying the board and visit it many times a day. The board is slowly becoming a meeting place, a place to get together and have a great time.

After enjoying the cork board and all it has to offer for many months, suddenly, one dark and gloomy morning, there’s a surprise on the cork board. There’s a picture of a naked girl, involved in sexual activity. It is child pornography. People are gathering around the board in anger. “Who is this evil person who put this up on the board?”, “Why can’t we remove it?”. The crowd is chanting for the image to be removed as John, the owner and creator of the cork board comes rambling along, whistling cheerfully and smiling at the crowd as he asks confused “What is the matter with you all, are you not enjoying the cork board?”

The crowd is furious! “John, can’t you see it? Somebody has put child porn on the cork board. You have to remove it!” they chant.

“Well”, says John. “This is not my responsibility”. “The person who pinned cp on that wall did it all by themselves, there is nothing in the background helping to pin that to the wall, he did it, not me. The guy who did that, was the ONLY entity and I mean the ONLY entity that did anything tonight, I had no say or control over this, I did not do anything when he pinned the cp to the board. I don’t own the building the cork is placed on, I don’t own the road that leads to it, I don’t even own the board anymore. All I have is my magic ability to remove pictures from the wall.”

“So remove it!” screams the crowd at John; a few are holding up pitchforks pointed towards him.

“Nah” says John. “Not my responsibility, I don’t really give a shit”.

This story is exactly how this works. What do you think happened to John? He was hanged by the mob. As long as there is someone with private keys who controls a site and retains the ability to moderate it, they are responsible. I also don’t know how to be more clear about this. You can’t just refuse to delete copyrighted material, terrorist videos and child porn from your own website; if you CAN do it, you MUST do it. If you don’t do what you can to remove illegal material, you are a criminal.

2 Likes

On SAFE, John doesn’t have the ability to remove the material. He can remove the link to it from his board, but the material remains, suspended forever, accessible to anyone who made a note of its location on the network (not on John’s board).

4 Likes

The difference would be that this cork board is on a road that contains only the board, and you must decide to walk down that road for that reason and no other. If that makes sense? I mean, you don’t need to look at those kinds of sites/images, etc.

Your points are well made, but the responsibility of not losing data is an absolute for humanity, IMO; the freedom of perpetual data that we provide has a cost. You are challenging the cost, but the challenge has a cost also. Who decides? Who then watches the watchmen, and so on.

5 Likes

If you set up a website with non-criminal intent, lose the key, and the site is taken over by people posting illegal terrorism propaganda, how are you at fault? I have never heard of such a case. Show me one single instance where a person had zero control over a decentralized application and was punished for it.

This makes no sense at all. Tell me, why is Vitalik not in jail for creating a massive money laundering operation (Ethereum)? You do realize that launching a cryptocurrency without KYC/AML, ignoring all other financial regulations and licenses, would send you to jail for at least a couple of decades if it were centralized, right? The difference is between can’t and won’t.

Yes, that’s what all the criminals would do to mask their criminal activity. Pretend that it’s not supposed to do anything illegal, and then do it anyway.

I never said that the app’s purpose would be to pirate videos. I can agree that it would be easier for a government to justify some kind of prosecution if you did that. I am still not aware of any single event where this has happened; maybe you are? That said, you would still be responsible when you make the decision not to remove links to illegal videos, so you are right that you’d need good opsec, because quite frankly, the alternative would be jail.

1 Like

Yes, correct. I feel like that is more or less semantics, though. The point is that John must now spend a lot of time moderating content, because if anything slips through the cracks, he’s in trouble.

I completely agree from a user perspective. My concern is rather that the admin of a site will get in trouble for what his users are doing on the site (e.g. linking to illegal stuff). Not everybody has time to moderate content all day, and some might not want to. For example, maybe I want to launch a site with true free speech, but then there are laws in some countries against “hate speech” that mean I have to censor speech on my website to stay compliant, against my will. It would be neat if there were a way for the site to be self-moderating. I can’t really think of a good way to accomplish this; maybe it’s simply not possible.

Either way, I am completely aligned with your ideas regarding anti-censorship on the network as a whole; it’s comforting to know that you believe this and are not willing to make exceptions.

3 Likes