The Difference Between Can't and Won't

This will be a mostly untested arena. The reason I say this is that, as you know, APPs (sites & non-site APPs) run differently, and therein lies the issue. The crux lies in who runs the APP:

  • The APP is not run by the person who wrote the APP - very important
    • Think of simple programs that allow, say, viewing of content, but with no content supplied by the program writer. Can you name one of these types of simple programs, ever written, that has seen the program writer/owner charged with a crime? I am not talking of criminal intent/activity actually written into the program, but simple viewing programs that allow people to keep collections of their content.
  • The APP is run by the person viewing content - no criminal activity here, except in countries that have a form of thought crime where one can be charged for viewing certain material online
  • The APP is run by the user uploading their content to their own files on the network and writing a record, and they own all of that (it is not owned by anyone else)
  • The program, being run by another person and not hosted anywhere on anyone else’s server or personal computer, can then access the links written (and owned) by other people.
  • Unless the person who wrote the program writes their own data, no data on the “site” is owned or controlled by the person who wrote it. The program writer/owner has no control over the data, unless they write that function into the program, like a secondary indexing system.

In my opinion this is a new concept, where the writer of a program that runs on an interconnected network does not actually run the program; it’s not a site hosted by them (remember a safeNetwork site is not like a clearnet site, but is an Application, which is a program). The site is entirely run/executed by the person using the site.

Because of this I think new legal precedent needs to be laid down concerning the writer of a site/program being used by others for criminal activity. In my limited opinion I would think they are safe unless a case can be made that it was intentionally made for criminal activity. The closest analogy is the bittorrent program (not tracker sites): its maintainers are not charged with anything, and yet bittorrent basically does what you are talking about. It’s the sites hosting tracker/seed information that are in trouble, not the program writer.

And of course the person who uploads/links content in breach of the law is entirely responsible for their actions, and really cannot be certain that they will not be found out.

6 Likes

As Neo did, let me list some possible ways.

As the creator of that site you have to shift the main responsibility to other users: those who upload illegal content, or who only keep links to that content. Similar to a torrent site that hosts no illegal content itself.

And if authors should get back some fee, you just need to monetize that site, or let happy users pay small tips directly to the authors.

1 Like

SAFE Network can and probably will change the world, but the world will be able to adapt to it. I also think the concept of “website” will be irrelevant, and that Dapps won’t be critically needed for locating information either:

  1. A globally distributed SAFE Network will operate autonomously, disregarding any local or even global laws (just like the Internet basically allows anyone to connect to anyone else).

  2. Uploads will be anonymous and storage permanent and final. Unless the uploader leaves clues in the data or upload patterns, there is no way to prove non-compliance.

  3. Access to the addresses of uploaded information could be made permanent as well by incorporating addresses into “The Great Index of SAFE Network” that will undoubtedly come into existence and make all public information permanently searchable.
    This index does not need to be a Dapp owned by anyone. It can be a permanent graph of references which anyone can traverse starting from a number of easy to remember starting addresses.

  • The right to be forgotten won’t exist - maybe that’s fine as society should replace it with the right to be forgiven, and modernize outdated approaches such as social security numbers for life.

  • There won’t be a way to reverse disclosure of information - laws may make it illegal to search for and access specific information instead.

  • Privacy will worsen - privacy will need to be protected by preventing data collection in the first place and creating secure private environments.

Governments might try to shut down all vaults by making it illegal to run vaults, which may be difficult to enforce on a global scale, or try to make it illegal to access the network using unauthorized browsers. But when people feel that accessing certain information is worth it, even at great personal risk, SAFE Network will be there to allow them to access it. This is a check on governments that has proven valuable and rightful throughout history.

(I will note here that there will be many other more relevant benefits of course that have nothing to do with law, etc., but that are out of the scope of this topic)

So, yes, there will be a TBD way to put information out there anonymously, and keep it anonymously accessible forever, without needing Dapps.

7 Likes

Good replies. The non-site app idea is certainly fascinating. But I have some questions on how that would work. To avoid confusion I will refer to non-site app as app and site app as site/website.

I have only programmed for the website part of safe, so not as familiar with the app. But from what I gather, the app is personal in the sense that you upload content that you control (basically to a site that you own), you can then share that link (site) with other users, who can do the same with you. There is no centralized meeting place, no organized website curating content, comments, likes, search results etc.

I can certainly see the appeal of a p2p solution, but I’m unsure how you could build, let’s say a youtube clone this way. How would this work?

@drirmbda you mention in point 3 that there could be a large index. That index has to be accessed somehow and someone has to control the site where this index is located right? If not, how would that work exactly?

@neo If I understood you correctly (I find it a bit confusing that you mixed apps and sites into just “app”, but I’ll use my definitions mentioned in the beginning of my post), the part of your argument regarding a site not being run by the person who created it doesn’t make sense to me because the person who has ownership is who “runs it”, or rather: is the one responsible. Similarly, the bittorrent analogy fails because the creator of bittorrent does not have any control over it.

The user will upload content that they own; if it’s submitted to a site, the owner of that site is responsible for moderating it. While content itself cannot be purged, links to it can be edited out. I can’t see how, just because it’s hosted by the network, he doesn’t have any responsibility.

If we’re merely talking about an app here, I agree.

1 Like

Basically, the core of access to the safenetwork is through APIs, and we have helper programs to do that, which at this stage include the “client”, the “browser”, and the “CLI” (command-line interface).

Now programs can be written that call the APIs directly or through the “client” and these are what people usually refer to as APPs.

But on safe any javascript running on the browser can also be referred to as an APP as many have already.

But also, technically, a simple html document in the browser could loosely be called an APP, because the browser is the APP that displays the page it references.

Now a simple html document is not normally referred to as an APP, and we will not call it an APP on the SAFE network either (for hopefully obvious reasons), so it does not come into what you are discussing above.

But what does apply is APPs running in the browser, running via the CLI, or running as native programs using API calls.

So like a bittorrent program it is the person who is running the program (APP) who is technically responsible for what they do with the APP.

Now depending on how someone creates a youtube replacement on the safenetwork will determine the legal liability they hold for the content that is uploaded with the program.

  1. If the program ONLY uploads the video and updates an index somewhere, then the following applies:
  • All data is owned by the person running the program.
  • The author of the program has no liability since they did nothing. AKA the bittorrent situation.
  • Basically this is all any good safe program will do anyhow.
  2. The author of the program has pre-purchased ADs in order to create indexes they can manipulate later on. E.g. the person uploading a video also writes (via the program) records in those ADs owned by the author.
  • Then there is liability for the use of those ADs owned by the author.
  • The extent of liability will depend on what those ADs are used for.
    • If used for indexing the videos and other things, then the law will be broken if the indexing is not updated to exclude content violating the law.
  3. The author of the program runs a secondary program that takes the data written by the users of the youtube program and creates whatever (eg indexes).
  • The author is then liable for all the data the secondary program writes.
  • AKA the bittorrent situation (that’s OK), except the author also runs a bittorrent tracker site, and that has been deemed liable under the law, with liability falling upon the person(s) running the tracker site.
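The ownership split in the cases above can be illustrated with a toy sketch. This is hypothetical Python only, not the real SAFE API; `Record` and `AppendableData` are made-up names standing in for signed network writes:

```python
# Toy model of data ownership on the network (NOT the real SAFE API).
# Every write is signed, so each record and each AD carries an owner key.
from dataclasses import dataclass, field

@dataclass
class Record:
    name: str
    owner: str                        # key of whoever signed the write

@dataclass
class AppendableData:
    owner: str                        # whoever created (paid for) the AD
    entries: list = field(default_factory=list)

    def append(self, record: Record) -> None:
        self.entries.append(record)

# Case 1: the uploader creates both the video record and their own index AD.
uploader_index = AppendableData(owner="uploader_pk")
uploader_index.append(Record("video123", owner="uploader_pk"))

# Case 2: the program author pre-purchased the index AD; users append to it.
author_index = AppendableData(owner="author_pk")
author_index.append(Record("video123", owner="uploader_pk"))
```

In case 1 every object involved is owned by the person running the program; in case 2 the index AD itself is owned by the author, and that ownership difference is what the liability argument turns on.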

The issue is who owns the data, and on safe the normal thing is that the one running the program (APP) owns the data.

Sorry, but that is the way things are on the SAFE network. Sites are ONLY run by the user, since they are run on the user’s own computer.

If no one is “accessing the site” (using terminology of the current internet) on the SAFE network then the site only exists as data spread across the network.

3 Likes

What does “index somewhere” mean? Doesn’t this index have to be owned by someone? The owner of the index would have to remove the copyrighted video from the index don’t they?

And the liability is for the author of the program right?

Well yes, technically users “run” the website: they download the html, css, images and javascript and then run it locally in their own browser. How is that different from clearnet? It isn’t. These files are also transferred on clearnet and run in the user’s own browser.

The only difference is that the owner has outsourced hosting to a decentralized platform rather than a centralized hosting. He still remains the owner and legally responsible for what occurs on his platform because he is in control.

The distinction you could make is that there is no server-side code on maid, while there could be, and usually is, on clearnet. I don’t believe that is a significant point, because the data transfer itself is illegal; the server processing is not relevant. If a clearnet host sent you an html file and an accompanying image file with copyrighted material, it would be in trouble because it has the ability to modify and/or stop serving those files. Similarly on maid, the owner, the person who uploaded those html files, has the ability to stop serving them under his site name, and should do so. He cannot stop the version history, but the current latest version, the one that people see when they visit his site name, that’s the one that must be moderated. If he doesn’t do it, he is legally responsible. That’s how I see it.

1 Like

Means some sort of index.

For instance, a user wants people to be able to search their channel, so they create an index AD under an upper AD, and both the entry in the upper AD and their index AD are owned by the user.

Thus the user is liable and responsible for their uploads and index entries.

Depends on how the legal system sees a user writing an entry in an AD owned by another (e.g. the program author). For instance, how does the law deal with someone who supplies a cork notice board and someone else pins a URL to illegal material on that cork notice board? This is the same thing.

So I do not know and I suspect it would be up to the legal jurisdiction and/or ability of a lawyer to explain the difference between owning a notice board and the person pinning a link to illegal material. Who knows, maybe a lawyer does.

Not technically but in actuality. All traffic is generated by the user, all code is run by the user, all data accessed is accessed by the user, all data stored is stored by the user. The user is the ONLY entity, and I mean the ONLY entity, that does anything for that session. There is no server in the background running site code, no server backend storing the data, so there is no one else responsible for the data read/stored by that user running the site.

How can I be more clear? The author had no say, had no control, and did nothing when the user stored their data. The author owned or controlled no infrastructure, no server, no database storage.

1 Like

Now that I have more time I want to explain why I said that OP was so wrong on many levels.
From the legal perspective, the author of either a smart contract or an illegal Dapp will still be held responsible for the illegal app.
OP’s interpretation of the law is absolutely wrong; just because it can’t be technically attributed to you doesn’t mean you aren’t a criminal under the law. Under such circumstances, all three scenarios (losing the key, collective administration and anonymous administration) would require the same level of opsec to slither under the radar.

The moment someone testifies that you authored it but deleted your key to cover your tracks, it can make your life doubly difficult, as it can be interpreted as destruction of evidence. Even if you think that you are safe because the direct evidence linking you to the illegal dapp has disappeared, witnesses count as direct evidence.

The CFTC has already decided on this, and they will prosecute any author of illegal smart contracts, so saying that you are essentially “running a money laundering operation completely legally” on a dapp is very wrong.

Cryptos aren’t money laundering operations, in the same way cash isn’t a money laundering operation.
It can be used as an instrument for money laundering, but it is not one in itself.
You are not thinking this clearly.

The only way to have a legal operation is if the design of the dapp is not overtly promoting illegal purposes.
Going back to your example of DMCA violations:
You create an open app to “share home videos”, instead of creating an app to “pirate videos”. Then you transfer that responsibility to the user, it is up to the user to misuse the purpose of the site.
And as the design of the network makes deletion of public data impossible, it is out of your hands as the author of the dapp.

And yet, you still need opsec, because if word spreads that your intention with such an apparently vanilla project was actually to facilitate DMCA violations on purpose, you are again in trouble.
Proving intention is hard, but if you confessed it to a friend or a partner, you now have a witness.

2 Likes

Let’s use your cork board analogy, I like that one. We’re in a town where a man named John has set up a gigantic cork board with big letters telling people they can pin up pictures for the town to see. There is one caveat: this is a magic cork board. People can put any pictures they like on the board, whenever they like, but nobody except John himself can remove pictures from the cork board.

Days, weeks and months fly by, and the cork board fills up nicely with beautiful art, drawings, stunning landscape imagery, portraits and other fine images. People are enjoying the board and visit it many times a day. The board is slowly becoming a meeting place, a place to get together and have a great time.

After enjoying the cork board and all it has to offer for many months, suddenly, one dark and gloomy morning, there’s a surprise on the cork board. There’s a picture of a naked girl, involved in sexual activity. It is child pornography. People are gathering around the board in anger. “Who is this evil person who put this up on the board?”, “Why can’t we remove it?”. The crowd is chanting for the image to be removed as John, the owner and creator of the cork board comes rambling along, whistling cheerfully and smiling at the crowd as he asks confused “What is the matter with you all, are you not enjoying the cork board?”

The crowd is furious! “John, can’t you see it? Somebody has put child porn on the cork board. You have to remove it!” they chant.

“Well”, says John. “This is not my responsibility”. “The person who pinned cp on that wall did it all by themselves, there is nothing in the background helping to pin that to the wall, he did it, not me. The guy who did that, was the ONLY entity and I mean the ONLY entity that did anything tonight, I had no say or control over this, I did not do anything when he pinned the cp to the board. I don’t own the building the cork is placed on, I don’t own the road that leads to it, I don’t even own the board anymore. All I have is my magic ability to remove pictures from the wall.”

“So remove it!” screams the crowd at John, a few of them holding up pitchforks pointed towards John.

“Nah” says John. “Not my responsibility, I don’t really give a shit”.

This story is exactly how this works. What do you think happened to John? He was hanged by the mob. As long as there is someone with private keys who controls a site and retains the ability to moderate it, they are responsible. I also don’t know how to be more clear about this. You can’t just refuse to delete copyrighted material, terrorist videos and child porn from your own web site; if you CAN do it, you MUST do it. If you don’t do what you can to remove illegal material, you are a criminal.

2 Likes

On SAFE, John doesn’t have the ability to remove the material. He can remove the link to it from his board, but the material remains, suspended forever accessible to anyone who made a note of its location on the network (not John’s board).

4 Likes

The difference would be that this cork board is on a road that holds only that board, and you must decide to walk down it for that and no other reason. If that makes sense? I mean, you don’t need to look at those kinds of sites/images etc.

Your points are being made well, but the responsibility of not losing data is an absolute for humanity IMO; the freedom of perpetual data we provide has a cost. You are challenging the cost, but the challenge has a cost also. Who decides? Who then watches the watchmen, and so on.

5 Likes

If you set up a website with non-criminal intent and lose the key, and the site is taken over by people posting illegal terrorism propaganda, how are you at fault? I have never heard of such a case. Show me one single episode where a person had zero control over a decentralized application and was punished for that.

This makes no sense at all, tell me, why is Vitalik not in jail for creating a massive money laundering operation (ethereum) ? You do realize that launching a crypto currency without KYC/AML and ignoring all other financial regulations and licenses would send you to jail for at least a couple of decades if it was centralized, right? The difference is between can’t and won’t.

Yes, that’s what all the criminals would do to mask their criminal activity. Pretend that it’s not supposed to do anything illegal, and then do it anyway.

I never said that the app’s purpose would be to pirate videos. I can agree that it would be easier for government to justify some kind of prosecution if you did that. I am still not aware of any single event where this has happened, maybe you do? That said, you would still be responsible when you make the decision to not remove links to illegal videos, so you are right in that you’d need to have good opsec, because quite frankly: the alternative would be jail.

1 Like

Yes, correct. I feel like that is more or less semantics though. The point is that John must spend a lot of time now moderating content because if anything slips through the cracks, he’s in trouble.

I completely agree from a user perspective. My concern is rather that the admin of a site will get in trouble for what his users are doing on the site (e.g. linking to illegal stuff). Not everybody has time to moderate content all day, and some might not want to. For example, maybe I want to launch a site with true free speech, but then there are laws in some countries against “hate speech” that mean I have to censor speech on my website to stay compliant, against my will. It would be neat if there was a way for the site to be self-moderating. I can’t really think of a good way to accomplish this; maybe it’s simply not possible.

Either way, I am completely aligned with your ideas regarding anti-censorship on the network as whole, it’s comforting to know that you believe this and are not willing to make exceptions.

3 Likes

Hello everyone! I don’t think I understood the conclusions you made above.

If person A makes an innocent website, say, a forum, and then person B makes a not-so-innocent post (say, an illegal copy of a piece of content protected by copyright law), is person A held responsible?

It is clear to me that in this scenario person B violated the law by performing an illegal distribution. But what about person A?

It does bear resemblance to torrent websites, users of which are trying to hide behind vpns or something. But it is different in particular ways because of the reasons you mentioned. So what do you think?

Hello @ch3rn0v and welcome to the forum! The current precedent in law is for person A to be responsible if he refuses to remove the post after being notified of it / made aware of it. Article 13, which is coming up, might change this. I don’t think there’s any reason to assume this law would apply differently to a safe site as long as the admin retains ownership/keys and thus control over content posted there.

Thanks for the reply. But if person A does want to comply, there is still no way to do so because of information’s perpetuity, is that correct?

Just to rewind here - the owner of the cork board is just a UID. The creator could choose to remain anonymous and may also choose not to moderate. ‘John’ would not be known as the cork board owner, it would just be a hashed UID.

IMO, there will be two sorts of sites. One will make obvious their identity and will ensure moderation is done. The other will remain anonymous and will make no such moderation guarantees.

Those who do not wish to view data which may not be legal/tasteful, will go to the moderated site. Others will take their chances with the unmoderated site.

I was replying with the perception that you wanted an illegal service from the get-go, hence that first response. Your later example was about money laundering; there is no way to set that up in a non-criminal way:

My statement still remains correct: if it is an illegal dapp (not a non-criminal one being abused by criminals), the creator of the dApp will be held responsible.
That’s what the FinCEN says, and it is what the CFTC says, and it is what the SEC says.

Ethereum is a platform on which you can build an infinite number of useful services. The platform is a tool with which its users/adopters can do whatever they want. It wasn’t built for the purpose of laundering money.

What would be an illegal service would be a smart contract that does ether mixing, that would constitute money laundering. The creators of the ether mixer would be liable, but not the developers of Ethereum.

Ethereum was a bad example. The best example of a crypto with dubious purposes would have been Monero or Zcash, whose sole purpose is to be untraceable.

I think you are speculating about the subject; that is not how it is.
FinCEN’s interpretation is that it is okay with privacy-focused software, but they will prosecute anyone who provides privacy services (violating the BSA).
There is a huge difference; this is why even Monero’s developers are in the clear.
In short, making the tools is not regulated, but using the tools to transmit value is regulated.

For now, they are content with regulating the gateways to the network (exchanges, businesses).
A decentralized exchange that doesn’t have its own custodial wallet is also not regulated. But if a decentralized exchange had its own custodial wallet, it would qualify as an MSB.

Ripple got in trouble not because of the development of the protocol, but because Ripple Labs was a money service business (especially with “RippleTrade”). If you are directly involved in exchanging fiat for crypto, you are an MSB.
(Btw, that’s what you get for being a centralized crypto; any other crypto that has mining/farming by its users would never have to face that)

Technically that falls into a very murky area known as oblique intention.
If you truly hadn’t designed it for illegal purposes, but you knew about the high probability of it being misused for illegal purposes, you then have an indirect criminal intention.
Now, how to prove that, that’s a whole other story.

On the SafeNetwork there is no way to remove content once it is published publicly. One of the main properties of the SafeNetwork is that everything that is uploaded is saved and accessible permanently; not even the uploader can take it down.
Your app might have a way to restrict/report access to such content from within your app, but the content will still be accessible directly if you have the direct link.
And there is nothing to stop anyone from getting the data from any other app.

Btw, an obligatory disclaimer:
This is the extent of my knowledge about the law, which I gathered from law courses in college and from being an old fart in crypto who’s been up to date with the regulatory guidances. Please don’t take any of this as legal advice, and if you need one, hire a proper lawyer :slight_smile:

2 Likes

The idea of the index was abstract; a usable, efficient, and perpetually extendable index still needs to be invented. However, it could be an immutable graph of linked files (serving as nodes/vertices in the graph) linked by references to a next file name. This is a distributed data structure. Below is a very simple and naive example. The top node could be called “index”.

  1. The index consists of one file initially: “File1” with a couple of key-value pairs:
    File1 {“artist=warz”: “File2”, “more”: “File3” , … }

  2. Someone anonymously PUTs an illegal copy of your video in a file named “Video123”.

  3. That person then searches the index anonymously for a node with a key “artist=warz”, and checks if the file named “File2” already exists. (If it does, he checks for the existence of file named “File3”, etc. until he finds a file name that does not exist yet.)

  4. That person then adds to the index, by PUTting a file containing the name “Video123”. The index could then look like this:
    File1 {“artist=warz”: “File2”, “more”: “File3” , … }
    File2 {“youtube_video”: “Video123”, “more”: “File4”, …}

This way, anyone who can write a script to traverse this graph can find the illegal video. There is no way to delete the illegal file, no way to prevent anyone from finding it, and no way to prevent access.
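Such a traversal script can be sketched as follows. This is a toy illustration in Python with an in-memory dict standing in for the network's immutable storage (the real network would GET files by name/address; the names here are the hypothetical ones from the example above):

```python
# Toy in-memory stand-in for immutable network storage (NOT the real SAFE API).
# Each index "file" maps a key either to a payload file name or, under "more",
# to the next index file in the chain.
network = {
    "File1": {"artist=warz": "File2", "more": "File3"},
    "File2": {"youtube_video": "Video123", "more": "File4"},
    "Video123": "<video bytes>",
}

def find(start, key):
    """Follow "more" links from `start` until a node containing `key` is found."""
    node = start
    while node in network and isinstance(network[node], dict):
        entries = network[node]
        if key in entries:
            return entries[key]      # found: return the referenced file name
        node = entries.get("more")   # otherwise hop to the next index node
    return None                      # chain ended without a match

# Traverse from the well-known root node to the video's address.
channel = find("File1", "artist=warz")   # -> "File2"
video = find(channel, "youtube_video")   # -> "Video123"
```

Because every file is immutable once PUT, anyone repeating this traversal reaches the same entry; no party can later edit a node to hide it, which is exactly the property being discussed.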

There are obvious challenges, such as keeping the graph permanently extensible and efficient, and dealing with disinformation intended to hide actual information, but the cost of PUTs, cryptography, proofs of time, location, etc. could help solve those problems. We probably do not need to solve these problems now. SAFE Network just needs to provide a minimum set of basic features.

SAFE Network needs to do a few things really well, I think in this order of importance: 1) Guarantee global availability, 2) Guarantee perpetual immutable storage, 3) Guarantee anonymity of PUTs, 4) Guarantee anonymity of GETs.

6 Likes

IMO copyright is wrong. Personally I don’t give a damn about any Statist laws - I care about what is rationally right and wrong. The Safe Network is a fundamental change of human existence. And just as the State COULD legislate and make ‘rain’ illegal … it doesn’t stop the rain.

3 Likes