Storj Beta Update

My point was simply that if you can know what “public”* chunks are being stored in your vault, then the path is open to make it mandatory that you exclude whatever is stipulated by the state. And the easier that becomes, the more arbitrary such mandatory exclusions become.

* …which can be anything accessible to more than one person, where one of the people is an agent or snitch.

EDIT: In practice it’s fine if Storj has that soft standard, but SAFE should aim for the more stringent standard of complete deniability. We are going to have market specialization anyway, so might as well acknowledge it.


How is that a soft standard? This is purely a question of political ideology and strategy. Of course, you can make up hypothetical scenarios in which people are forced to subscribe to a certain black/greylist, but that doesn't take into account what such enforcement would mean for political discourse. I could just as well argue that states will ban distributed storage in general and force ISPs to block any SAFE-related traffic. That really doesn't help, imho. Authoritarian states will always find a way to lock people (or rather: the mainstream) in so they can't access certain content and/or resources, but such an intervention would need to have legal character and would therefore (in contrast to the encroachments of certain security agencies) be visible to every citizen. Of course, if you have a conception of the human being in which people are programmed and act without free will (i.e. like @Pierce), I can see how it makes sense to be sceptical, but to me that is a kind of double standard ("everyone is manipulated, but me").

Honestly, I wonder what that path to making filters mandatory would look like in execution. I personally see the merit of allowing users to take a stance on public content, not only because I think it's fair to have a say in what public content is stored on your own computer, but because it lets us deal with the argument that SAFE is going to be a hard disk filled with disgusting stuff. This is where it becomes a strategic question. But sure, different people, different opinions. And of course, since the final product isn't there yet, we have a lot of time to discuss :wink:

I already stated it succinctly in my previous comment. The rest of your comment goes off on an irrelevant tangent about politics (irrelevant insofar as it concerns the mechanism rather than a motivation for being concerned).

If you have such a choice, then it is only one step, down a slippery slope, to being forced to make that choice a particular way.


I have to say I have seen better hand-waving. While I said that your idea of "standards" is determined by political ideology, I also argued that your point is inconsistent: you make up a highly hypothetical argument (regimes making some grey/blacklists mandatory) while neglecting other hypothetical (but more likely) arguments (e.g. that regimes will be even more interested in banning the distributed network if there were no way to distinguish content) that would lead to the opposite conclusion. I also asked you concretely how you think a particular regime would implement "mandatory grey/blacklists". I guess that was also only an irrelevant tangent about politics…

In the end, the SAFE network will allow grey/blacklisting if people are able to track chunks from public content data maps, so there is no real difference between Storj and SAFE beyond the fact that Storj apparently allows it out of the box.

I can’t (and don’t need to) give an exact description of how greylists might be implemented. It is sufficient to note that there is nothing to stop their implementation, to any degree of granularity and arbitrariness, once they are possible. If you know of some such fundamental barrier then please state it.

Looks like a ‘no’ and some hand waving.


And trivial examples such as cats and ISIS.

The question is not whether they can be implemented, but how a regime would force users to subscribe to them [and control and penalize those who don’t] (hopefully you see the qualitative difference). The mere possibility isn’t really enough to make a plausible argument. The fundamental barrier you are asking for is the individual user, who doesn’t have to implement greylists, and there are plenty of users who will not subscribe to these kinds of lists.

Nope, if he can be threatened with severe legal action for hosting proscribed content then he will comply, with rare, heroic exceptions. The recent legal carte blanche granted to the FBI to remotely access any machine shows the way things are actually moving.


Think you answered your own question. They only have a piece of encrypted data. Data that they shouldn’t have the keys to. Unless you have the decryption key to the file on your drive, then you have full plausible deniability.
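To make the "only a piece of encrypted data" point concrete, here is a minimal toy sketch of convergent encryption (a key derived from the content itself), in the spirit of SAFE-style self-encryption. Everything here is hypothetical and simplified: the XOR "cipher" is for illustration only, not a real cipher, and none of these names are actual SAFE or Storj APIs. The point is that the host stores opaque ciphertext addressed by its own hash, while the key lives only in the owner's data map.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream for illustration only -- NOT a real cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def self_encrypt(chunk: bytes) -> tuple[bytes, bytes]:
    # Key derived from the chunk's own content (convergent encryption).
    key = hashlib.sha256(chunk).digest()
    cipher = bytes(a ^ b for a, b in zip(chunk, keystream(key, len(chunk))))
    return key, cipher

key, stored = self_encrypt(b"some public file contents")
# The host stores only `stored` (ciphertext), addressed by its hash:
chunk_id = hashlib.sha256(stored).hexdigest()
# Without `key` (held in the owner's data map), `stored` is opaque noise.
```

Decryption is the same XOR with the same keystream, so only someone holding the key from the data map can recover the plaintext; the farmer holding `stored` cannot.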

I can get into the legal specifics (I have a long, expensive legal brief on it) of how the users are legally covered, but I think the underlying question is whether it can be abused by governments. The answer is no. They can try to force users into greylists, but they can’t enforce that on every single user in a distributed and open network. If they could, wouldn’t they just put a backdoor in 51%+ of all Bitcoin nodes?

The answer is yes, technically. And you have to be specific about which legal jurisdiction we are talking about.

There are also a few more technical tricks up our sleeves we can use so that you can’t actually tell the original farmer of the file. Basically it involves creating a poor man’s Tor network with network tunnels. In either case, that is already your third level of protection for the farmer. You should not have to go that far (unless you live in North Korea, or the future).

If you can find out which ID is saving questionable data and block it, I don’t see how this can facilitate plausible deniability. You can find out what is being hosted and greylist it.

@cretz’ question seemed quite clear to me. I don’t think he was asking about whether you thought it mattered or not.

This post was flagged by the community and is temporarily hidden.

My whole point was that if you don’t start with free will, you should apply that to yourself as well. Those who bluster about social programming are often the ones who count themselves among the enlightened few who can see through the manipulation, and I don’t buy into that. “Manipulation” is not a disease one can fall ill with; it is the modus operandi of every social relation. There are different quantities, yes, but not different qualities. I also don’t believe that you stand for neutrality. People are not neutral; they have stances, and it is absolutely fine that they can have them. For instance, it is totally up to you not to use the SAFE network if it technically allows blocking content (which, as I read the other thread, is very likely). The question is not whether it is efficient or not; the question is whether it is possible or not. If it is possible, it will be done. I certainly would do it.

This is not about “illegal” files. It is about a personal choice not to host certain PUBLIC files. Public and private data are equal at the chunk level, but public data is openly published and its location shared. If it were my choice, I could build a list that plainly blocks public content and gathers chunks from private content in small vaults. If that differentiation is possible, I don’t see any added instability. However, again, if it is possible, it wouldn’t matter anyway, because then the network has to deal with it.
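A hypothetical sketch of how such a list could work in practice, assuming only what the thread states (public data maps list the names of public chunks, while private chunks appear in no public map): a vault operator hashes an incoming chunk and refuses it if the name appears in a list compiled from public data maps. The names and list contents here are purely illustrative, not any actual SAFE or Storj interface.

```python
import hashlib

# Hypothetical greylist compiled by collecting chunk names
# from published (public) data maps.
greylist = {
    hashlib.sha256(b"chunk-from-some-public-file").hexdigest(),
}

def accept_chunk(chunk: bytes) -> bool:
    # A vault could refuse any chunk whose name is on the list.
    # Private chunks never appear in a public data map, so they pass.
    return hashlib.sha256(chunk).hexdigest() not in greylist

accept_chunk(b"chunk-from-some-public-file")  # False: greylisted
accept_chunk(b"some private chunk")           # True
```

Note that this only distinguishes chunks whose names have been published somewhere; it says nothing about chunks the operator cannot look up, which is exactly the public/private asymmetry being argued over.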

In the end, it seems hard to make a case without the actual software being there. I don’t even know if the decision that public data is cheaper than private data is still up to date. So many aspects of the network are still missing, but I’ll be happy to continue the discussion once the software is released.

I see the discussion has shifted from the “could” to the “should”, which was probably the concern all along.

OK, SAFEnet should have strong “deniability” (i.e., unknowability on the part of the host of the origin of chunks) because if it doesn’t then it will be much less distinguishable from other distributed storage systems that will come into that market space. You’ve got to wonder why someone would argue strenuously for greylistability for SAFEnet instead of just joining IPFS or Storj.

It is not unlike the statement that bankers often come out with: that Bitcoin is impractical/undesirable but blockchain technology is wonderful, and can’t they run it in a walled garden while keeping their institutional structures the same? But Andreas Antonopoulos has plucked that canard featherless (look up his speech at a banker conference). The coin is the fuel of the system, payment to miners for keeping everyone honest, and the lack of central control, its feralness, has toughened it against the best attacks that can be thrown at it. Ungreylistability/unknowability* is as important for SAFEnet’s future as the lack of a control center is for Bitcoin.

Another analogy is the fungibility of money: it is a very good thing that it is infeasible for people to know what the currency notes in their wallet were used for a few exchanges ago, and that banknotes cannot (legally or practically) be earmarked as good or bad. Without that property, the money system would break down.

* More accurate terms than deniability, since deniability implies that the host might know the origin of chunks but can plausibly pretend not to know. Ungreylistability/unknowability means that it is infeasible for the host to find out the origin of chunks, and that relieves him of a great burden of concern and much complication.


You’ve got to wonder why people have different opinions :slight_smile: probably because we are not living in a binary world. Btw. there is no need to argue for “greylistability” within the SAFE network, because one way or another it will be possible for public content. But you never know, philosopher’s stone and such…

Are you saying that ungreylistability/unknowability is impossible? That’s a strong statement that you need to support, but I see no such support so far. And if you believe that to be the case then what are you on about? :slight_smile:

I suspect that a host, or anyone, cannot prove that he doesn’t know something, such as the origin of a data chunk, but it is enough for our purposes simply to make it sufficiently expensive that no-one will try to find out that information. “One way or another”, that will surely be possible. I argue that it is essential, in order for SAFEnet to fulfill its destiny.


In fact, I literally said “you never know”. It is you who coins terms like “ungreylistability” and then asks me for proof that they are null. When we started this debate you feared that states could make greylists mandatory, simply because greylists are possible. I asked you (repeatedly) how this could take place, because it would be a profound legal intervention into the autonomy of people, one that I don’t expect people in liberal democracies would agree to. If you consider this level of intervention possible, which is, of course, a valid opinion as well, then you should also consider it possible that the same regime would simply block all access to distributed networks, or pursue people who use mesh networks, and so on; which then brings us to the “conclusion” that there is no sense in developing the SAFE network. One way or another, that is what I said, based on your own assessment of what states could do to enforce what they think is right.

I don’t believe that the mere possibility will form the path to implementation, as you argued earlier; I believe that this is a social and political question. If you come up with a network that is easy to ban with the child-porn hammer, a ban is more likely to take place, that’s what I would suppose. Again, just my opinion.

As far as I understand David in the other thread, it will be possible to track chunks based on the public data map. If you want to argue the possibility of “ungreylistability”, it is up to you to provide support, but I see no such support so far. Actually, you went on to argue for making it “sufficiently expensive”, but expensive is precisely not impossible.

Even though I don’t consider it important that the network prevents people from determining the public use of their personal space, as you apparently do, I’d love to see the solutions to that “problem”. Keep me updated.

There is a distinction that I must make clear again: there is mathematically certain ungreylistability/unknowability, and there is practical, good-enough ungreylistability/unknowability (which might take an attacker 500 years to break, for example). I make no argument about whether the former is possible or impossible, and neither do you; I actually suspect that it is not possible. I do argue that the latter form of ungreylistability/unknowability is not only possible but, by analogy with other areas of information security, eminently achievable. Is that clear?

Having made that distinction clear, I go on to argue that practical ungreylistability/unknowability is both desirable and necessary for SAFEnet to be more than just another also-ran in the area of distributed networks.

As for the discussion between David and others more technically competent than either of us: he was, as I understand it, thinking about ways to achieve practical ungreylistability/unknowability, and the consensus seemed to be that mathematical ungreylistability/unknowability is not possible, at least with current technology.

On the question of the political feasibility of the intervention scenario that I described, it is clear from historical trends that there are no long-term obstacles. As evidence I would cite the numerous intrusions into our lives by agents of the state that increase with each passing year. I also consider the democratic electoral process to be a complete sham, but that’s another discussion.


This post was flagged by the community and is temporarily hidden.