Attacks: can SAFE protect against tracking of client computers?


Two sophisticated tracking techniques have just come to light, adding to the already known methods such as cookies. These new methods seem to be unblockable.

Can MaidSafe protect against these and other methods content providers use to track physical devices, and if so, how do we achieve this?


I think the biggest thing we do is provide a network on which it is super simple and cheap to build new Facebooks/Dropboxes and pretty much any web service. So if a Diaspora came along again there would be no setup or technical hurdles for users, just a use-it model. That said, if even Facebook went on SAFE and did this snooping in conjunction with others, the picture would be the same: they could make their app communicate via SSH/SSL connections back to them, outside the SAFE network.

Our response is to give every builder the ability to create a network / application that will scale at pretty much zero cost and actually improve with adoption. So the significant costs of deployment are gone, or at least reduced to a minimal amount. Very importantly, though, this will all be secured and hidden from users, so they should see no difference, or perhaps a positive one.

My feeling is this new way of networking will allow worldwide innovations and competition to flourish. If open source versions are predominant then they should cross borders easily as governments and people will be able to audit what’s happening.

I am very hopeful that this will lower the barrier to innovation so that everyone can create a Facebook, Reddit or Dropbox et al, and this is the best way to win.

Technically, unless an app snoops on your machine, you never send IP:port info on SAFE anyway (beyond close connected endpoints), so there are definite differences in how they can do all this today. If those differences are not enough to overcome this level of spying, then I think it's down to us as a community to disclose abuses like this and dissuade people from using those kinds of apps. It should be possible to actually and fully challenge this kind of behaviour, this skirting of privacy rights for profit. A bad app of any kind can do untold damage, so it's a matter of letting people know. I think this is something the wider community will be active in anyway, as it is today.

I also think a "verified by XXX for privacy and security" badge in our app store would be neat for users.


We have to determine what “reasonably private” is. None of this technology is “absolutely private”.

Also, we have to put a stop to companies that sneak off with people's private data behind confusing, nearly unreadable TOS agreements and then profit from it. In my opinion, if a profit is to be made from users' private data at all, we should encourage users to sell it themselves. If users don't want to sell the data then it should remain private.

Security in this case would be about making invasion of privacy sufficiently expensive and sophisticated. If it takes a lot of effort, time and money, then it will only be done when it's really necessary. On the other hand, if it's as easy as a cookie or a canvas tracker, then they can use it to take your information without your knowledge or permission. It's really no different from what a hacker would do, only our legal system doesn't treat a person's IP address or device as personally identifiable information.
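To make concrete just how cheap this kind of tracking is, here is a minimal sketch of how a tracker can derive a stable device identifier from a handful of observable attributes, with no cookie stored on the client at all. The attribute names and values below are hypothetical examples, not any real tracker's schema:

```python
import hashlib

# Hypothetical attributes a page can read from a visiting device.
# Real fingerprinting scripts gather these (and more) in the browser;
# "canvas_hash" stands in for the hash of a rendered canvas image.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080x24",
    "timezone": "UTC-5",
    "fonts": "Arial,DejaVu Sans,Liberation Mono",
    "canvas_hash": "a91c8d2f",
}

# Combine the attributes deterministically and hash them: the result is
# a stable ID that follows the device across sites and sessions.
fingerprint = hashlib.sha256(
    "|".join(f"{k}={v}" for k, v in sorted(attributes.items())).encode()
).hexdigest()

print(fingerprint[:16])
```

The point is the asymmetry: a few lines of script give the tracker a persistent identifier, while the user has nothing to delete and no prompt to refuse.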

Fully homomorphically encrypted unique device databases might be a possible solution.

In my opinion, if device information is collected it must always be encrypted so that the personally or device-identifiable information is not human readable. Fully homomorphic encryption could allow this information to be collected without leaking any private details.

It was mentioned before that the Google attack could be stopped if the SAFE Network could identify each unique computer. We could do this and keep a fully homomorphically encrypted database which no human being could make sense of, but which could return a true or false value by running calculations on the encrypted values. The problem is that the added complexity creates risk, and I don't know if it's worth it unless it's the only way to defeat the Google attack.
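The key property here is computing on data that stays encrypted. As an illustration only, the toy below implements the Paillier cryptosystem, which is additively homomorphic (a much weaker property than the fully homomorphic encryption discussed above, but enough to show the idea): a server holding only ciphertexts can combine them, and only the key holder learns the result. The primes are tiny demo values and utterly insecure:

```python
import random
from math import gcd

# Toy Paillier parameters -- demo primes, NOT secure.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                      # standard simple choice of generator
lam = (p - 1) * (q - 1)        # multiple of lcm(p-1, q-1), works for decryption
mu = pow(lam, -1, n)           # modular inverse, valid since gcd(lam, n) == 1

def encrypt(m):
    """Encrypt plaintext m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover m from ciphertext c using the private key (lam, mu)."""
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts,
# so a server can total encrypted values without ever decrypting them.
c1, c2 = encrypt(12), encrypt(30)
c_sum = (c1 * c2) % n2
print(decrypt(c_sum))  # 42, computed without exposing 12 or 30
```

A true/false device-membership check of the kind described above would need more machinery (equality tests on ciphertexts), which is exactly where the added complexity and risk come from.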

Secure Computation on Genetic Data