Terrible way to go about it. If you have kids, you should download apps designed for people who have kids. Those apps can provide content filters, lockdown and other mechanisms.
Also, with proof of identity you can prove to the app that you're an adult (which is one way of proving you're not a kid). If the kid is a hacker they'll still get past all of this, but with enough creative thinking it can eventually work better than the Internet of today.
Revocable privacy aims to break the impasse in the privacy-versus-security debate. In essence, the idea is to design systems in such a way that no personal information is available unless a user violates the pre-established terms of service. Only in that case are their personal details (and when and how they violated the terms) revealed, and only to authorised parties, of course.
If you opt into a set of rules to use a certain app, then your privacy can be revoked if you break the laws governing that app. You could then make apps which require that any user opt into a set of rules/contracts compatible with your tribe.
If, for example, you want to minimize violence, you can design your app to do this by giving it a set of rules which actually have teeth. If a person violates the rules of the community using that particular app, their privacy could be revoked.
That is just one example of what you could do. It's opt-in decentralized authority. If you use an app which has a set of laws governing its use, the conditions can be made clear: follow the rules and there is no risk. Humans don't enforce these rules; the code does.
We do have to be careful with this, because most people don't read the terms of service. There is a danger that subversive individuals could use complexity to disguise the true nature of a decentralized app and create honeypots. So I do think there is risk, but the rewards of taking this approach are greater than the risks. If something like this were to work, then police and governments would have no argument for trying to ban or impose rules on decentralized autonomous communities.
It allows us to create virtual laws written in code which self-enforce according to clear, unchangeable rules/indicators. In the real world, law can be changed at a politician's whim, so it doesn't have much meaning; the constitution is selectively interpreted. In our world there could be clarity.
If an app developer uses this technology, then someone abusing the community would be breaking the rules, provided those rules are coded in. What those rules should be is anyone's guess.
Here are some examples of beneficial uses:
If Alice trades with Bob, she might want privacy revoked when a trade involves more than a certain amount of money. She could present the contract to Bob, and when they make a deal worth more than $10,000, for example, it would hand the transaction history to a third party. Alice and Bob would never have to know each other's identities, but the third party would receive both their identities if and only if the transaction exceeds $10,000. Bob would see this condition in the contract Alice requires to do business, and could reject it if he feels he has more to lose than to gain from the deal.
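The contract logic here can be sketched roughly as follows (the `settle_trade` function and all names are hypothetical; a real system would encrypt the identities to the third party's key rather than trust a plain function to withhold them):

```python
# Hypothetical sketch: identities stay private unless the trade
# crosses the disclosure threshold agreed in the contract up front.
DISCLOSURE_THRESHOLD = 10_000

def settle_trade(amount, alice_id, bob_id, third_party_log):
    """Settle a trade; disclose both identities only above the threshold."""
    if amount > DISCLOSURE_THRESHOLD:
        # Condition met: forward both identities to the chosen third party.
        third_party_log.append((alice_id, bob_id, amount))
        return "settled, identities disclosed"
    return "settled, identities remain private"

log = []
print(settle_trade(5_000, "alice", "bob", log))   # below threshold: no disclosure
print(settle_trade(12_000, "alice", "bob", log))  # above threshold: disclosure triggered
```

The point of the sketch is that disclosure is decided by the coded condition, not by either party's discretion after the fact.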
Having revocable privacy can either give rules/laws teeth or it can be used to allow Alice and Bob to determine conditions in which a third party or third parties would be given private information.
For example, through a contract Bob could set up a dead man's switch so that his identity isn't unlocked unless specific people in the community suspect something bad has happened to him. If enough people in the community suspect that he has died, or if there is some evidence that he is dead (depending on how the dead man's switch is designed), then every bit of useful data related to his private dealings could be released to his selection of third parties.
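A minimal sketch of such a dead man's switch, assuming two hypothetical release conditions (a missed check-in window plus a threshold of attesting peers; the numbers and names are illustrative only):

```python
# Hypothetical dead man's switch: release Bob's sealed data to his
# chosen third parties only if he has missed his check-in window AND
# enough of his designated peers attest that something is wrong.
CHECKIN_WINDOW = 30          # days Bob has to check in
ATTESTATION_THRESHOLD = 3    # distinct peers who must agree

def switch_fires(days_since_checkin, attesting_peers):
    """True when both release conditions in the contract are met."""
    return (days_since_checkin > CHECKIN_WINDOW
            and len(set(attesting_peers)) >= ATTESTATION_THRESHOLD)

# Bob checked in recently: nothing is released even with attestations.
print(switch_fires(5, {"carol", "dave", "erin"}))    # False
# Bob silent for 45 days and three peers attest: data is released.
print(switch_fires(45, {"carol", "dave", "erin"}))   # True
```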
A final example: suppose you have a social network like SAFEBook. There are people on this social network who are friends and who care for one another. Some tragic event happens, and we find out it's impossible to investigate the tragedy because the victim did not choose a revocable encryption scheme. As a result, there is no way to find out what happened to them.
Now suppose they did set up a revocable privacy scheme, and so did some of the people they interacted with. Then, if enough of their friends think something bad has happened to them, those friends can vote to revoke privacy after the fact. The threshold, and the third parties who receive the data, would both have been chosen in advance by the victim.
So let's say Alice is the victim here. She could have set up, in advance, a contract which says: if more than a certain threshold of my selected peers believe something has happened to me, then according to my wishes they have the capability to revoke my privacy, which is automatically forwarded to these specified third parties. Alice's friends would not even have to know her identity themselves; they would only have the ability to revoke her privacy and have the data forwarded to the selected third parties, which might not include any of them. This would allow an investigation to be triggered if enough of her friends believe it should be.
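This kind of threshold revocation doesn't need a trusted coordinator. One standard way to build it is Shamir secret sharing: Alice splits the key that unseals her data into n shares, one per trusted friend, and any k of them together can reconstruct the key and forward the unsealed data onward, while fewer than k learn nothing. A toy sketch (the field size and parameters are illustrative, not production choices):

```python
# Toy Shamir secret sharing: split a key into n shares so that any k
# of them reconstruct it via Lagrange interpolation over a prime field.
import random

PRIME = 2_147_483_647  # 2^31 - 1, a Mersenne prime; toy-sized for the sketch

def split_secret(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123_456_789
shares = split_secret(key, k=3, n=5)   # five friends, any three suffice
print(reconstruct(shares[:3]) == key)  # True: threshold met
```

In Alice's scenario, each friend would hold one share; a vote to "revoke" is simply k friends contributing their shares, after which the reconstructed key unseals her data for the third parties she named.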
I think this is very powerful, both conceptually and as a feature if implemented. If Alice dies or something happens to her, an investigation could happen if and only if she wants that in her contract. This means the power is in her hands, but it also gives the network a way to investigate if people opt in to the contract (and I would think most people would).
To determine if this approach has any merit, ask yourself whether there are any circumstances in which you would want your privacy revoked, partially or entirely. Would you revoke part or all of your privacy to save a friend's life, for instance? The peers in the network can provide the intelligence which triggers the revocation, yet it can be controlled so that you remain anonymous to those very peers.
[quote=“stuffminer, post:4, topic:771, full:true”]
I have a young family and yes this is a concern.
The existence of dodgy people will be a given but I guess we have to think of a way to mitigate harm to vulnerable individuals like the young ones.
My questions are:
How can we detect if abuse has happened or is happening… a reporting mechanism?
Do we have the ability to shut down a node/service using maidsafe?
Is the ‘KYC’ idea good? (Now we’re destroying our anonymity philosophy.)
Do we farm out the parental control mechanism to app developers?
Can we employ smart algorithms to detect images that abuse young people? I doubt this… we have difficulty doing this now, let alone on maidsafe. I stand to be corrected.
[/quote]
The answer is to use the SAFE Network itself to protect children, as a way to head off the bad press. Use the power of smart contracts to empower investigators in unexpected ways, but without giving investigators unnecessary authority: they don't need to monitor everything everyone does in search of a crime.
In your own contract with the network you could set up, in advance, the conditions under which you want your privacy to be revoked. You could select friends to whom you give the power to initiate an investigation or revoke your privacy, and you would determine where your information goes when your peers vote to revoke it (you select the third party or parties). This means you're ultimately in control of what happens to your information, even if there is a tragedy.
This level of control should be built into the SAFE Network. There is no reason to give authority to external entities; the SAFE Network itself could facilitate network-wide investigations through a web of smart contracts. For example, if I am willing to give up my privacy in matters of national security, that could easily be a self-enforcing contract which revokes my privacy any time a national-security investigation is initiated. That is the power of revocable privacy, and it can still keep me anonymous to everybody else.
This would mean none of my friends would ever receive my information. Their vote would be a vote to revoke my privacy to the specified third parties that I chose, and those third parties would then be able to see everything I did and who I am.