Parental control mechanisms - Heading off bad press

Firstly, I’m not a parent…but I do know a couple who are both parents and school teachers, so I can bounce this discussion off them for a reality check.

Will parental control mechanisms be possible? I can just see the headlines/outrage if the network gets off on the wrong foot in this regard…brand new and scary.

1/ I’m assuming here that a Network ID can be created from any App, with LifeStuff being the only one on offer so far?

LifeStuff Demo

2/ Could an App offer a user the possibility of self-regulation associated with an ID?

3/ How do users find Apps on the network?

4/ How do the Private ID and Public ID function in practice? I can’t find any clear explanation of this.

Of course, kids will reach a certain level of maturity and realize that they can create their own account, the same as it is now.

If I went around schools presenting to parents within the next 12 months, how would I explain all of this to them? It needs to be very simple, clear and precise, allaying any fears of bogeymen. Parents do have a modicum of control now, up to a certain age.

Are we entering the wild west, or are light controls available, suitable for certain stages of age and maturity…either inherent in the network or available in the API for implementation?

I can possibly see the need for another of those hand-drawn videos, ‘SAFE Network explained for parents’, and indeed for anyone interested in the operational basics of the system…so far we only seem to have an overview of actual usability.

I’d appreciate all input on this; SAFE for parents needs to be bedded down, I feel…after all, these kids are going to be SAFE natives.

Edit: Brilliant post from developer Viv:

Personally I’m not a fan of imposing too many guidelines/requirements at the system level, firstly to allow ease of use and secondly to allow creativity.

Having something like “safe::public-name/” is fine since it’s fairly basic: you’re connecting through the network to some user. After that point, I’m not really a fan of adding /blog, /www, /something-else; that’s just glitter, and we don’t know if it’s going to be useful later on or just a hindrance.

Just as a thought: people often ask us how children would be protected in a network like this, where anyone could post anything they want. For all we know, in a while a whole new set of apps could come along that provide content deemed safe for children. Now, how they decide what is “safe for children” is a whole other topic, since the apps doing the filtering could either be valuable or could pretty much block you from seeing content they don’t want you to see, for their own benefit.

Now, if we assume we’ve got an app that filters on some agreed terms to show only content appropriate to a given set of criteria, I’d be completely fine with it imposing its own URL scheme, such as:

child-safe::any-name

Now this app’s code can be alerted by the child- prefix, check whether the URL entered matches its whitelist, and show the content by essentially going to safe://public-name in the background. While it’s pretty much just a forward the app is doing, as far as users are concerned this app is helping them keep their children safe from content they don’t want them seeing.
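A rough sketch of how that forwarding might look in practice. This is purely illustrative: the whitelist entries and the fetch_share() helper are hypothetical stand-ins for whatever client API the network eventually exposes.

```python
# Hypothetical sketch of the child-safe:: forwarding idea above.
# fetch_share() and the whitelist entries are illustrative, not a real SAFE API.

WHITELIST = {"kids-encyclopedia", "maths-games"}  # app-maintained approved names

def fetch_share(url: str) -> str:
    """Stand-in for the real client call that retrieves a public share."""
    return f"<contents of {url}>"

def open_child_safe(url: str) -> str:
    prefix = "child-safe::"
    if not url.startswith(prefix):
        raise ValueError("this app only handles child-safe:: URLs")
    name = url[len(prefix):]
    if name not in WHITELIST:
        raise PermissionError(f"{name!r} is not on the approved list")
    # The "forward" itself: quietly resolve the ordinary address.
    return fetch_share(f"safe://{name}")

print(open_child_safe("child-safe::kids-encyclopedia"))
```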

2 Likes

Good point @chrisfostertv
The parental control mechanism should take a different approach this time around. Kids generally go to kids’ sites, so maybe the SAFE browser could recognise this and not allow an account that visits kids’ sites to watch adult content.

1 Like

I’ll offer my thoughts here, but I haven’t discussed this in detail with David, so I may get a few points wrong. If so, I hope he’ll correct me :slight_smile:

The ID in this case could be a Public ID (similar in concept to an email address - at least human-readable) or a Private ID (the main anonymous one). Any app should be able to use either of these to allow the user to interact with the network; there wouldn’t need to be a different one per app.

I don’t see why not. The SAFE network’s equivalent of websites will be called “shares” - at least that’s what we’ve been calling them so far in-house. A share will be owned (i.e. can be written/modified) in some cases by just a single user, and in others with a group of users.

I would think that it would be fairly easy to have a whitelist of such shares held by an app, with access restricted to just this list. The tricky part will be populating that list of course!

The crux I guess will be that any such filtering will be done by apps - client-side. There are no plans that I know of to ever build such restrictions into the core network protocols.

Hopefully a team like DuckDuckGo or Google will provide search functionality. Again, I think it will mainly be client-side work, but we’ve toyed with adding the ability to make encrypted public data taggable to allow for searches.

The Private ID (also called MAID, MaidSafe Anonymous ID) is the main one users will need in order to store data to the network privately. Without this, you’ll still be able to access public data on the network, but not modify anything. It’s also the one which ties you to your vault(s), each of which has its own ID (aka PMID, Proxy MaidSafe ID). It won’t be human-readable and is unlikely ever to need to be touched by a user. For example, it’s this ID which we currently pass to Drive to allow the virtual filesystem to store chunks to the network. But there’s no user action involved there beyond logging in to the network; the app just retrieves the key from the session data once the user logs in.

The Public ID (aka MPID, MaidSafe Public ID) will be a human-readable ID chosen by the user. I baggsy “Fraser” :slight_smile: I’m not up on the finer points of the remit of this ID, but basically it will be the one allowed to perform public actions, e.g. writing to a public (non-anonymous) share, or messaging a friend at his/her Public ID.
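To make the split concrete, here’s how it might look from an app’s point of view. Every name below is my own illustration (stand-in functions included), not MaidSafe’s actual API:

```python
# Illustrative only: field and function names are assumptions, not MaidSafe's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Session:
    maid: bytes          # Private ID (MAID): anonymous, ties you to your vaults
    mpid: Optional[str]  # Public ID (MPID): human-readable, e.g. "Fraser"

def network_put(chunk: bytes, key: bytes) -> None:
    """Stand-in for the client call that stores a chunk privately."""
    print(f"stored {len(chunk)} bytes under key {key.hex()[:8]}...")

def network_send(sender: str, to: str, body: str) -> None:
    """Stand-in for the client call that delivers a message."""
    print(f"{sender} -> {to}: {body}")

def store_private_chunk(session: Session, chunk: bytes) -> None:
    # As described above, the app pulls the MAID from the session after login;
    # the user never sees or types it.
    network_put(chunk, key=session.maid)

def message_friend(session: Session, friend_mpid: str, text: str) -> None:
    # Public actions happen under the human-readable MPID.
    if session.mpid is None:
        raise RuntimeError("no Public ID chosen; public actions unavailable")
    network_send(sender=session.mpid, to=friend_mpid, body=text)
```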

It really is just a modicum. I’m the parent of a 14-year-old lad, and I don’t trust any of these controls. I’m not saying that apps shouldn’t try at all to provide controls, but they’re so easily broken now that I can’t see their usefulness. It’s even arguable that they’re counter-productive, serving only to lull parents into a false sense of security and allowing them to devolve their responsibility for moderating their child’s viewing to a tool which merely appears to work.

For example, for the past decade I’ve spent a considerable amount of (usually extremely boring) time Googling and watching clips of “[insert name of current best PC game in the world ever] gameplay”. I don’t even really trust the ratings on games. I’m somewhat of a tyrant there actually. Whenever he’d ask for some realistic war game, I’d make him go and research a particular battle or war in some detail and tell me his findings before he’d be allowed the game :slight_smile:

However, that’s all for naught when he visits a friend who owns a game which I’ve banned. What else is he going to do but play it? I know what I’d have done at that age :slight_smile:

3 Likes

I have a young family and yes this is a concern.

The existence of dodgy people will be a given, but I guess we have to think of a way to mitigate harm to vulnerable individuals like the young ones.

My questions are:

  1. How can we detect whether abuse has happened or is happening… a reporting mechanism?
  2. Do we have the ability to shut down a node/service using MaidSafe?
  3. Is the ‘KYC’ idea any good? (Now we’re destroying our anonymity philosophy.)
  4. Do we farm parental control mechanisms out to app developers?
  5. Can we employ smart algorithms to detect images that abuse young people? I doubt this… we have difficulty doing it now, let alone on MaidSafe. I stand to be corrected.

You monster! I’m contacting social services…lol

2 Likes

I agree this should probably be among the first apps, but unfortunately I doubt the SAFE Network team is wise enough to do it.

I have mentioned it many times already. I think more than likely SAFE Network isn’t for children and probably will never be. Adults who have children should be able to lock it down.

This is why I was in favor of biometrics. A child could plant a keylogger to find the password to an account, but with biometrics and other forms of proof of identity you can prove you’re an adult.

A terrible way to go about it. If you have kids, you should download apps designed for people who have kids. Those apps can provide content filters, lockdown and other mechanisms.

Also, if you have proof of identity you can prove you’re an adult (which is a way of proving you’re not a kid) to the app. If the kid is a hacker they’ll still get past all of this, but it can eventually work better than the Internet of today with enough creative thinking.

http://www.win.tue.nl/ipa/archive/falldays2010/Jacobs.pdf

Revocable privacy aims to break the impasse of the debate to achieve the status quo. In essence the idea of revocable privacy is to design systems in such a way that no personal information is available, unless a user violates the pre-established terms of service. Only in that case, his personal details (and when and how he violated the terms) are revealed. The data is only revealed to authorised parties, of course.

If you opt in to a set of rules to use a certain app, then your privacy can be revoked if you break the laws governing that app. You could then make apps which require any user to opt in to a set of rules/contracts compatible with your tribe.

If, for example, you want to minimize violence, you can design your app to do this by giving it a set of rules which actually have teeth. If a person violates the rules of the community using that particular app, their privacy could be revoked.

That is just one example of what you could do. It’s opt-in decentralized authority. If you use an app which has a set of laws governing its use, then you can make the conditions clear: if the rules are followed, there is no risk. Humans don’t enforce these rules, the code does.

We do have to be careful with this, because most people don’t read the terms of service. There is a danger that subversive individuals could use complexity to disguise the true nature of a decentralized app and create honeypots. So I do think there is a risk, but there are greater rewards from taking this approach than there are risks. If something like this were to work, then the police/governments would have no argument for trying to ban or impose rules on decentralized autonomous communities.

It allows us to create virtual laws written in code which self-enforce according to clear, unchangeable rules/indicators. In the real world, law can be changed at a politician’s whim, so laws don’t have much meaning. In the real world the constitution is selectively interpreted. In our world there could be clarity.

If an app developer uses this technology then someone who is abusing the community would be breaking the rules if those rules are coded in. What those rules should be is anyone’s guess.

Here are some examples of beneficial uses:

If Alice trades with Bob, she might want privacy revoked if a trade deals with more than a certain amount of money. She could present the contract to Bob, and when they make a deal worth more than $10,000, for example, it could give the transaction history to a third party. Alice and Bob would never have to know each other’s identities, but the third party would receive both their identities if and only if the transaction is for more than $10,000. Bob would see this in the contract Alice requires to do business and could reject it if he feels he has more to lose than to gain from the deal.
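As a toy model of that condition (the ThirdParty class and all the names here are made up for the example, not any real contract system):

```python
# Toy model of the $10,000 disclosure condition described above.
THRESHOLD = 10_000  # dollars, per the example

class ThirdParty:
    """Stand-in for the pre-agreed recipient of revoked identities."""
    def receive(self, alice_id: str, bob_id: str, amount: int) -> None:
        print(f"disclosed: {alice_id} traded with {bob_id} for ${amount}")

def settle_trade(amount: int, alice_id: str, bob_id: str, auditor: ThirdParty) -> None:
    # Below the threshold, neither identity ever leaves the contract.
    if amount > THRESHOLD:
        # The agreed condition is met: both identities go to the third party.
        auditor.receive(alice_id, bob_id, amount)

settle_trade(12_500, "alice-mpid", "bob-mpid", ThirdParty())  # triggers disclosure
settle_trade(9_000, "alice-mpid", "bob-mpid", ThirdParty())   # stays private
```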

Having revocable privacy can either give rules/laws teeth or it can be used to allow Alice and Bob to determine conditions in which a third party or third parties would be given private information.

For example, through a contract Bob could set up a dead man’s switch so that his identity isn’t unlocked unless specific people in the community suspect something bad has happened to him. If enough people in the community suspect that he has died, or if there is some evidence that he is dead because of how the dead man’s switch is designed, then every bit of useful data related to his private dealings could be released to his selection of third parties.

A final example: suppose you have a social network like SAFEBook. There are people on this social network who are friends and who care for one another. Some tragic event happens, and we find out that it’s impossible to investigate the tragedy because the victim did not choose to have a revocable encryption scheme.

Now suppose they did set up a revocable privacy scheme, and so did some of the people they interacted with. Then you would have a situation where, if enough of their friends think something bad has happened to them, those friends can vote to revoke privacy after the fact. The threshold would have been set by the victim, and the third parties would also have been selected by the victim.

So let’s say Alice is the victim here. She could have set up in advance a contract which says: if more than a certain threshold of my selected peers believe something has happened to me, then according to my wishes they have the capability to revoke my privacy, which will automatically be forwarded to these specified third parties. Alice’s friends would not even have to know her identity themselves, as they would only have the ability to revoke her privacy and have it forwarded to the selected third parties (which might not include any of them), but this would allow an investigation to be triggered if enough of her friends believe it should be.
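One plausible way to implement that k-of-n friend vote is Shamir secret sharing: encrypt your private data under a key, split the key among your chosen peers, and the designated third party can only reassemble it once enough shares are handed over. A minimal sketch in Python (my own illustration, not anything MaidSafe has committed to):

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime; all polynomial arithmetic is mod PRIME

def split_secret(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k of them can reconstruct it."""
    assert 0 <= secret < PRIME and 1 <= k <= n
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):  # evaluate the random degree-(k-1) polynomial at x
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret from k or more shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)         # key protecting Alice's private data
shares = split_secret(key, n=5, k=3)   # one share per chosen friend
assert reconstruct(shares[:3]) == key  # any 3 of the 5 friends can unlock it
```

Fewer than k shares reveal nothing about the key, so Alice stays private unless her chosen threshold of friends actually cooperates.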

I think this is very powerful conceptually, and as a feature if implemented. If somehow Alice is dead or something happens, then an investigation could happen if and only if she wants that in her contract. This would mean the power is in her hands, but it also would give the network a way to investigate if people opt in to the contract (and I would think most people would).

To determine whether this approach has any merit, try to think of any circumstances where you would want your privacy revoked, partially or entirely. Would you revoke part or all of your privacy to save a friend’s life, for instance? The peers in the network can provide the intelligence which triggers the privacy being revoked, while everything is controlled so that you remain anonymous to your peers.

[quote=“stuffminer, post:4, topic:771, full:true”]
I have a young family and yes this is a concern.

The existence of dodgy people will be a given, but I guess we have to think of a way to mitigate harm to vulnerable individuals like the young ones.

My questions are:

1. How can we detect whether abuse has happened or is happening… a reporting mechanism?
2. Do we have the ability to shut down a node/service using MaidSafe?
3. Is the ‘KYC’ idea any good? (Now we’re destroying our anonymity philosophy.)
4. Do we farm parental control mechanisms out to app developers?
5. Can we employ smart algorithms to detect images that abuse young people? I doubt this… we have difficulty doing it now, let alone on MaidSafe. I stand to be corrected.
[/quote]

The answer is to use the SAFE Network to protect children as a way to head off the bad press. Use the power of smart contracts to empower investigators in unexpected ways, but without giving them unnecessary authority. They don’t need to monitor everything everyone does in search of a crime.

In your own contract with the network you could set up the conditions in advance when you want your privacy to be revoked. You could select friends whom you would give the power to initiate an investigation or to revoke your privacy. You would be able to determine where your information goes in a situation where your peers vote to revoke your privacy (you select the third party or parties). This means you’re ultimately in control of what happens to your information even if there is a tragedy.

This level of control should be built into the SAFE Network. There is no reason to give authority to external entities. The SAFE Network itself could facilitate network-wide investigations through a web of smart contracts. So, for example, if I am willing to give up my privacy in matters of national security, it could easily be a self-enforcing contract which revokes my privacy any time a national security investigation is initiated. That is the power of revocable privacy, and it can also keep me anonymous to everybody.

This would mean none of my friends would ever receive my information. When they vote it would be a vote to revoke my privacy to the specified third parties that I chose and these third parties would then be able to see everything I did and who I am.

I expect all such efforts will be largely or wholly client-side, i.e. up to apps. Ultimately, trying to have the network provide such filtering would necessitate some centralised authority, and that’s about as un-MaidSafe as it gets!

I would hope that some client apps would try and cater for this need, and the complexity will be in populating and maintaining a fair filtering mechanism. The apps could allow a reporting mechanism to their company, allowing their whitelists/blacklists/filter protocols to be updated. Users should be able to adjust their local filters I expect. The better a company does this, the more likely it will be able to gain traction and increase its uptake. There’s plenty of scope for innovation here.
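Something like the following could be the skeleton of such an app-side filter (just a sketch of the idea: user-adjustable local lists plus a report hook back to the vendor, none of it a real API):

```python
# Sketch of a client-side filter; nothing here is network-level or a real API.
class ContentFilter:
    def __init__(self, blacklist=None, whitelist=None):
        self.blacklist = set(blacklist or [])  # vendor-distributed, updatable
        self.whitelist = set(whitelist or [])  # local overrides by the parent
        self.pending_reports = []

    def allowed(self, share_name: str) -> bool:
        # A local whitelist entry beats the vendor's blacklist.
        if share_name in self.whitelist:
            return True
        return share_name not in self.blacklist

    def report(self, share_name: str, reason: str) -> None:
        # In a real app this would be queued for the filter vendor to review.
        self.pending_reports.append((share_name, reason))

f = ContentFilter(blacklist={"dodgy-share"})
assert not f.allowed("dodgy-share")
f.report("another-share", "unsuitable for children")
```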

As for shutting down a node - we’re working very hard to try and make that near impossible :slight_smile:

1 Like

A bold statement :slight_smile:

I’m interested in the core protocols, sure - but I’d also like to dabble in some client-side stuff too once I get some time. The priority right now is the network.

3 Likes

Are you unaware of the whole raison d’être of MaidSafe? You suggest they should prioritize developing an app where ID is required? As to the wisdom of the team, I think they have more than shown this already…and I have no clue how biometrics could ascertain one’s age, even if that were desirable, which it definitely isn’t in my view once you think it through. Meh

3 Likes

I just showed a way you could do it without a centralized authority. You can have a decentralized authority. You can have a crowd intelligence or swarm intelligence. You can do all of that without a central authority by using revocable privacy and fully secure attribute-based encryption. Fully secure attribute-based encryption can allow you to select a third party by precise attributes, so no one without those attributes can decrypt your identity.

http://blog.covertix.com/tag/attribute-based-encryption/

Privacy is good, but there are situations where you would want it revoked. If you cannot think of any, are you saying you wouldn’t choose to revoke your privacy if your best friend were kidnapped? And if something happened to you, would you not want investigators to be able to decrypt your experiences?

The way to determine whether someone is an adult isn’t too sophisticated. You could try using a zero-knowledge proof or SNARK with a proof-of-passport-type scheme or social security number. This could still be faked, but combined with biometrics it’s better than how people confirm their age currently.

The important thing is that no one would ever know anything about your age. It would remain a secret, and the only question is whether you’re an adult or not, which is a true-or-false question. You could do this with a zero-knowledge proof for sure. I think it’s something enough people would want that it’s definitely worth adding the functionality (if people didn’t want it, there wouldn’t be all these panicked threads every few weeks asking the same questions).
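A full zero-knowledge range proof is beyond a forum post, but the interface could look like a signed attribute attestation: a trusted attestor (say, after a passport check) signs only the boolean “is adult” bound to a pseudonym, so verifiers learn nothing else. A sketch using the Python cryptography package; everything here is my own illustration, and it trusts the attestor rather than achieving true zero knowledge:

```python
# Hedged sketch: a signed "is adult" attestation bound to a pseudonym.
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

attestor_key = Ed25519PrivateKey.generate()  # held by the checking authority
attestor_pub = attestor_key.public_key()     # published for apps to verify with

def issue_token(pseudonym: bytes) -> bytes:
    # The attestor signs only the true/false fact; no birthdate, no name.
    return attestor_key.sign(b"is_adult:" + pseudonym)

def verify_token(pseudonym: bytes, token: bytes) -> bool:
    try:
        attestor_pub.verify(token, b"is_adult:" + pseudonym)
        return True
    except InvalidSignature:
        return False

token = issue_token(b"pseudonym-123")
assert verify_token(b"pseudonym-123", token)      # adult: yes, nothing else leaks
assert not verify_token(b"pseudonym-456", token)  # the token is pseudonym-bound
```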

Maybe it’s smarter not to build that into the base system, though, @luckybit? Leave that part up to third-party groups so that the people developing the core protocol don’t have to bog it down. I’d imagine the core functionality of an autonomous system needs to be pretty lean, the same way Bitcoin is relatively simple. It doesn’t have a store, or a bidding system, or accounts, etc., probably to keep it lightweight for continued development.

3 Likes

Considering they plan to do secure multiparty computation, it’s not like it would be a big deal to let someone scan their birth certificate, passport or something similar and then use a zero-knowledge proof scheme to determine whether they are over the age limit. This would remove concerns from parents and reduce the anxiety of the law enforcement community.

It’s really up to the core team if they want to make this a priority. I think for sake of marketing it might be a good idea to make it a priority. It would at least send the message that the core team at least has some blueprint or plans for this. In my opinion the core team would benefit by adding API level features so that app developers can integrate the revocable privacy contracts and zero knowledge proof of identity/age checks.

If the core developers don’t do it, then we have to trust a bunch of shoddy app developers who might not do it in a standardized way, which could result in a mess of code (and this is very important code). A lot of people say Bitcoin is relatively simple, but that isn’t really true, because Bitcoin has had scripting and most of the scripts just aren’t used. It’s not like Satoshi never thought about all the different use cases.

1 Like

I’m sure the MaidSafe team is stellar, but I personally wouldn’t put them on a pedestal like that. They’re not deities. They’re just some delightful folks trying to make something smart. There are piles of other people out there doing the same thing, who’ve probably made it their life’s work to solve that problem, in the way the MaidSafe team has seemingly dedicated a good portion of their lives to this massive project. In fact, I’d almost feel better if someone else was doing that project so MaidSafe can focus on making the autonomous system (a huge feat!).

I feel like, in this forum, I’m the least afraid of government/spying, and that comment frightened me. I would never, ever, send that private information to a new autonomous machine.

2 Likes

+1

Not sure that the whole age thing is so trivial either. I know we use it for determining to the nanosecond when every human is deemed legally competent to drive a car/drink alcohol/have sex, but I’m sure we can do better than that. It’s pretty brute-force I think.

What I mean is that different people have different standards as to what is acceptable content, both for their own consumption and for their children. Age is a hard, mathematical delineator which doesn’t allow room for parental opinion, mental disability, local legal stipulations, to name a few.

4 Likes

It’s not just a matter of being smart or a good programmer. It’s a matter of being smart, brave, and principled. There aren’t a lot of people like that in the world, which is why we didn’t have this happening sooner. I do think that in the future people who are smart may become braver or develop principles, but it’s not easy to find them right now. I really hope you’re right.

I’m not sure what you mean by this comment. If you would never send that private information to a new autonomous machine, then why would you trust the SAFE Network, which is an autonomous network?

That being said, I think it’s just that you don’t understand how zero-knowledge proofs work. I’m not going to claim I’m an expert on the subject either, because it’s quite complicated math, but the logic of it makes enough sense to me that I would trust it. There is a risk of bugs in the code, of course, and I understand that, but those risks are the same ones people already take storing their data on the regular Internet or on any computer.

Each user could have their own contracts which represent what they believe in. If there are apps which are compatible with their encoded principles, then it can work. When you’re dealing with a script, you actually have far more flexibility than you do with traditional laws, because you can use conditionals (if-then statements) to represent all the possibilities you mentioned.

As long as we have an API that makes writing these smart contracts easy, and revocable privacy and zero-knowledge proof functions are built in, we could take Python and make it happen. Ethereum is going to be capable of this, so I suppose if the SAFE Network core devs don’t do it, Ethereum will lay a foundation and the SAFE Network can integrate with it.

1 Like

I have often thought about things like this, but have never managed to come up with a foolproof way that works without leaking privacy, never mind who would scan whose passport etc. Perhaps go to a ‘trusted’ authority like the police? Have them certify things. (Now scary mary stuff :D)

Any judging of people by the network seems an inordinately difficult task and may put the machines in charge with incorrect assumptions. I do think, though, that there is an opportunity to work with app developers to test some hypotheses and find a solution. If that proved to meet all the criteria of privacy etc., then it’s another story. It could make its way into core protocols.

I know you are pretty security savvy and will know it’s amazing how little you need to divulge for an amazing amount of knowledge about people to be derived. So I never say anything is impossible, in fact I relish it when folk do, but this needs incredible research and testing and core code/design knowledge. The issue right now is we are ploughing through getting the network up, and we have a couple of parts I really must get to finalising (not inventing, just finalising), such as:
1: Accurate numbers for the ranking mechanism and consensus chains
2: The Safecoin fixed magic number (which I want removed) that dictates the overall farming speed/difficulty. (This is like the 10 minute/tx size in Bitcoin; I dislike it a lot. We could just copy that if we want, we know it works, but it’s just not good enough for me.)

They are related, but need a week or two in the conference room with a bunch of us in house, you guys online, and the university folks as well. As I say, it’s not a worry or an unknown; it’s very known, but not measured properly enough for my liking just yet. For instance, we know a chain of 2 requires up to 140-160% bad node attempts (for a chain of three we could not get an attack, but that’s not yet conclusive), but how long a node takes to go bad is controllable (I figure it will be the same as the correct farming rate, based on network average rank/coin generation, and therefore no magic numbers there; also the time required for an attack increases as the network gets older etc. It’s not hard at all, but needs careful thought.)

In any case this gives us a solid grounding to move forward from and look at many of the brilliant issues this forum brings up. One thing we know in house so far: whatever seems easy has massive side effects, and the analysis of those is where the hard work is. From 10,000 feet it’s all simple though :slight_smile:

For instance, we spent at least two man-weeks debating the correct parameters/locations and rights for the bootstrap file cache; the small parts, like everything, hold the most interesting and surprising side effects (there was an excellent attack evasion introduced). These larger parts will require significant thought, and we will involve the whole community in those steps, as I feel we need to as it increases the pool of knowledge.

So it’s not so much that nobody is smart enough, as nobody can measure that; careful, considered and tested to oblivion first is our way. When the network’s up, our job is millions of times simpler, as at the moment we need to imagine nodes in our minds and think through all actions and attacks etc., like template programming: you need to compile it in your head, really. The network gives us real-world testing capability and makes a lot of this so much simpler; it’s gonna be great.

The impossible parts are where I live, and I love it, so let’s just see when we get a chance to breathe.

7 Likes

I appreciate that you took the time to comment, and I understand that what you and the core team are trying to do is very difficult. I hope that when the SAFE Network is up and running it can attract even more developers, so that we can try to figure out how to solve some of the really interesting yet difficult problems which we are aware of.

Ethereum will be going with a full scripting language from the start, so in a way we might be able to get an idea of what can happen, and of the attack vectors, by observing that project. Anything they learn through trial and error is something which can be adopted by the SAFE Network in the future.

2 Likes

Don’t fully trust this just yet; it’s very new and there is no mechanism to confirm the machine state or environment it is run in, so the code may execute, but where and on what? (i.e. it may sign a blockchain, but which one, and was it disjoint from the network, etc.) It’s amazing stuff but no panacea (yet). It deserves loads of testing, and perhaps Zerocoin etc. can provide a testbed.

I try to help but am time-limited just now. I hold massive hope for it though; the math is amazing actually, and it’s another impossible thing, so that’s also great. But it’s got warnings all over it just now and perhaps still belongs in the hands of not only cryptographers but specialist cryptographers, as it’s a very specialised area and involves gcc register code as the ‘assembly step’. For me, when I get time, this is one for the spare hours.

If you want, in my GitHub repo there is a version that will clone and build on Linux with no dependencies yet. It really needs a 64-bit arch at the moment. If somebody wants, they can integrate a cross-platform PRNG and serialisation mechanism; then it will work across the board. I have changed enough else to make that happen, so a couple of days max to get that in order. Just fork and pull-request back to the team there.

Thanks Fraser, I’m glad we can have this conversation now before launch.

SAFE is bound to have a seismic impact on society and the whole area of ‘think of the children’ is an easy target for attack from the behavioural ‘scientists’ and freedom haters.

The framework is in place, so it’s up to the Builders to satisfy market demand for age appropriate content.

Promoting the data security aspect of SAFE might be a first step for introducing the whole concept of a new internet to concerned parents.

Apps can be assessed on their own merit as to their suitability.

That’s put some perspective on the subject…cheers

5 Likes