Mutual Assured Transparency

Aside from this tool being invented called MaidSafe, which ensures privacy and security for everyone?

The reason there aren’t more leaks is because hacking is hard. All kinds of people speak of conspiracies, but most of them are written off as nutcases. You must be able to document your scandal for anyone to believe you. To do that you steal documents off a central server someplace. MaidSafe removes the server, making it much harder and less transparent. There will be fewer leaks once we kill the centralized data warehouses to hack. Not more.

Fantasize away, but the reality is that a tool for increasing privacy and security will not make things transparent. Your contention is SILLY! Laughably so.

I’m not so sure about that. Snowden, for example, didn’t ‘steal documents off a central server’ to get his hands on them. He either had authorized access to them or he used social engineering to get other people with authorized access to get them for him.

Leaking by insiders cannot be prevented by using MaidSafe, since if you have the login credentials you can still grab everything from anywhere. That sort of thing can only be countered by changing your organization’s security model (information sharing on a strict need-to-know basis, never individual access but always in pairs, etc.), which has severe efficiency drawbacks.
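To make the "never individual access, always in pairs" idea concrete, here is a minimal sketch of a two-person rule in Python. Everything here is hypothetical (the `request_access` function, the in-memory document store and audit log); a real organization would tie this to actual credentials, signatures and logging rather than dicts:

```python
# Toy sketch of a "two-person rule" access check. All names and data here are
# hypothetical; real dual control would use real credentials and an audit trail.

SENSITIVE_DOCS = {"doc-42": "contents of a need-to-know document"}
AUTHORIZED = {"alice", "bob", "carol"}   # people cleared for this material
AUDIT_LOG = []                           # every attempt is recorded

def request_access(requester, approver, doc_id):
    """Release a document only if two distinct authorized people sign off."""
    ok = (
        requester in AUTHORIZED
        and approver in AUTHORIZED
        and requester != approver        # the approver must be a second person
        and doc_id in SENSITIVE_DOCS
    )
    AUDIT_LOG.append((requester, approver, doc_id, ok))
    return SENSITIVE_DOCS[doc_id] if ok else None

# A lone insider, even with valid credentials, gets nothing:
assert request_access("alice", "alice", "doc-42") is None
# A properly co-approved request succeeds:
assert request_access("alice", "bob", "doc-42") is not None
```

The efficiency drawback mentioned above is visible even in the toy: every single read now needs a second person in the loop.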

Leaking is possible now and it will be possible then. Maidsafe is a game-changer but in both directions.

“Transparency” is not on the feature list. Privacy and security are. As such, it stands to reason it is going to change the game much more in the privacy and security direction than in the transparency direction.

There is no “mutually assured.” Leaking is still a criminal act. It will still be prosecuted. People will still be caught… The game doesn’t change much in that direction.

@jreighley

Seems like there’s some cognitive dissonance going on, but I’m glad you’re seeing it:

The goal is privacy and security, and the leap in transparency will play an important part in that. Right now if you know something and go to the police, it could mean death, harassment, or a gag order. Full data sets aren’t all that necessary up front. The body is here, this is why they were killed, this is who did it and how they did it and who else they killed to cover it up. Info like this goes into an open database with proper search. Over time, accumulating evidence will self-corroborate and justify or stimulate formal investigation. If an investigation is scuttled, that too will tend to end up in the DB.

Imagine the RIAA’s precious IP (stolen as it was) being protected by DRM and legal manipulations. That didn’t work against eMule, eMule Plus, etc. Under these decentralized systems it becomes hard to keep info contained, at least the kind that needs to get out. Some of it will be people ratting on each other, but all of it will cascade. Past dirty-hand stuff will finally get out, and current and future dirty-hand stuff will become much less practical. This works toward a safer, more stable and equitable world.

It will literally be “the writing on the wall.”

I would suggest that the advent of Bitcoin, the blockchain etc. is highly enabling of DRM… Now there can be global consensus on whether you have a valid license to use a particular piece of intellectual property – and that token cannot be forged, cannot be faked, cannot be altered, and may or may not be allowed to be transferred.
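To make the license-token idea concrete, here is a minimal sketch, not any real blockchain API: the "ledger" below is just a dict standing in for whatever the network has reached consensus on, and the token ids, holders and `transferable` flag are all made up. Checking whether you hold a valid, possibly non-transferable license becomes a lookup plus a transfer rule.

```python
# Toy illustration of consensus-backed license tokens. The dict below stands in
# for a ledger everyone agrees on; nothing here is a real blockchain interface.

LEDGER = {
    "license-001": {"work": "Some Album", "holder": "warren", "transferable": False},
    "license-002": {"work": "Some Film",  "holder": "josh",   "transferable": True},
}

def has_valid_license(holder, work):
    """Valid iff the consensus ledger says this holder owns a token for the work."""
    return any(t["holder"] == holder and t["work"] == work for t in LEDGER.values())

def transfer(token_id, new_holder):
    """Transfers succeed only if the token was issued as transferable."""
    token = LEDGER[token_id]
    if not token["transferable"]:
        raise PermissionError("this license cannot be transferred")
    token["holder"] = new_holder

assert has_valid_license("warren", "Some Album")
transfer("license-002", "warren")        # allowed: issued as transferable
# transfer("license-001", "josh")        # would raise: non-transferable token
```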

Double edged sword.

In the case of MaidSafe, for all intents and purposes the data doesn’t even exist unless you are given keys to allow it to exist…

Technologies are always morally neutral. They are going to be used for good and evil. And usually they are going to be used for things that Warren thinks are good and Josh thinks are evil, and vice versa.

I think Wordpress and Wikimedia enable leaks… Maidsafe may allow those apps to be run more anonymously, but anonymity comes at the cost of credibility.

All that is apt. With regard to DRM, people will find ways to capture and run unlicensed content, even in the case of streamed-only content, even streamed games. Game source code leaks even before release. With regard to credibility, good search and some voluntary time stamping may help. I agree a lot of stuff is already out there, but people hedge by hiding things a bit with symbols etc. and it gets lost.


Ultimately, privacy is the default position - you control which thoughts in your mind are broadcast through your mouth.

While a culture of honesty may occur through the ease of anonymous whistle blowing, I doubt the level of transparency suggested will occur.

@Traktion Yes, privacy is the default position. But we may either be evolving toward more openness or our minds may already be more open than we like to think. And consider some of the sci-fi stuff we are on the edge of right now.

There is some demonstrated tech that works with types of aphasia by picking up the EM emanations that would innervate the vocal musculature - in short, it picks up the subvocalization that leaks into this process and runs it through modified speech recognition, yielding a device that could be part of glasses and would pick up speech from that leaky analog process whether we were speaking or not. I can see millions of people wearing these on their heads and leaving them on all the time. Presumably there might be some search with AI-like features riding on top of that, linking people in real time to silent conversations past, present and anticipated - silent because they’d have pass-through earbuds as well. A good bit would get lost in translation, but there would be enough in similar sci-fi scenarios to triangulate things out of the heads of some people who shun these systems - almost a guilt by association.

There is a thread on this site proposing decision or probability trees applied to reputation, weighing the degrees of freedom and prevalence of input on trust-ability, to weed out shills. That same type of logic could be applied to a Watson-debater-type front end to cull ideas and hypotheticals about individuals and events based on all the cataloged speech. That may well be what a system like Total Information Awareness is already doing in listening to our phone calls. Imagine if that dumped into the transparency side of a system like SAFE.

Think also of the outing of General Petraeus, then the US CIA director, over his own emails, betrayed for public shaming by his own agency. Think of John Kerry recently suggesting that living with increasing transparency will make governance more difficult. Consider Assange, Snowden, Manning, and all the others, even Greenwald. Consider also Zuckerberg, who thinks that people are just going to have to get used to living without privacy - a murderous, tyrannical idea if ever there was one. Look at the Sony leak and successive prior hacks - although in that case I suspect Sony of having done it to itself to try to further SOPA, PIPA, CISPA, TPP, ACTA, NDAA (riders).

Consider Microsoft’s Xbox One Kinect 2: it can track your heartbeat and determine whether you are actually looking at a sidebar ad, and by correlating the two, drive up the cost of the ads. If Kinect 2 ever catches on and makes it into more living rooms… who else will be looking through those cameras? Consider shows like “Person of Interest.” A government type develops a working AI and tries to use it to find patterns to right corruption. Being ethical, he designs it so it will only point law enforcement in the right direction by pointing out the right people but not providing details. Unfortunately the tech itself leaks out and another group builds a much bigger system, with no safeguards, for much broader and more invasive full-scale AI-aided search. Stuff like TIA will be turned around on its developers; it’s part of our role as responsible public(s) in watching these often self-appointed watchers.

Some old legal-type formulas would seem to apply. Low risk amounts to license and invites wanton, reckless results - dirty-hand results. Regulation or higher risk yields caution. The degree of precaution will be governed by the amount of damage a failure would cause. It’s the long tail; it’s not new. You see it in bridge design: the weight of the possible catastrophe merits large expenditures and efforts. Most dirty-hand stuff can domino, yielding waves of failure and damage if it’s not stopped. Normally more dirty-hand stuff will be used to stop a leak, but not with SAFE. The same tech that SAFE would provide to keep huge amounts of data safe is possibly even more resistant in its transparency DAO format, which might as well be thought of as worse than broadcast, as it will be totally persistent and instantly globally disseminated and searchable.

On the input side one could design keys (inefficient, as pointed out by Seneca, and also impractical, as using the keys is almost a leak in itself) that required two or more people present with simultaneous key entry, etc., but those same people could be compromised or held at gunpoint and the genie, and its domino effect, would be out of the bottle. There won’t be a way to spin this. Dirty-hand players that want to make Machiavellian tactics more palatable for the public have tried in Hollywood, spent their ‘political capital’, and it failed. The premise of the show 24 seemed meant to put forward the foolish ticking-time-bomb scenario, and in later seasons it seemed to be railing against it. The public rejects this stuff.
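For the "two or more people present with simultaneous key entry" idea, the simplest version is splitting one key into shares so no single custodian can decrypt anything alone. A minimal sketch, using a toy 2-of-2 XOR split rather than a production scheme like Shamir's k-of-n secret sharing:

```python
import os

# Toy key-splitting sketch: the real key is the XOR of the two shares, so
# neither custodian alone learns anything about it, but together they can
# reconstruct it. A production system would use Shamir's secret sharing
# (k-of-n) or hardware key custody instead of this 2-of-2 XOR split.

def split_key(key: bytes):
    """Split `key` into two shares; both are required to rebuild it."""
    share_a = os.urandom(len(key))                        # random pad held by person A
    share_b = bytes(k ^ a for k, a in zip(key, share_a))  # key XOR pad, held by person B
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    """Rebuild the key; only works when both custodians supply their share."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = os.urandom(32)
a, b = split_key(key)
assert combine(a, b) == key       # both shares together recover the key
assert a != key and b != key      # neither share alone is the key
```

As the paragraph above notes, this only raises the cost of a leak; both custodians can still be coerced at the same time.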

The transparency side will be enormously empowering for the public; at the same time, firms like Google can and must be prevented from putting other people’s data, for profit (especially) or otherwise, into such a container. The key is that what is being done has to work in the public’s interest. Either way, elite status loses. I hope this alone is enough to kill off hereditary power and the idiocy of calling measures that limit it ‘death taxes’ instead of seeing gross hereditary wealth (billions) as the enslavement of others.

Tbh, I see maidsafe as a tool against such abuse of privacy. This is an arms race between those who want (others to have) no secrets and those who wish to defend that right until the bitter end.

Maybe we will have the modern version of tin foil hats to stop/scramble tools which try to read minds. Maybe people will refuse to use non-secure means of communication.

What people earn legitimately is also theirs to give to who they want. Death taxes are no more just than those levied on the living. If the gains are illegitimate, by all means go after it, but people should be treated as innocent until proven guilty.

I find your views strangely at odds with the concept of maidsafe. Maybe you hope that corporations will be weakened and states strengthened, but I can’t see how this combination will be achieved - freedom empowers people to resist the tyranny of both.

@Traktion

Let me clarify that I am on the side of privacy to the bitter end. I hate Zuckerberg’s attitude as reported in the media and don’t accept the result as inevitable. If we are to have any chance at a good life we need privacy. Someday it may not matter to us, and to some of us it may not matter now, but to most of us it’s of the utmost importance. Privacy is the core of most attempts to find a basis for rights assertions. Exposure makes sense for efforts that would attempt to strip us of privacy and liberty.

As for wealth taxes, I am only opposed to destabilizing amounts of money transferring over. Anything that can wisely free people from economic drudgery and misery I am in favor of. I agree we need to get rid of the state as soon as we find a solution that will be better and where the transition costs will be acceptable. I don’t like corporate power, but I do acknowledge that the entrepreneurial approach can achieve things the collegiate and bureaucratic approaches cannot, even if it is at times tyrannical in itself. I want a less coercive world, and to me that means less nepotism and less propaganda.

Who watches the watchers? It’s us; it’s our responsibility. Are we going to leave it to human nature, human nature on one side only? These people who are pointing a weapon at us are supposed to be protecting us? SAFE is your bulletproof vest and also, in its transparency function, your countering weapon. If some entity is going to come along and spy and use our private lives against us, taken out of context, to threaten and corrupt with, then we too will have a way of putting that activity in context and holding it accountable.

http://slur.io/


@jreighley That raises the hair on the back of the neck.

As awful and aptly named as Slur seems, it would run counter to “paid to lie and censor” sponsorship. After all, what if we aren’t directly related to the sponsors and they aren’t friendly and we can’t outspend them?

That is a more realistic picture of what you get when you get what you are asking for, I am afraid. That is why my hair stands on end from your original post.

“It’s estimated that 5% of the general population are psychopaths. Introducing financial incentive in an anonymous framework will produce a greater yield of leaked information than from, say, the ideology that drove patriots like Edward Snowden. For every idealist willing to selflessly sacrifice their freedom, assets and even risk their lives for a greater good, there are 1000 psychopaths willing to anonymously sell out their peers for material gain.” (quoted from slur.io)


@jreighley
“Except that in the future journalists will need to compensate whistle blowers for the extreme risks they take.” (quoted from slur.io)

This was always a potential for a ubiquitous, instantaneous, persistent network with widespread access. That someone would monetize secrecy was only a matter of time. But I would contest that journalists will have to pay for it generally. Most journalists won’t be able to cover what their sponsors don’t want them to cover. This is the need for end-user-owned, accountable media that only takes money from its end users, to prevent this conflict of interest. But I can see that with services like Slur, end users and blogs will consult the primary or secondary sources, in some cases for pay and in others without. The pay prospect will catalyze things, and there will be an initial surge. This will monetize corruption fighting, even if it might also trigger some vigilantism.
And naturally I think we should question whether bitcoin will be the proper money medium, but I do see that money can catalyze the process.

Also, I don’t see people spending to defend their own data. Slur says data will not be re-posted on the Slur network, but that is small consolation, and how do they implement that? Because of competition I don’t think they’ll get people trying to pay for retractions. But I do think this will cause organizations to reevaluate and try to weed out psychopaths. I can see US HR law becoming a lot less about empowering psychopaths and more about demoting or screening them out. The same psychopaths that put the organizations in jeopardy will be trying to cash in. It may be that charismatic leader types won’t be able to ladder-climb. Also, there is a problem with buying secrets: is that payment going to provide the equivalent of witness protection? Unlikely. So payment is unethical, but it will start the ball rolling and help remove “psychopaths.”

But as for the rest of us, what do they really have? They may have your associated data, but in some important ways they have nothing. It’s always struck me as funny that the US government can try to compel people to protect its secret name for people, the SS#, even as it sells it out of the other side of its mouth, and how lately it’s been conflating secrecy with privacy in its Privacy and Information Act. Compelling participation in the census seems more reasonable by comparison than being forced to hold some secret access code.

Let’s say they have your DNA data, a picture of people they think you’ve screwed on the side, all your phone calls, all your texts, all your tax records and all your school grades. Maybe they have all your account records and numbers. They feel they can liquidate your life and lock you out of it. They could make you stateless and erase all your data (or the requisite amount of it). They feel empowered, but I think in a very basic way they have nothing.

They delude themselves if they think they really know something about you, or that they could really predict you with psychological models, or know your motives and real intents. They have your data, but in a very real way they have nothing; they have numbers that don’t define you or predict you. They have measurements that society has imposed on you - it’s a kind of violence - and now they say they can withhold it or lock you out.

Even a jury trial on every action you’ve ever taken wouldn’t yield anything like real understanding or even predictability that wasn’t generic. They can’t even do that for chess games. We know from recent experience that the best AI can now beat the best human. But recent experience also tells us that the best AI, presently aided by grandmasters, can be beaten by a couple of novice humans with a couple of weak AIs running on laptops in a freestyle match - an account of this was in the recent book “The Second Machine Age.” A superior collaborative process won out. Say that the top chess AI is up against Garry Kasparov and a weak AI on a laptop. As above, we know it can win, but would it be aided by also consulting a simulation of Garry Kasparov as a chess player that took into account all his recorded games and all the questions he’s ever answered about chess? It might - I think the Deep Blue team did something like that - but put that in the context of two novices and two laptops winning against grandmasters aiding a top AI. Now try to predict Garry Kasparov in real life, or in conjunction with things he’s working on with strangers or friends. They can’t model him in two dimensions; they delude themselves. What they have is the power of exile, which they could always exercise at gunpoint or by disappearing people. It has produced the strongest opponents of the state, to the point that they stopped doing it. And as the Slur “implications” point out, it’s not just a state that would have some access.

On the front of scrambling accounts, there is some reason for worry that, on average, in-place locks may be too weak. Another thread on this site referenced an accelerating series of breakthroughs in discrete algebra that put public-key prime-factorization approaches at risk, in an area where breakthroughs normally take millennia - it wouldn’t just be convenient NSA-hackable locks. Imagine a disruption or compromising of those locks on any scale. If I understood correctly, these revelations are shaking the foundations of crypto and hint at something truly bizarre behind the scenes. In sci-fi it’s much stronger AI that makes these leaps. This has been part of the implication of AI for a while: that it would make search more invasive and unreasonable. To a really powerful, connected AI the world would be transparent - and imagine a scenario where human nature would have some input.
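To see why a factoring breakthrough would matter, here is a toy, numbers-only sketch of textbook RSA (tiny illustrative primes, Python 3.8+ for the three-argument pow): once an attacker can factor the public modulus n into p and q, the private exponent d can be recomputed directly.

```python
# Toy textbook RSA with tiny primes, to show that factoring n = p*q is all an
# attacker needs to recover the private key. Real RSA uses padding and
# 2048-bit (or larger) moduli; this is purely illustrative.

p, q = 61, 53                 # secret primes (what a factoring breakthrough would expose)
n = p * q                     # public modulus (3233)
e = 17                        # public exponent
phi = (p - 1) * (q - 1)       # only computable if you know p and q
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

message = 65
ciphertext = pow(message, e, n)      # anyone can encrypt with (n, e)
assert pow(ciphertext, d, n) == message

# The "attack": factor n by trial division, then rebuild d exactly as above.
def factor(n):
    for candidate in range(2, int(n ** 0.5) + 1):
        if n % candidate == 0:
            return candidate, n // candidate

p2, q2 = factor(n)
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(ciphertext, d_recovered, n) == message   # private key fully recovered
```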
