Yesterday I watched the Dutch show “Zondag met Lubach” about deepfakes. I was already familiar with the concept, but this time I was wondering about the effects of deepfakes combined with the SAFE Network. As AI-generated videos, pictures and audio get close to being indistinguishable from the real thing, wouldn’t it be really dangerous to have the ability to post a video of, let’s say, the President of the United States in which he declares war on a country, religion or perhaps company? Is there any way for SAFE Network users to identify such an AI-created video and prevent others from making the mistake of thinking it is real?
There is no declaring war in the 21st century. If one country attacks you, you are at war.
Casual viewers can be fooled today and will still be fooled on the Safe Network. What it offers is the ability to trace the history back to the original post, with the guarantee that no one has changed or deleted it.
Official images/videos could be fitted with digital signatures?
I think that more attention could be put towards the source of information rather than the content itself
The benefits of Safe Network will be in the ability to verify the source if the publisher so wishes.
For instance a rock band might advertise their ID with their name. Thus anything published under that ID can be assured to have come from that source (the rock band)
Deep fakes will come from sources other than the official source
(as @Future said)
This is “key”. In Safe, IDs really are crypto keys. Linking these publicly to a known entity (however that’s done) is how folk can protect their own authorised content.
It’s a rather large story, but you are 100% on the mark as usual, @neo: we need to shift the narrative to authorised content. Then, unless someone can deepfake crypto keys, we have a much better position to work from.
What this hinders is investigative journalism and sting operations. However, even being able to prove the journalist was genuine can help, I think. Anyhow, the narrative is certainly different on Safe.
You don’t have to prevent it; you only need respectable sources to verify it. There is plenty of stupid content on the internet, and it would be foolish to believe that everything you read or see is true.
I already commented earlier in another topic about how we could implement an NSFW or Mature filter that doesn’t ultimately prevent you from finding the content, but hides it until you enter a code (for kids) or click on it to reveal it.
If the Safe Network becomes the next internet with many useful resources, parents could set their kids up with a family-friendly account and specify which Safe URLs they can visit, e.g. allow them to browse their school’s website and domain, Wikipedia, and other educational and confirmed kid-friendly addresses.
Or just use mice to detect deepfakes
‘coincidentally’ also in the NYT.
I take the comforting side of it: with all the dirt coming out about the high elite, the incredible censoring of it, and the coming ‘human trafficking’ cases involving the highest persons, from the Clintons to royal families to a lot of politicians… I guess some video evidence will be brought up and then dismissed as ‘deepfake’.
The amount of computing power needed to let AI produce correct animation (beyond a bit of face-moving) is still humongous, and letting an animator do the work is still more attractive.
You have been watching realistic graphics for over 20 years, so this is about nothing other than preparing you for something.
Should be easy, since every computer has a mouse.