By KIM BELLARD
The Tom Cruise TikTok deepfakes last spring didn’t spur me to write about deepfakes, not even when Justin Bieber fell so hard for them that he challenged the deepfake to a fight. When 60 Minutes covered the topic last night, though, I figured I’d best get to it before I missed this particular wave.
We’re already living in an era of unprecedented misinformation/disinformation, as we’ve seen repeatedly with COVID-19 (e.g., hydroxychloroquine, ivermectin, anti-vaxxers), but deepfakes should alert us that we haven’t seen anything yet.
ICYMI, here’s the 60 Minutes story:
The trick behind deepfakes is a type of deep learning called a “generative adversarial network” (GAN), which basically means neural networks compete on which can generate the most realistic media (e.g., audio or video). They can be trying to replicate a real person, or to create entirely fictitious people. The more they iterate, the more realistic the output gets.
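To make that competition concrete, here is a minimal toy sketch of the adversarial loop, using plain NumPy. The “networks” are deliberately tiny (a linear generator and a logistic-regression discriminator fitting a one-dimensional target distribution), and all the numbers and names are illustrative assumptions; real deepfake GANs use large deep networks trained on images or audio, not anything this simple.

```python
# Toy illustration of the GAN idea: two models trained against each other.
# Generator G(z) = a*z + b tries to map noise onto "real" data (here, a
# Gaussian with mean 4.0); discriminator D(x) = sigmoid(w*x + c) tries to
# tell real samples from generated ones. Gradients are derived by hand.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters

lr, batch = 0.02, 64
for step in range(5000):
    real = rng.normal(4.0, 1.25, batch)   # stand-in for "real media"
    z = rng.normal(0.0, 1.0, batch)       # noise fed to the generator
    fake = a * z + b

    # Discriminator update: ascend log D(real) + log(1 - D(fake)).
    s_real = sigmoid(w * real + c)
    s_fake = sigmoid(w * fake + c)
    w += lr * np.mean((1 - s_real) * real - s_fake * fake)
    c += lr * np.mean((1 - s_real) - s_fake)

    # Generator update: ascend log D(fake) (the "non-saturating" loss),
    # i.e., adjust G so its output fools the current discriminator.
    s_fake = sigmoid(w * fake + c)
    grad_out = (1 - s_fake) * w           # d log D(fake) / d fake
    a += lr * np.mean(grad_out * z)
    b += lr * np.mean(grad_out)

# After many rounds of this back-and-forth, generated samples should
# cluster near the real data's mean (4.0).
samples = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(samples.mean()), 2))
```

The point of the sketch is the alternation: each update to the discriminator makes it a better critic, and each update to the generator exploits that critic, which is exactly the iterative competition that drives deepfake quality upward.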
Audio deepfake technology is already widely available, and already fairly good. The software takes a sample of someone’s voice and “learns” how that person speaks. Type in a sentence, and the software generates an audio clip that sounds like the real person.
The technology has already been used to trick an executive into sending money into an illicit bank account, by deepfaking his boss’s voice. “The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent,” a company spokesperson told The Washington Post.
One has to assume that Siri or Alexa would fall for such deepfaked voices as well.
Audio deepfakes are scary enough, but video takes it to another level. As the saying goes, seeing is believing. A cybercrime expert told The Wall Street Journal: “Imagine a video call with [a CEO’s] voice, the facial expressions you’re familiar with. Then you wouldn’t have any doubts