Deepfake Technology: Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness.
Misuse of new technology is nothing new, and deepfakes are a prime example. The technology is used to create fake images or videos that look almost flawless and can be aimed at virtually anyone in our society. Since it first appeared in late 2017, deepfake technology has evolved from hobbyist experimentation into a potentially dangerous tool.
Don’t believe every video you see!
What is Deepfake?
Deepfakes are fake videos or audio recordings that look and sound just like the real thing. They are produced with deep learning: an AI system studies the face of the targeted person and learns to map that likeness convincingly onto other footage. Today, anybody can download deepfake software and create realistic fake videos in their spare time. So far, deepfakes have mostly been limited to amateur hobbyists putting celebrities' faces on adult performers' bodies and making politicians appear to say funny things. However, it would be just as easy to create a deepfake of an emergency alert warning that an attack is imminent, to destroy someone's marriage with a fake video, or to disrupt a close election by releasing a fake video or audio recording of one of the candidates days before voting starts.
How do Deepfakes work?
Deepfakes rely on a class of AI algorithms called deep neural networks (that is where the "deep" comes from). Deep neural networks are especially good at finding patterns and correlations in large sets of data. To create a deepfake, the deep learning system builds a persuasive counterfeit by examining photographs and videos of the target person from multiple angles, then imitating that person's appearance, behaviour, and speech patterns so the result seems genuine. To refine such images or videos, a method known as GANs, or Generative Adversarial Networks, is used: one network generates the forgery while a second network tries to detect flaws in it, and this competition drives the forgery to become ever more believable.
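The generate-detect-improve loop can be sketched with a deliberately simplified toy (an analogy, not a real GAN): here the "discriminator" measures one detectable flaw in a forgery, a brightness mismatch between a pasted patch and its surroundings, and the "generator" uses that feedback to correct the flaw a little more each round. All the numbers are illustrative assumptions.

```python
import numpy as np

# Toy illustration of the adversarial feedback loop behind GANs:
# a "generator" pastes a forged patch into an image, a "discriminator"
# measures a telltale flaw (a brightness mismatch between the patch and
# the scene), and the generator uses that score to improve the forgery.

rng = np.random.default_rng(0)

scene = 0.6 + 0.05 * rng.standard_normal((32, 32))  # background image
patch = 0.2 + 0.05 * rng.standard_normal((8, 8))    # forged patch, too dark

def discriminator_flaw(scene, patch):
    """Return the detectable flaw: brightness gap between patch and scene."""
    return float(scene.mean() - patch.mean())

for round_ in range(50):
    gap = discriminator_flaw(scene, patch)
    # Generator update: shift the patch's brightness part-way toward
    # the scene, shrinking the flaw geometrically each round.
    patch = patch + 0.3 * gap

print(abs(discriminator_flaw(scene, patch)) < 1e-3)  # True: flaw erased
```

A real GAN plays the same game, except both sides are neural networks and the "flaw score" is learned rather than hand-written, which is why the forgeries keep improving as the detector does.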
Deepfakes use a special type of neural-network structure called an autoencoder. It is composed of two parts: an encoder, which compresses an image or video into a compact representation, and a decoder, which decompresses that representation back into an image or video. This is not a typical compression codec: rather than operating only on groups of pixels, it learns higher-level features such as shapes, objects, and textures. A well-trained autoencoder can perform more advanced tasks, such as generating new images or removing noise from videos.
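A minimal sketch of the compress/decompress idea, using a linear autoencoder in NumPy: the encoder maps 8-dimensional inputs to a 2-dimensional code, the decoder maps the code back, and gradient descent reduces the reconstruction error. Real deepfake autoencoders are deep convolutional networks trained on face images; this shows only the skeleton of the idea, with all sizes chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data with true 2-D structure hidden in 8 dimensions,
# so a 2-D code is enough to reconstruct it.
latent = rng.standard_normal((200, 2))
mixing = rng.standard_normal((2, 8))
X = latent @ mixing                         # shape (200, 8)

W_enc = 0.1 * rng.standard_normal((8, 2))   # encoder weights
W_dec = 0.1 * rng.standard_normal((2, 8))   # decoder weights

def loss(X, W_enc, W_dec):
    X_hat = (X @ W_enc) @ W_dec             # encode, then decode
    return float(np.mean((X_hat - X) ** 2))

lr = 0.01
initial = loss(X, W_enc, W_dec)
for step in range(1000):
    Z = X @ W_enc                           # compact code
    E = Z @ W_dec - X                       # reconstruction error
    W_dec -= lr * (Z.T @ E) / len(X)        # descent step on decoder
    W_enc -= lr * (X.T @ (E @ W_dec.T)) / len(X)  # descent step on encoder

final = loss(X, W_enc, W_dec)
print(final < initial)  # True: reconstruction improves with training
```

The compact code in the middle is what makes the deepfake trick possible: it captures "what the face is doing" in a form a decoder can turn back into pixels.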
When trained on face images, it learns features such as eyes, nose, mouth, ears, hair, and eyebrows. Deepfakes use two autoencoders: one trained on the face of the actor and the other trained on the face of the target. The inputs and outputs of the two are then swapped to transfer the facial movements of the actor onto the target.
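The swap itself can be sketched structurally. In the classic implementation the two autoencoders share a single encoder, each with its own decoder; this sketch assumes that layout, and the weights here are untrained random placeholders, so it only demonstrates the wiring and the shapes, not a working face swap.

```python
import numpy as np

# Structural sketch of the deepfake swap: encode the actor's face into a
# compact expression code, then decode it with the decoder that was
# trained on the *target's* face. Weights are random placeholders.

rng = np.random.default_rng(2)

FACE_DIM, CODE_DIM = 64 * 64, 128   # flattened face image, compact code

encoder        = 0.01 * rng.standard_normal((FACE_DIM, CODE_DIM))
decoder_actor  = 0.01 * rng.standard_normal((CODE_DIM, FACE_DIM))
decoder_target = 0.01 * rng.standard_normal((CODE_DIM, FACE_DIM))

def encode(face):
    return face @ encoder            # compress to an expression code

actor_face = rng.standard_normal(FACE_DIM)

# Normal reconstruction: actor in, actor out.
actor_recon = encode(actor_face) @ decoder_actor

# The deepfake swap: the actor's expression code, the target's decoder.
# With trained weights, this would render the target's face making the
# actor's expressions.
swapped = encode(actor_face) @ decoder_target

print(swapped.shape)  # (4096,): a full face-sized image
```

Because the shared encoder only stores *expression*, while each decoder stores *identity*, routing the actor's code through the target's decoder is what puts the actor's movements onto the target's face.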
Perils of Deepfakes
AI-doctored videos were initially used to create fun, educational clips. Today, however, the darker side of such videos has become far more prominent. Shortly after the first deepfake software was released, it triggered a flood of fake celebrity videos on social media platforms. AI-powered tools now make it possible to fake not only the face but also the voice of virtually anyone.
In politics, too, deepfakes can be used as weapons by opposition parties to stir up fake sentiment among the masses against the ruling party. The worst part is when people start believing such content and nobody can tell whether it is fake or not, which can badly damage the reputation of the targeted person.
While social media platforms are trying their best to curtail the spread of false information, the threat of fake news driven by deepfake technology has become a serious concern, especially as the US prepares for its presidential election. US lawmakers have flagged deepfakes as a threat to national security.
How to Detect Deepfakes?
Detecting deepfakes is a hard problem. Crude deepfakes can sometimes be spotted by the naked eye, and machines can pick up on subtler signs such as a lack of eye blinking, shadows that fall the wrong way, or abnormal skin-colour variations. But the GANs that generate deepfakes are getting better all the time, and soon we will have to rely on digital forensics to detect them, if we can detect them at all.
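One of those cues, too little eye blinking, is simple enough to sketch. Assume we already have a per-frame "eye openness" score from some face tracker (that extraction step is not shown); we count dips below a threshold and compare the blink rate against a typical human rate. The 0.2 threshold and the 10-blinks-per-minute floor are illustrative assumptions, not calibrated values.

```python
# Toy blink-rate check on a per-frame eye-openness signal. A real
# detector would extract the openness scores from video frames first;
# here we just simulate the signals.

def count_blinks(openness, threshold=0.2):
    """Count downward crossings of the openness threshold."""
    blinks, below = 0, False
    for value in openness:
        if value < threshold and not below:
            blinks += 1
            below = True
        elif value >= threshold:
            below = False
    return blinks

def looks_suspicious(openness, fps=30, min_blinks_per_minute=10):
    """Flag footage whose blink rate falls below a normal human rate."""
    minutes = len(openness) / fps / 60
    return count_blinks(openness) / minutes < min_blinks_per_minute

# One simulated minute of video at 30 fps: a real face blinks 15 times,
# while an early deepfake barely blinks at all.
real = [0.05 if t % 120 < 4 else 0.9 for t in range(1800)]  # 15 blinks
fake = [0.9] * 1800                                         # no blinks

print(looks_suspicious(real), looks_suspicious(fake))  # False True
```

Of course, once forgers learn that detectors count blinks, they train their GANs to blink, which is exactly the arms race the next paragraph describes.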
“Theoretically, if you gave a GAN all the techniques we know to detect it, it could pass all of those techniques,” David Gunning, the DARPA Program Manager in charge of the project, said in an interview. “We don’t know if there’s a limit. It’s unclear.”
In September, Facebook, Microsoft and several universities launched a competition to develop tools that can detect deepfakes and other AI-doctored videos. “This is a constantly evolving problem, much like spam or other adversarial challenges, and our hope is that by helping the industry and AI community come together we can make faster progress,” Facebook CTO Michael Schroepfer wrote in a blog post that introduced the Deepfake Detection Challenge.
The greatest protection we have against deepfakes, as of now, is the hype around them. Because we know that video can be forged in this manner, a convincing fake no longer catches us entirely off guard.