Before getting into the concerning aspects of deepfakes, we should understand what the process actually means. It's simple: you take a photo or a video of a person and, with the help of AI, you can make that photo move, or put the person's face onto another video to make it seem like they say or do something they never did.
The term "deepfake" combines "deep learning", the machine learning process behind the technique, and "fake", describing the fabricated event it produces.
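The "deep learning" part usually refers to an autoencoder setup commonly described in write-ups of the technique: one shared encoder learns identity-agnostic features (pose, expression, lighting), while each person gets their own decoder. Swapping a face then means encoding person A's frame and decoding it with person B's decoder. The sketch below uses untrained random weights and a random stand-in "face", so it only illustrates the data flow, not a working model:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Random, untrained weights -- purely illustrative.
    return rng.standard_normal((in_dim, out_dim)) * 0.1

FACE_DIM, LATENT_DIM = 64 * 64, 128  # flattened 64x64 grayscale frame

# One shared encoder learns identity-agnostic features...
encoder = linear(FACE_DIM, LATENT_DIM)
# ...while each identity gets its own decoder.
decoder_a = linear(LATENT_DIM, FACE_DIM)
decoder_b = linear(LATENT_DIM, FACE_DIM)

def encode(face):
    return np.tanh(face @ encoder)

def decode(latent, decoder):
    return latent @ decoder

# A stand-in frame of person A (random noise here).
face_a = rng.standard_normal(FACE_DIM)

# Normal reconstruction: A's face through A's decoder.
reconstruction = decode(encode(face_a), decoder_a)

# The "swap": A's pose and expression rendered through B's decoder,
# which (after real training) would paint B's identity onto the frame.
swapped = decode(encode(face_a), decoder_b)

print(reconstruction.shape, swapped.shape)
```

In a real system the two decoders are trained on thousands of frames of each person, which is why convincing deepfakes historically required large amounts of source footage.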
It may all seem interesting, or at least fun or silly, to make images speak or move, but this technology carries dangers, some more obvious than others. How was it developed, and how would it affect us?
It was first developed for the movie industry
The term "deepfake" is relatively new, coined by a Reddit user back in 2017, yet the process is much older. The Video Rewrite program, published in 1997, was a simple yet fairly effective way of animating a person's mouth to make them appear to say things they never said. The technology advanced, and the well-known "FaceSwap" feature and app was released to the worldwide public as early as 2015. Various other apps, such as "Reface", emerged and became quite popular.
From fun little personal videos to memes, internet users could modify the faces of others so realistically that you can hardly distinguish the real from the fake versions anymore. Out of curiosity I tried the "Reface" app, and I was surprised by how well done some of these deepfakes are. At first it was fun, but then I thought: "What if someone took my face, deepfaked it, and posted it on creepy sites?" Of course, it sounds a little paranoid, but it has happened to others.
The deepfakes are all fun and games until psychopaths play with them
It happened to Bella Thorne, a former Disney Channel star who had her face swapped onto an adult video. One video of her, in which she was crying over the death of her father, was deepfaked into a sexually explicit adult video. Not only does this type of video manipulation destroy people's confidence, it also leads to harassment, threats, and even blackmail. And let's be honest: a huge number of people still fall prey to e-mail scams. Would they really be able to distinguish between real photos and videos and deepfakes?
The risks are growing as the technology becomes available to almost anyone, to do anything with it. Many social media platforms have officially restricted (or are planning to ban) the use of deepfakes. However, things are handled differently in each country. While some have adapted and passed laws against this type of harassment, many ignore the problem. And this ignorance will bring greater challenges upon many people, not only celebrities, as the culture of "revenge porn" grows day by day.
How could someone protect themselves?
There is almost no way of protecting yourself, short of eliminating every single picture and video of your face on the internet. And ever since the pandemic forced us to carry on with everyday life online, through video meetings, it's easier than ever to have your picture taken without your consent. The most effective solution would be to limit or restrict this type of program from the general public, or at least regulate it in order to protect people.
Deepfake technology has evolved a lot, and while it is genuinely useful in cinema and in video game production, its risks should be taken into consideration. Laws should punish those who attempt to blackmail or harass others via deepfakes, and companies that own this type of technology should keep an eye open and limit the possibility of producing harmful media content.