Deepfake technology has been in the news a great deal recently, but what is it? And why has it become such an issue? Perhaps more than any other modern technology, deepfakes illustrate the dangers that technological advances can pose. They also tap into broader questions about what is and isn’t real, how trusting we are, and whether tighter controls and regulations are needed online.
The team here at Nutbourne provides managed services in London, including cybersecurity solutions for companies. We wanted to look at this phenomenon (which is being put to increasingly insidious use) in a bit more detail, as well as how you can spot it.
What Is Deepfake Technology?
First, though, what actually is deepfake technology? Put in the simplest terms, deepfakes are video recordings in which artificial intelligence has been used to create hyper-realistic copies of ‘the real thing’, whatever that happens to be. The term itself comes from merging deep learning (a particular branch of AI machine learning) with fake.
The technology has found applications in everything from the innocence of Disney movies to the form of blackmail known as revenge porn, as well as interference in highly contentious political matters like elections. Just this week, in fact, the BBC reported on a woman who allegedly used deepfake technology to frame and slander her daughter’s main cheerleading rivals. Channel 4 even used deepfake technology to create a lookalike of the Queen in its 2020 ‘alternative’ Queen’s Speech (for which it received a huge number of complaints).
Is It Becoming More Common?
It’s an unfortunate by-product of technological advancement that those advances are also used as a means of deceiving and exploiting. As technology improves, so too does the number of hackers, scammers and other criminals exploiting it. How much more common is deepfake technology becoming, though? Sadly, very. According to Deeptrace’s 2019 report, there were (a still sizeable) 7,964 deepfake videos online at the end of 2018; by late 2019, that figure had nearly doubled to 14,678.
As the technology becomes more readily available, easier to use and better understood, the number of deepfake videos (and thereby the amount of illicit activity involving them) is only going to continue to increase. Do we just resign ourselves to falling victim to them, then? No, of course not, and there are signs you can look out for to determine whether a video is genuine or not.
Warning Signs To Look Out For
This technology is undoubtedly impressive and becoming more sophisticated almost daily. That said, there’s no replacement for the real thing and there are subtle signs and red flags you can look out for to help you work out whether the content you’re viewing is genuine or not. According to cybersecurity giant Norton, these include the following:
- Unnatural eye movement. This is one of the biggest tell-tale signs that content is deepfake. If it seems ‘off’ in some way, then take another look; look at blinking patterns (or the lack thereof) and eye movement (or the lack thereof). When it comes to eye movement, trusting your gut is usually a safe bet.
- Blurring. We don’t necessarily mean low video quality overall, but rather inconsistencies in the video’s quality. If parts of the featured person’s body blur, for example, while everything else is in crystal-clear focus, then something might be wrong.
- Facial features that don’t quite look right. If you’re watching a video and the face seems at odds with itself – angles that don’t seem right or natural, parts of the face looking like they don’t match or belong together – then you’ve every right to be sceptical and question the video.
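The ‘inconsistent blurring’ red flag above is something automated tools can also measure. A common heuristic (not taken from any specific detector, just an illustrative sketch) is to compare a sharpness score – here, the variance of a discrete Laplacian – across different regions of a frame: genuinely filmed footage tends to have consistent sharpness, while a pasted-in face can be noticeably blurrier than its surroundings. All function names below are our own, hypothetical ones.

```python
# Illustrative sketch only: compare sharpness between two regions of a frame.
# Sharp regions (lots of edges) yield a high Laplacian variance; smooth or
# blurred regions yield a low one. A big mismatch is a possible red flag.

def laplacian_variance(region):
    """Sharpness score: variance of the 4-neighbour discrete Laplacian."""
    h, w = len(region), len(region[0])
    values = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (region[y - 1][x] + region[y + 1][x]
                   + region[y][x - 1] + region[y][x + 1]
                   - 4 * region[y][x])
            values.append(lap)
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def blur_mismatch(region_a, region_b, ratio=4.0):
    """Flag when one region is dramatically sharper than the other."""
    a, b = laplacian_variance(region_a), laplacian_variance(region_b)
    hi, lo = max(a, b), min(a, b)
    return lo == 0 or hi / lo > ratio

# Toy 8x8 greyscale patches: a high-contrast checkerboard (sharp) versus a
# flat patch (smooth, like a heavily blurred face region would be).
sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
smooth = [[128 for _ in range(8)] for _ in range(8)]

print(blur_mismatch(sharp, smooth))  # True: the pair is flagged
```

Real detectors work on actual video frames (e.g. via an image library) and use far more signals than this, but the underlying idea – looking for regions whose quality doesn’t match the rest of the frame – is the same one your own eyes apply.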
This Isn’t A Fight You Have To Face Alone
With the technology becoming so sophisticated, you might feel daunted, as though you too could fall for such a video. Don’t worry, though, because industry leaders around the globe are doing their bit to challenge deepfake technology as well. Google, for instance, has created thousands of deepfakes of its own to help ‘train up’ its AI to better detect real-world examples of deepfake technology.
Microsoft has also thrown its hat into the ring when it comes to tackling deepfakes and the ‘disinformation’ they can help to spread. It has come up with a tool that content producers can use to add a hidden piece of code to their content, so that any subsequent changes made to it can be flagged.
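The general idea behind provenance tools like Microsoft’s can be sketched in a few lines. This is not Microsoft’s actual implementation (which involves signed certificates and per-segment hashes embedded in the media); it is a hedged, stdlib-only toy showing the principle: the producer publishes a cryptographic fingerprint of the original file, and anyone can later recompute it to detect tampering. The function names are our own.

```python
# Toy illustration of content fingerprinting for tamper detection.
# Real provenance systems (e.g. Microsoft's) embed signed metadata in the
# media itself; here we simply publish a SHA-256 digest alongside it.
import hashlib

def fingerprint(content: bytes) -> str:
    """Producer side: compute a digest to publish alongside the content."""
    return hashlib.sha256(content).hexdigest()

def is_unmodified(content: bytes, published_digest: str) -> bool:
    """Viewer side: re-hash the content and compare with the published digest."""
    return hashlib.sha256(content).hexdigest() == published_digest

original = b"frame-data-of-the-original-video"
digest = fingerprint(original)

print(is_unmodified(original, digest))                           # True
print(is_unmodified(b"frame-data-after-a-deepfake-edit", digest))  # False
```

Any change to the content, however small, produces a completely different digest, which is what makes this approach effective at flagging edits even when they are visually undetectable.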
Deepfake technology can do good in the world, but that good is equalled, if not surpassed, by its potential to cause harm. Ultimately, at this stage, look for anything unnatural in the content you’re consuming, and if something seems even slightly off, then take a closer look.
So, if you’d like to find out more about our work as a managed service provider in London, offering extensive cybersecurity solutions and services, then get in touch! Contact Nutbourne today on +44 (0) 203 137 7273. Alternatively, you can fill out one of the online enquiry forms on our website. We look forward to hearing from you!