AI platforms can now produce extremely convincing audio and video fakes used to commit fraud and to distribute misinformation or disinformation. This capability is growing at pace, and bad actors are increasingly leveraging it, creating new technology and awareness challenges for cybersecurity professionals and organizations as a whole.

Disclaimer: The views and opinions expressed in this article belong solely to the author and do not necessarily reflect those of ISC2.

Most technologies can be used for both positive and negative applications, and artificial intelligence (AI) is no exception. Countless projects have explored the use of AI for positive purposes, from cancer diagnosis to climate change research to saving the world’s bee population. Sadly, though, AI can be equally effective in more nefarious activities.

The dictionary defines a deepfake as: “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said”. Regarding deception, the American Psychological Association has a great definition of the two key terms: “misinformation is false or inaccurate information – getting the facts wrong; disinformation is false information which is deliberately intended to mislead – intentionally misstating the facts”.

Misinformation and Disinformation

We are already used to the concept of misinformation and have been for much longer than AI has been part of the mainstream. The concept of “garbage in, garbage out” has been around since the 1950s, and we are used to treating anything we see from a single source on the internet with the view that it might be wrong.

Similarly, many cybersecurity professionals will have seen examples of disinformation in their careers – it is a wide-ranging concept that can include social engineering attacks (where the bad actor pretends to be someone they are not), the old favorites of bogus invoices and fake bank account change letters, and attempts to defraud investors by lying about testing.

The Rise of Deepfakes

With regard to deepfakes, there are two key reasons that they, along with the disinformation they propagate, are a growing threat. First, it can be incredibly cheap – around $100/£100/€100 or less – to make a reasonably convincing deepfake video using widely available tools on the internet (and on the dark web where, according to research by Accenture, the number of deepfake tools available more than doubled between the first quarter of 2023 and the same time the following year). Second, at the other end of the spectrum, some bad actors are willing to spend thousands, or even tens of thousands, on high-quality deepfakes in the hope of a handsome reward from a successful crime. Success does happen – as engineering company Arup discovered when it lost HK$200 million (about U.S. $25 million) as a result of an employee falling for a series of deepfake video calls.

Worse, deepfake attacks are surprisingly common. In a Deloitte poll in May 2024, 25.9% of executives – over a quarter of respondents – said their companies had experienced at least one deepfake attack. In another Deloitte report, the firm argued that AI could enable as much as U.S. $40 billion of fraud in the U.S. alone by 2027. Even the World Economic Forum has disinformation at the top of its threat list.

This prompts the obvious question: if deepfakes are here to stay and are being used more and more against us, what can we do to defend against them?

Using Good to Detect Bad

AI tools designed to do good are springing up to help identify fake videos; a useful paper by the research team behind the entertainingly named DeepFake-O-Meter platform summarizes a number of the products available, though the list grows constantly. Complete reliance on technology as a defense is unwise, however: anti-malware software, for instance, has existed for decades and remains imperfect, so new tools such as these AI deepfake detectors will inevitably be less than 100% effective.
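
These detectors do not share a single common interface, but most frame-level tools follow the same broad pattern: sample frames from a video, score each frame with a trained classifier, and aggregate the results. The Python sketch below illustrates only that pattern; score_frame is a hypothetical placeholder for whichever detection model an organization has actually evaluated, and it is not the DeepFake-O-Meter API.

# Illustrative pattern only: sample frames, score each one, aggregate.
# score_frame() is a hypothetical placeholder, not a real detection API.
import cv2  # pip install opencv-python


def score_frame(frame) -> float:
    """Return a 0..1 'likely fake' score for a single frame.

    Placeholder: swap in whichever detection model your organization
    has evaluated (e.g. one of those surveyed by DeepFake-O-Meter).
    """
    return 0.0  # dummy value so the sketch runs end to end


def score_video(path: str, every_n: int = 30) -> float:
    """Sample every Nth frame and return the mean 'likely fake' score."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            scores.append(score_frame(frame))
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    # Hypothetical file name for illustration only.
    print(f"Mean fake-likelihood: {score_video('meeting_recording.mp4'):.2f}")

Even with a strong model behind score_frame, the aggregated score is a signal to investigate, not a verdict – which is exactly why the non-technical controls below still matter.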

Clearly, other controls are needed alongside technology that can spot fake videos, and there are plenty of techniques and controls that cybersecurity professionals should already be familiar with. Let us look back at the Arup example mentioned earlier, as the opportunities for improvement are significant.

Addressing the Scam Risk

First, financial controls. A single individual made 15 separate payments based on instructions received in video calls from people who appeared to be senior management. The absence of a four-eye check on the finance system was a big help to the scammers in committing the fraud – a second individual might have questioned the validity of the payments. One must also question what controls were in place for verifying account numbers and recipient account names when they were first entered into the system.
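
To make the idea concrete, the fragment below is a minimal sketch of such a control in Python, assuming a hypothetical in-house payments system: a payment is released only when the payee’s account matches a previously verified record and a second, different person has approved it. The names, data structures and rules are illustrative only and do not describe Arup’s systems.

# Minimal sketch of a "four-eye" payment control with payee verification.
# All identifiers and data are hypothetical examples.
from dataclasses import dataclass
from typing import Optional

# Account details captured and independently verified at payee onboarding.
VERIFIED_PAYEES = {"ACME-SUPPLIES": "HK-1234567890"}


@dataclass
class PaymentRequest:
    payee: str
    account: str
    amount: float
    requested_by: str
    approved_by: Optional[str] = None


def release_payment(req: PaymentRequest) -> bool:
    # Control 1: payee account must match the independently verified record.
    if VERIFIED_PAYEES.get(req.payee) != req.account:
        raise ValueError("Payee account does not match verified records")
    # Control 2: a second, distinct approver is required (four-eye check).
    if not req.approved_by or req.approved_by == req.requested_by:
        raise PermissionError("Independent second approver required")
    return True

With rules like these, a request raised and approved by the same individual is rejected regardless of how convincing the video call that prompted it appeared to be.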

Second, procedures. Should it even have been acceptable to instruct and agree to a payment on a video call? Possibly, but only if it is considered and done thoroughly. For years, large transactions have taken place on the strength of a phone call – as exemplified in a 1995 documentary (start at 40:18) in which United Airlines paid Boeing for its first 777 aircraft via a conference call – but with suitable controls and multiple levels of checking (and even a little misinformation in the form of a typo on the receipt, which was picked up by the checks in place). In principle, there is nothing preventing the use of voice or video calls for sensitive matters, but the right checks need to be in place to make it safe.

Finally, there is one standard control at the heart of all defense against deepfakes and disinformation – zero trust. The principle of verifying everything, trusting nobody and being skeptical by default is arguably one of our best defenses against many of the threats facing our organizations, our systems and ourselves. In particular, it can help repel threats that are so novel and evolving so rapidly that a purely software-based defense will struggle to keep pace.
