ISC2 Security Congress attendees experience just how easy it is to create convincing deepfakes, which can be used for good as well as to cause harm.

Today, we find ourselves in an advanced landscape of artificial intelligence (AI), and with it have come equally advanced and convincing deepfakes. In response, we also have to contend with the sophisticated methodologies that both produce and identify these digital deceptions.

“We’re now at the point where we really can’t trust what we’re seeing.” With this bold statement, Kyle Hinterberg, CISSP, Senior Manager at LBMC, began his presentation at the 2024 ISC2 Security Congress in Las Vegas.

What’s a Deepfake?

In Decoding Deepfakes: AI’s Dual Role in Digital Deception and Detection, Hinterberg offered a definition of deepfakes that he generated with ChatGPT. “A deepfake is synthetic media created using artificial intelligence, typically involving manipulated video, audio or images that make it appear as though someone is doing or saying something they never did,” he said. “It leverages deep learning techniques to convincingly alter or fabricate content, often making it indistinguishable from real footage or recordings.” He went on to describe five main types of deepfakes:

  • Image Deepfake – This might be the result of a prompt using GenAI software to create a photo of a person doing something they never did. The example he showed was of Pope Francis wearing a stylish white puffer jacket
  • Audio Deepfake – This would be a computer-generated voice that sounds exactly like someone else, saying things they never said. The example given was a robocall that targeted New Hampshire voters in January 2024, seemingly from US President Joe Biden, urging people not to vote. Incidentally, he went on to report that the perpetrator is facing a US$6M fine and over a dozen charges and that the telecom that transmitted the message faces a US$1M fine
  • Text Deepfake – This is text that could result from a GenAI prompt such as, “Write a blog post in the style of [name].”
  • Video Deepfake – A generated video with audio that appears to be real. The example referenced was a public service announcement produced by Jordan Peele in 2018 in which “President Obama” warns us about the use of deceptive videos (deepfakes) that seem to be real. By today’s standards, it’s easy to tell that it’s a simulation, but the technology has improved tremendously since then. Better results can now be achieved by anybody with access to off-the-shelf software
  • Live Deepfake – Perhaps the most concerning deepfake development has been the ability to map an image onto the face of someone in a live video. This technology, combined with audio deepfakes, makes it possible for threat actors to impersonate people in real time during virtual meetings and events

How Did Deepfakes Get So Good?

In short, AI has taught itself to create human simulacra with ever-greater accuracy. In a Generative Adversarial Network (GAN), two AI models are pitted against each other: one (the generator) creates an image, while the other (the discriminator) judges whether it is real or fake. Each round of feedback makes the generator better, and the process repeats until the result is an image that is indistinguishable from the real thing.
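That adversarial loop is simple enough to sketch in a few lines of code. The following is a minimal, hypothetical illustration in Python using PyTorch (not material from the session): a toy generator learns to imitate a simple 2-D distribution rather than faces, but the generator-versus-discriminator feedback cycle is the same one that powers image deepfakes.

```python
# Minimal GAN sketch (illustrative only): a generator and a discriminator
# trained against each other, as described above.
import torch
import torch.nn as nn

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: scores samples as real (1) or fake (0).
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
real_mean = torch.tensor([2.0, -1.0])  # the "real" data the generator must imitate

for step in range(2000):
    real = real_mean + 0.5 * torch.randn(64, 2)  # samples of real data
    fake = G(torch.randn(64, 8))                 # generated (fake) samples

    # 1. The discriminator learns to tell real from fake.
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. The generator learns to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples cluster around the "real" mean.
print(G(torch.randn(1000, 8)).mean(0).detach())
```

Scaled up to convolutional networks trained on millions of photographs, this same loop produces the photorealistic faces Hinterberg showed the audience.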

Hinterberg demonstrated the uncanny accuracy possible with GANs by showing four images and asking the audience to identify each as genuine or fake. Most of the people in the packed room believed the images were of real people when, in fact, they had been generated by the website thispersondoesnotexist.com.

Most Deepfakes Are Created for Malicious Purposes

Many of us have used GenAI to create images for fun. But the majority of deepfakes are created for malicious purposes: to deceive, to bully, to blackmail. Hinterberg noted the explosive growth of AI-powered fraud, citing a report that revealed that, between 2022 and 2023, this type of malicious activity increased by anywhere from 477% to 4,500% globally.

He also noted that AI-powered fraud is at a stage of development similar to that of the “Nigerian Prince” scams of 20 years ago. However, it can only get worse, with GenAI able to pass as human with ever-increasing accuracy.

Deepfakes Can Be Beneficial

Hinterberg pointed out that not all deepfakes are bad. He cited country music star Randy Travis, who, despite a career-ending stroke, was able to record new music with the help of a singer with a similar style and some GenAI tweaking. He also described Venezuelan journalists who created realistic avatars to stand in for them on camera, protecting their identities from those who would do them harm, as well as the makeup filters built into popular online meeting platforms such as Zoom and Microsoft Teams.

What Can Be Done to Combat Deepfakes?

During the session, Hinterberg covered some of the ways we can fight the bad actors who use deepfakes against others:

  • Laws and Regulations – “Deepfake laws and regulations only serve to keep honest people honest. They don’t really deter the bad guys,” Hinterberg stated. While that may be true, they do give jurisdictions another prosecutorial tool
  • Deepfake Detection Software – While available, it’s not without its faults. Many organizations don’t even use it, and false positives can be a problem
  • Move Away from Physical Likeness as Identification – Instead, we could use other defining characteristics that are difficult for GenAI to reproduce
  • Teach People How to Spot Deepfake Images – Mismatched reflections in people’s eyes, strange digital artifacts and facial hair (or the lack thereof) can all be used to tell generated images from real photos
  • Raise Social Engineering Awareness – Teach people to watch for the telltale signs of a scam, such as a heightened sense of urgency, messages that are out of character for the purported sender, emotional manipulation and offers that are too good to be true. He further suggested establishing passwords or phrases that family members, employees and others can use to verify whether texts, phone calls and emails really come from who they claim to (a simple sketch of that idea follows this list), as well as establishing standard communication methods (e.g., messages from the CEO only come via WebEx and email from their business address)
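On that verification point, the following is a minimal, hypothetical Python sketch of how an organization might formalize a shared verification phrase for digital channels. The challenge-response flow, function names and secret value are illustrative assumptions, not anything Hinterberg prescribed.

```python
# Hypothetical sketch: verify a sender's identity with a secret shared
# out of band, so a deepfaked voice or message cannot answer correctly.
import hashlib
import hmac
import secrets

SHARED_SECRET = b"agreed-upon-in-person"  # exchanged out of band, never sent in a message

def make_challenge() -> str:
    """The verifier sends a fresh random challenge with each request."""
    return secrets.token_hex(16)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    """The sender answers the challenge using the shared secret."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str, secret: bytes = SHARED_SECRET) -> bool:
    """The verifier checks the answer; constant-time compare avoids timing leaks."""
    expected = hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

# Example: the "CEO" requests a wire transfer; the employee challenges first.
challenge = make_challenge()
print(verify(challenge, respond(challenge)))  # True only if the secrets match
```

For families, the low-tech equivalent (agreeing on a phrase in person and asking for it during a suspicious call) accomplishes the same goal.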

Key Takeaways

Deepfakes are a powerful tool in any threat actor’s arsenal: they are often easy to make and difficult to spot. As deepfake technology continues to advance, it is imperative that we remain vigilant about its potential dangers.