Following on from our look at the 2025 predictions of ISC2 members, we asked some of our ISC2 subject matter experts and board members to share their cybersecurity predictions and expectations for the year ahead.

The leadership team and subject matter experts at ISC2 are exposed to a great many sources of insight in their work. Through conversations with members, legislators, other industry bodies, vendors and academics, as an organization we have access to a wealth of knowledge and analysis about the cybersecurity world and where it is going.

2025 Will Be the Year of Deepfakes

The examples of deepfakes that we’ve seen in 2024 have been worrying to say the least. From AI-generated images of faux Taylor Swift fans claiming their support for U.S. election candidates to fake videos of Ukrainian President Volodymyr Zelenskyy bowing his head in surrender, deepfakes have largely targeted major celebrities and world leaders.

“In 2025, I predict we’ll see the use of deepfakes taken to the next level and used as a core tactic in financially motivated cyber-attacks on companies large and small. While business email compromise (BEC) certainly isn’t going anywhere anytime soon, we can expect to see the use of deepfakes to accomplish similar goals,” noted Jon France, CISSP, CISO at ISC2. 

“I also expect that deepfake technology will become increasingly commoditized in order for adversaries to use it on a larger scale and to target more ‘everyday’ people. Deepfakes in the form of audio, video and image manipulation are on the rise and it’s imperative that organizations plan accordingly. How? Educating employees on what deepfakes are and what they look or sound like, clearly outlining processes for employees to report incidents, and exploring the use of deepfake detection tools are all solid starting points. All in all, deepfakes got a decent amount of ‘screen time’ throughout 2024, but the impact on businesses is going to skyrocket in 2025 as deepfake technology becomes commoditized.”

This prediction was echoed by ISC2 board member May Brooks-Kempler, CISSP, HCISPP, who said: “AI has been a very big buzzword for the last couple of years. But we are now seeing the use of AI, deepfakes and other elements in day-to-day practice. Not just in cybersecurity, but in terms of the security side of things, I think that 2025 will see a lot of work on differentiating real and factual data from fake data. We already see the impact of fake data in misinformation in media, politics, phishing and more.”

It’s Not All About AI

Whether it is good or bad actors making use of AI in cybersecurity, the term has dominated headlines and industry discussions for the last year. However, one notable prediction pointed to an important reset in expectations and attitudes towards AI: after several years of hype, it is time to start recognizing the technology’s limitations and to accept that it isn’t a magic wand that can solve everything for everyone.

“AI is not going to change the world and it’s not going to change cybersecurity in 2025. It is developing rapidly, it is creating new challenges and new risks, and of course it’s very exciting. As our understanding of AI develops, we are learning more and more that cybersecurity can benefit from the use of AI. However, AI is limited in many areas, and so it is not the answer to everything,” ISC2 board member James Packer, CISSP, CCSP, explained.

Regulating Emerging Tech and Supply Chains

2024 saw a great deal of momentum in regulation, with new laws coming through in the EU and U.S. covering AI, disclosure and cybersecurity responsibility for products, among other areas.

“I predict that the hot regulatory landscape we’ve seen in cybersecurity in 2024, specifically around emerging technologies like AI, will stay hot well into 2025. The usage of Gen AI (and not always for innocent purposes) has sparked an outcry for placing meaningful regulations on the technology and how it’s used,” France noted.

“We’ve seen household name brands like X/Twitter and LinkedIn face backlash for training AI models on user content. That trend of heightened scrutiny around how AI models are trained will absolutely continue into the new year. That backlash and increased focus will likely spark additional legislation worldwide,” he added.

Supply chains have also been a critical consideration, with greater reliance on digital integration of supply chains and on software supply chains.

“In addition to AI, we can expect to see heightened legislative attention on an area we know and love – the supply chain. Protecting critical national infrastructure has always been a top priority, but our world as it relates to technology is pulled across all compass points,” France said.

“If you think of North and South being the depth to which digital technologies have penetrated operations, and East and West as being the spread between legacy technologies and emerging tech, everything we do is rooted now in some sort of digital environment…meaning everything in our world at some point touches a digital supply chain. That depth and breadth will only continue to expand and spark the need for additional protections through regulation.”