Artificial intelligence (AI) has rapidly become a key fixture on both sides of the cybersecurity equation – supporting professionals in defense and in proactive detection of risk, but at the same time being used by criminals to add new layers to their attacks and scams. We look at some of the considerations ISC2 members are dealing with in relation to AI-based cybersecurity.

AI is reshaping the cybersecurity landscape, offering powerful tools for defense while also introducing new challenges in the form of AI-based attacks. Organizations need to balance the benefits of AI with the risks it poses, continuously adapting their strategies to stay ahead of both attackers and evolving technologies, but also considering the non-technical impact of AI on users, operations and more.

Over 2024, ISC2 members have produced a number of articles, sharing their real-world AI experiences with their peers. From ethics to understanding Generative AI, these are topics members are already dealing with extensively on a daily basis, creating a need to develop and maintain the skills to make the best use of AI systems and services.

AI Regulation is Evolving

The Artificial Intelligence Act (AI Act) from the European Union (E.U.) is the world’s first comprehensive regulation of artificial intelligence. It marks an important moment for AI regulation for a number of reasons, not least because it is the first example of significant regulation of the AI sector by a government or political bloc. It also brings much-needed structure and process to what has so far been an unregulated part of the technology sector, characterized by a wide array of standards, practices and policies.

The National Institute of Standards and Technology (NIST) has also published a comprehensive framework for AI governance, the AI Risk Management Framework (AI RMF).

Having a clear basis for guidelines and regulation is seen as valuable, as Vaibhav Malik, CC, highlighted in his article looking at some of the complexities at the intersection of AI and cybersecurity.

“The most proactive organizations I have worked for have adopted proactive and holistic approaches, encompassing technological and human factors to navigate the complexities of AI and ML in cybersecurity. This has involved implementing robust AI governance frameworks, fostering a culture of security awareness, and collaborating with the broader cybersecurity community to develop effective countermeasures against malicious AI,” he said.

Who am I?

Generative AI (Gen-AI) represents a shift in how we use and interact with AI. However, one critical aspect often overlooked in this new direction is identity management. In the article Identity in the Gen-AI Era, we took a closer look, drawing on member perspectives.

“Gen-AI has demonstrated significant potential for identity-related use cases, transforming how we approach access security. It's no longer just about authenticating and authorizing users to access data; the focus is shifting towards the adoption of machine identities and advanced authentication/authorization technologies that can keep pace with the rapid flow of data,” said Mohamed Mahdy, ISSAP, CISSP, SSCP.
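
To make that shift concrete, the sketch below shows one way a machine identity for an AI agent could be expressed as a short-lived, narrowly scoped token rather than a long-lived API key. It is a minimal, illustrative example using the PyJWT library; the agent name, scopes, lifetime and signing key are all assumptions, not a prescribed design.

```python
# Minimal sketch: expressing a machine identity for an AI agent as a
# short-lived, narrowly scoped token instead of a long-lived API key.
# Uses the PyJWT library; the agent name, scopes, lifetime and signing
# key are illustrative assumptions only.
from datetime import datetime, timedelta, timezone

import jwt  # pip install PyJWT

SIGNING_KEY = "replace-with-a-managed-secret"  # e.g. retrieved from a vault

def issue_agent_token(agent_id: str, scopes: list[str], ttl_minutes: int = 15) -> str:
    """Mint a token that identifies an AI agent and expires quickly."""
    now = datetime.now(timezone.utc)
    claims = {
        "sub": agent_id,            # the machine identity
        "scope": " ".join(scopes),  # least-privilege permissions
        "iat": now,
        "exp": now + timedelta(minutes=ttl_minutes),  # short lifetime
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

# Usage: a summarization agent gets read-only access for 15 minutes.
print(issue_agent_token("gen-ai-summarizer-01", ["docs:read"]))
```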

However, there is more to consider around AI, identity and verification, as Mike Reeves, CISSP, CCSP, highlighted in his exploration of AI’s ability to distort reality when the accuracy and provenance of the data feeding it are not assured.

“Most people just learning about AI today are unaware that hallucinations are possible – let alone a feature of the system. They have taken the word of the AI as gospel, in some cases with horrible consequences – such as the Texas A&M University professor who attempted to fail their students because ChatGPT falsely claimed the AI wrote the student’s papers,” he said.

“The best, recommended practice when employing AI is to [also] have some form of validation outside the AI. Such checks may be in the form of human review, existing code review analysis tools, utilizing Expert Systems, or even cross-referencing using AI agents,” he added.
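
Building on that recommendation, here is a minimal, hypothetical sketch of what validation outside the AI might look like in code: the AI’s answer is accepted automatically only when an independent second opinion agrees, and anything else is routed to a human reviewer. The function name, the similarity-based agreement rule and the threshold are illustrative assumptions rather than a recommended implementation.

```python
# Minimal sketch of out-of-band validation for AI output: the answer is
# accepted only when an independent second opinion agrees; anything else
# is escalated to a human reviewer. The agreement rule and threshold are
# hypothetical placeholders.
from difflib import SequenceMatcher

def cross_check(ai_answer: str, second_opinion: str, threshold: float = 0.8) -> dict:
    """Compare the AI's answer against an independent source.

    The second opinion could come from a human review, an existing
    analysis tool, an expert system or another AI agent.
    """
    agreement = SequenceMatcher(None, ai_answer, second_opinion).ratio()
    status = "accepted" if agreement >= threshold else "needs_human_review"
    return {"answer": ai_answer, "agreement": round(agreement, 2), "status": status}

# Usage: the two sources disagree, so the answer is routed to a person.
print(cross_check("The student wrote this paper.",
                  "ChatGPT claims it generated this paper."))
```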

The Ethical Dilemma

Technology aside, one of the biggest areas for member consideration has been the ethical implications of AI. There are clear practical benefits to the use of AI, such as automating time-consuming and repetitive tasks to free people up for more valuable work and verification, or providing first-line engagement and triage for support teams, reducing the volume, and with it the cost, of requests that can be resolved without human intervention. However, the use of and reliance on AI raises considerations of bias, profiling, the ethical use and sharing of data, and more.

“Many cybersecurity experts may have experienced the substantial influence of AI when defending against cyberattacks. However, I have also grappled with the intricate ethical and moral challenges that arise when applying AI technology in this domain,” said Mathura Prasad, CISSP, in his article looking at the ethical implications of using AI in a cybersecurity context.

The trade-off between privacy and security is one of the most notable ethical conundrums in AI-driven cybersecurity. Through its capacity to process vast amounts of data, AI raises concerns around user privacy and equal treatment.

“A biased AI can result in profiling or unfairly targeting certain groups. For instance, an AI-based malware detection system might flag software disproportionately used by specific demographics, creating ethical concerns around bias and discrimination,” he added.
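
One way to make such concerns measurable is to compare how often a detection system wrongly flags benign samples across different groups. The sketch below is a minimal illustration of that idea; the record format and sample data are fabricated purely for demonstration.

```python
# Minimal sketch of a fairness check for a detection system: compare
# false-positive rates (benign samples wrongly flagged) across groups.
# The record format and sample data are fabricated for illustration.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, flagged: bool, actually_malicious: bool)."""
    fp = defaultdict(int)   # benign samples wrongly flagged, per group
    neg = defaultdict(int)  # all benign samples, per group
    for group, flagged, malicious in records:
        if not malicious:
            neg[group] += 1
            if flagged:
                fp[group] += 1
    return {group: fp[group] / neg[group] for group in neg}

sample = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
]
# A large gap between groups is a signal to investigate the model for bias.
print(false_positive_rates(sample))  # {'group_a': 0.5, 'group_b': 1.0}
```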

The future of AI is promising, disruptive and transformative, with the potential to significantly impact numerous aspects of daily life and industry. However, balancing innovation with ethical considerations and workplace impacts will be crucial as we navigate this evolving landscape.

  • ISC2 is holding a series of global strategic and operational AI workshops. Find one near you
  • Watch our webinar on “Five Ways AI Improves Cybersecurity Defenses Today”
  • Replay our two-part webinar series on the impact of AI on the cybersecurity industry: Part 1 and Part 2