Delivering cybersecurity awareness training for organizations can be enhanced with the thoughtful use of Generative AI tools. However, best practices for this particular use of Generative AI have not yet been well codified, creating challenges for its effective and reliable use.
Security awareness training – some of us have seen it done well, and it’s likely that all of us have seen it done badly at one time or another. The Holy Grail of security awareness training is to make it engaging, relevant, interesting … exciting, even. In the 20 years or so since awareness training became a thing, there have been countless attempts to produce training that users look forward to and actually enjoy.
Given that the 2020s are the decade in which Artificial Intelligence (AI) and Machine Learning (ML) hit the mainstream, it makes perfect sense to ask whether we can use these technologies to make awareness training work at last. This is where Shoshana Sugerman and Brian Callahan, PhD, ISSMP, CISSP, CCSP, SSCP, CC, of Rensselaer Polytechnic Institute (RPI) took us at ISC2 Security Congress 2024 in Las Vegas.
The presenters’ first point was that AI is already very impressive when it comes to doing tasks that would previously have been limited to humans. It was pointed out, for instance, that in recent tests ChatGPT 4.0 performed very well indeed in the Turing Test: “ChatGPT currently passes the Turing Test about 54% of the time. Humans can't tell if GPT output is human or machine, so that counts as a pass for the Turing Test. We can debate whether or not the Turing Test is currently a good test for AI, but the fact of the matter is ChatGPT is in fact passing the Turing Test on its latest models”.
In academia, the cleverness of AI is already causing problems: “Academic research [is] one of the most transformative in terms of what generative AI has to bring; that does not necessarily mean it's all good … there is an epidemic right now in academia of people submitting entirely ChatGPT generated research papers”. AI is also tremendously useful to cyber attackers, of course. For example, AI-generated phishing campaigns are now far more convincing than most human-written ones, because they are not limited by the attacker’s ability to write fluently and convincingly in the target language.
What AI Can Do for Awareness
The point of RPI’s experiment was to establish what AI can do for good in the realm of security awareness, along with how (or, more accurately, whether) we might actually get there.
It is worth addressing the concept of Prompt Engineering (PE), which was key to the presentation and which may not be familiar to everyone. PE is defined as “the process of structuring an instruction that can be interpreted and understood by a generative AI model”.
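To make that definition concrete, the sketch below shows what a structured prompt might look like in code, using the OpenAI Python SDK. It is a minimal, hypothetical illustration: the model name, the system prompt wording and the `generate_training_module` helper are our own assumptions, not the RPI team’s actual prompts or tooling.

```python
# Minimal sketch of prompt engineering for awareness training content.
# Hypothetical illustration: model name, prompt wording and helper name
# are assumptions, not the RPI team's actual prompts or tooling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a cybersecurity awareness trainer. "
    "Write for non-technical office staff, keep each module under "
    "300 words, and end with a three-question multiple-choice quiz."
)

def generate_training_module(topic: str) -> str:
    """Ask the model for a single awareness training module on a topic."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whichever is available
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Create a training module on: {topic}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_training_module("recognising phishing emails"))
```

In practice, most of the engineering effort goes into iterating on the instruction itself – the constraints, audience and output format – rather than the surrounding code.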
The concept of PE threw up an interesting paradox. Cybersecurity trainers know cybersecurity and how to train people, but if you are using PE to have an AI model produce the training, do you even need to be a trainer at all? “Is PE a worthwhile skill for security trainers?” asked the presenters, following up with: “If it is, do those trainers also need security skills?”. The experiment took people who considered themselves PE specialists, those who classed themselves as security specialists, and “dual experts” skilled in both PE and cybersecurity, and had each of them create an awareness training programme.
An interesting nuance was that each of the groups took a different approach to informing the AI engine. The PE engineer gave ChatGPT free rein, while the security specialist pointed it at RPI’s own policies and tools. One member of the team struggled with the output and ended up doing manual work in Word to knock it into the desired shape.
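To illustrate that difference in approach, here is a hedged sketch contrasting a free-rein prompt with one grounded in organisational policy. The policy excerpt and prompt wording are hypothetical placeholders, not RPI’s actual material.

```python
# Contrast between the two prompting approaches described above:
# giving the model free rein versus grounding it in organisational
# policy. The policy excerpt and wording are hypothetical placeholders.

POLICY_EXCERPT = """\
Passwords must be at least 14 characters long and unique per system.
Multi-factor authentication is required for all remote access.
"""

# Free rein: the model draws only on its general training data.
free_rein_prompt = (
    "Write a short security awareness module on password hygiene."
)

# Grounded: the organisation's own policy text is pasted into the
# prompt, constraining the model to organisation-specific guidance.
grounded_prompt = (
    "Using ONLY the policy excerpt below, write a short awareness "
    "module explaining what it requires of staff.\n\n"
    f"--- POLICY EXCERPT ---\n{POLICY_EXCERPT}"
)

# Either string can be sent as the user message in a call like the
# generate_training_module() sketch shown earlier.
print(grounded_prompt)
```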
The Experiment Outcomes
The hypothesis for the experiment was that if AI were any good, there should be no real difference between the outputs of the three different sources of input into the AI engine. So, what actually happened?
Well, on one hand the team was a little surprised that ChatGPT was making jokes; on the other, it became clear that although the tool can provide efficiencies, it cannot substitute entirely for human direction. Opinions differed on how much further development the various outputs would need before use in awareness training, but all agreed that the initial outputs were not good enough to train with as-is and that humans needed to be involved at some point. There was a hint that the outputs could possibly be good enough, but even then, only for entry-level candidates.
The results varied widely. Sometimes the output was simply wrong, but with a little pointing of the AI engine in the right direction, the result became completely correct. As an example, when the presenters nudged the model toward the organisation’s policy library, everything fell into place, even though: “[the policies] had gone live maybe two or three days before this person made their security awareness training”. As they observed: “Sometimes, depending on where your policies may be, ChatGPT very well might know your policies - if you know how to prompt it”.
The Results
The end results were varied. Recipients of training produced by the PE engineers improved at avoiding social engineering and at resisting password attacks. Those on the receiving end of the security experts’ training showed the same improvements, plus better detection of phishing. Meanwhile, those whose training came from the dual experts (security specialists who knew PE) added cyberthreat recognition to their results.
With regard to recipients’ views of AI-based training versus human-delivered training, there was a mix of “so what” and genuine excitement and interest. In the former case, some expressed resignation to the fact that AI is everywhere, so it is no surprise that it is being used to deliver training. Among the more positive views there was still an element of caution and suspicion, but this was combined with comments including: “I would be excited to see how much and how far AI could teach”. Trust was also a big element: one recipient observed that they could see potential benefits, but could they really trust the model to be right?
Was there a definitive conclusion? No: research of this type is very much in its infancy and so one would not expect such a thing. The verdict in the presentation was scattered liberally with words of warning: some parts of the training turned out to be too general; there is the potential for them simply to be wrong, and hence to give wrong advice; and perhaps most importantly some people will be sceptical of AI and will remain in the mindset that human input is needed alongside it. The presenters’ advice echoed this: don’t assume that AI will always be right, because it won’t; if you’re using it, be transparent to those on the receiving end that you’re using it; and be mindful of the potential for AI to plagiarise content, no matter how unintentional.
The team’s suggestion? Perhaps unsurprisingly (but scientifically grounded, following a valid experiment), a combination of AI and human security expertise is probably the way to go; and on balance, some element of PE knowledge is probably going to be useful given the ongoing trend toward AI.
Although there is no absolute certainty over the conclusion, there is a nudge for us all regarding where we might lean going forward: “We gathered tentative evidence that generative AI … can be a worthwhile tool to help develop cybersecurity awareness training”.
So: there’s a way to go, but AI for security awareness training is definitely something to keep exploring.
Footnote
We will leave the reader with one parting comment that will reassure those with an interest in Diversity and Inclusivity. At the beginning of the presentation (see above), the audience saw a slide showing RPI’s Generative AI For Cybersecurity Awareness Team: Aya, Shoshana, Sanya, Sari, Arielle, Quinn, Mary and Lala. As Callahan put it: “Every single face up on the screen right now is a young woman in cybersecurity. So at RPI … we believe in building the cybersecurity future that we want to see. That includes brilliant, strong women. We would love to have you all come join us. Whether you're a student looking to up your game to come over to RPI. A professional looking just to get involved with what we were all doing. Or for research or corporate partnerships, we are happy to have you help us”. So it’s true: D&I is certainly not dead in cybersecurity.
- Register now for ISC2 Security Congress 2025 in Nashville
- ISC2 is holding a series of global strategic and operational AI workshops. Find one near you
- ISC2 Member Voices: Will AI replace the CISO?