After an extensive legislative process, the AI Act was published in the Official Journal of the European Union on July 12, 2024, as expected. What does this new piece of legislation mean for cybersecurity professionals, and how can it help you address AI cybersecurity?

The Artificial Intelligence Act (AI Act) from the European Union (E.U.) is the world's first comprehensive regulation on artificial intelligence. It marks an important moment for AI regulation for several reasons, not least because it is the first significant regulation of the AI sector by a government or political bloc. Much as GDPR shaped data privacy globally, this Act has the potential for global ramifications. It also brings significant structure and process to what has so far been an unregulated part of the technology sector, operating under a wide array of standards, practices and policies.

Defining AI

Central to legislation, and to compliance with it, is the definition of the subject matter. The E.U. defines AI as a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments. This definition aligns with those used by the OECD and in the Biden Administration's Executive Order 14110 on the safe, secure, and trustworthy development and use of AI.

It's a broad but clear definition that organizations and cybersecurity professionals can work with, while still allowing the legislation to maintain far-reaching coverage as technologies and use cases evolve over time.

Categorizing Risk

One part of the legislation that cybersecurity professionals will work with on a regular basis is the categorization of AI risk. A four-tier classification scheme has been developed to help professionals and organizations determine and document their AI risk in a consistent way (a simple illustrative sketch follows the list):

  • Minimal Risk
    Using AI-based technology to automate functions such as spam and content filtering based on pre-defined parameters. Items in this category are unlikely to incur regulatory compliance requirements.
  • Limited Risk
    Uses where there is a need to disclose that an AI system interacted with a user or was the decision-maker in a transaction, such as chatbots and recommendation engines. Limited risk can also apply to AI applications such as video and image processing, where the AI system makes outcome-altering decisions on behalf of the requester.
  • High Risk
    The application or service is permitted, but carries a high degree of operational, security or legal risk. Examples include, but are not limited to, AI-based credit assessment, automated education assessment and AI management or monitoring of critical infrastructure.
  • Unacceptable Risk
    A prohibited system or application of the technology, such as social scoring or other manipulative AI platforms and applications.
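As an illustration only, an organization might begin documenting its AI inventory against these tiers with a minimal sketch like the one below. The tier names come from the Act; the system names, fields and example mapping are assumptions made for the sketch, not legal guidance.

```python
from enum import Enum
from dataclasses import dataclass

class AIRiskTier(Enum):
    """The four risk tiers described by the E.U. AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (illustrative fields)."""
    name: str
    purpose: str
    tier: AIRiskTier
    requires_conformity_assessment: bool = False

# Hypothetical inventory entries, mirroring the examples in the list above.
inventory = [
    AISystemRecord("mail-filter", "Spam and content filtering", AIRiskTier.MINIMAL),
    AISystemRecord("support-bot", "Customer-facing chatbot", AIRiskTier.LIMITED),
    AISystemRecord("credit-scorer", "AI-based credit assessment", AIRiskTier.HIGH,
                   requires_conformity_assessment=True),
]

# High-risk entries are the ones that will need conformity evidence (see Compliance below).
high_risk = [s for s in inventory if s.tier is AIRiskTier.HIGH]
print([s.name for s in high_risk])  # ['credit-scorer']
```

The value here is not the code itself but the consistent vocabulary: once every AI system in use is mapped to a tier, the compliance obligations that follow become much easier to scope.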

Compliance

High-risk AI systems will be subject to a Conformity Assessment (Article 43) to demonstrate adherence to the AI Act before being placed on the market in the E.U. Such an assessment will require generating and collecting supporting documentation and evidence, which may create additional work for cybersecurity professionals, but will result in a clearer and more consistent compliance environment.
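As a loose sketch of what that evidence gathering might look like in practice, the structure below tracks outstanding items per high-risk system. The evidence categories and field names are assumptions for the sketch, not the Act's formal requirements.

```python
from dataclasses import dataclass

@dataclass
class ConformityEvidence:
    """Illustrative evidence checklist for one high-risk AI system.
    The categories below are assumptions, not the Act's formal wording."""
    system_name: str
    technical_documentation: bool = False
    risk_management_records: bool = False
    data_governance_records: bool = False
    logging_and_monitoring: bool = False

    def outstanding(self) -> list[str]:
        """Return the evidence categories that are still missing."""
        return [name for name, done in vars(self).items()
                if name != "system_name" and not done]

evidence = ConformityEvidence("credit-scorer", technical_documentation=True)
print(evidence.outstanding())
# ['risk_management_records', 'data_governance_records', 'logging_and_monitoring']
```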

The fines for violations of the AI Act are set, much like the fine structure for GDPR, as either a percentage of the offending company's global annual turnover in the previous financial year (between 1.5% and 7%, depending on the type of infraction) or a predetermined amount, whichever is higher. However, there is scope for caps on administrative fines for smaller organizations and start-ups in the event of infringements of the AI Act's provisions.
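To make the "whichever is higher" mechanics concrete, here is a minimal sketch of the calculation. The turnover, percentage and fixed-amount figures are illustrative assumptions; the Act sets different ceilings per type of infraction.

```python
def administrative_fine(global_turnover_eur: float,
                        turnover_percentage: float,
                        fixed_amount_eur: float) -> float:
    """Maximum exposure under the AI Act's fine structure: a percentage of
    prior-year global turnover or a fixed amount, whichever is higher."""
    return max(global_turnover_eur * turnover_percentage, fixed_amount_eur)

# Illustrative figures only: EUR 1B turnover, an infringement tier assumed
# at 3% of turnover or a EUR 15M fixed amount, whichever is higher.
exposure = administrative_fine(1_000_000_000, 0.03, 15_000_000)
print(f"Maximum exposure: EUR {exposure:,.0f}")  # Maximum exposure: EUR 30,000,000
```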

Supporting ISC2 Members with AI Insights

ISC2 recently surveyed 1,123 members who work, or have worked, in roles with security responsibilities for its report "AI in Cyber 2024: Is the Cybersecurity Profession Ready?" The aim was to understand the realities of how AI is affecting everyday cybersecurity roles and tasks, as opposed to the perception of how its use intersects with professionals' roles. With the E.U. AI Act now published, this study provides timely insights into the threats, concerns, skills challenges and benefits that AI poses to organizations and cybersecurity teams.

Key AI-driven concerns are not attack-based but regulatory and best-practice driven, reflecting the appetite for formal legislation. They include:

  • The current lack of regulation (59%)
  • Ethical concerns (57%)
  • Privacy invasion (55%)
  • The risk of data poisoning, whether intentional or accidental (52%)

Furthermore, respondents were clear that governments and their agencies need to take a lead in defining the parameters for acceptable AI use, although 72% agreed that different types of AI will need their own tailored regulations. Overall, 63% said regulation of AI should come from collaborative government efforts (ensuring standardization across borders), while 54% also noted a desire for national governments to take the lead in creating regulation.

However, survey respondents are highly positive about the potential of AI, with 82% stating that AI will improve their job efficiency as cybersecurity professionals, while 56% also noted that AI will make some parts of their job obsolete. These figures reflect how both legislation and the cybersecurity profession are evolving in the face of rapidly advancing AI technology: roles will need to change and adapt to ensure compliance as well as to keep pace with technological change.

Developing AI Skills

AI is changing the threat and regulatory landscape as it becomes more advanced and embedded in every aspect of business and society. At the same time, AI is introducing new defensive capabilities to cybersecurity. In both cases, there is a need for continuous education and skills development.

ISC2 has an ongoing program of in-person and live virtual AI workshops, providing members and all cybersecurity professionals with the essential working knowledge needed to develop an AI skill set for technology and regulatory implementation. The two-day workshops have been developed to help members ensure their organization's AI practices are aligned with established risk management and emerging industry best practices.

Members also earn CPE credits for participation, as well as gaining physical and digital certificates of completion.