Taking proactive steps for EU AI Act compliance
As discussed in my previous post, the regulation and safety of AI have been high priorities for governments and key topics in industry forums. Following extensive legal, business, political, and technical discussions, as well as lobbying by public and private stakeholders, the EU Parliament approved the AI Act in March 2024, and it came into force in August 2024. This marked the beginning of a phased implementation period during which various elements, including those covering general-purpose AI services like ChatGPT, will become enforceable.
Touted as the first comprehensive law for AI globally, it will bring strict requirements for the complete AI ecosystem of providers, users, manufacturers, and distributors of AI systems in the EU market. The act follows other major EU digital legislation, such as the GDPR, the Digital Services Act (DSA), the Digital Markets Act, the Data Act, and the Cyber Resilience Act.
In a nutshell, the act introduces a risk-based approach, categorizing systems by risk level with specific compliance requirements for each. The prohibited category includes practices such as social scoring, exploiting vulnerable people, behavioral manipulation, and facial recognition systems in public spaces for law enforcement (with exceptions).
The act specifically defines requirements for general-purpose AI (foundation models) that pose systemic risks (those trained with more than 10^25 FLOPS of compute, for example GPT-4), covering transparency about technical and training data, safeguards against unlawful output, energy consumption reporting, and more. The act also carves out exceptions for research and proposes regulatory sandboxes that let SMEs and innovative businesses develop and test in real-world conditions before placing solutions on the market, allowing for safe innovation.
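The training-compute threshold above is a simple cutoff, which can be sketched as follows (a minimal illustration; the 10^25 FLOPS figure is the act's stated threshold, while the function name and example values are hypothetical):

```python
# Sketch: flagging a general-purpose AI model as posing "systemic risk"
# under the EU AI Act's cumulative training-compute threshold.
# Function name and sample figures are illustrative, not from the act.

SYSTEMIC_RISK_FLOPS = 1e25  # act's threshold: 10^25 floating-point operations

def poses_systemic_risk(training_flops: float) -> bool:
    """True if a model's total training compute meets the act's threshold."""
    return training_flops >= SYSTEMIC_RISK_FLOPS

# A model trained with ~2 * 10^25 FLOPS would fall in scope;
# one trained with 10^24 would not.
print(poses_systemic_risk(2e25))  # True
print(poses_systemic_risk(1e24))  # False
```

In practice, estimating cumulative training compute is itself nontrivial, so this check is only the final step of a larger assessment.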
Penalties for noncompliance can reach 7% of worldwide turnover (or €35M, whichever is higher) for prohibited systems, and 3% (or €15M, whichever is higher) for high-risk AI systems, with further penalties for providing incorrect or misleading information to authorities. Enforcement is set to be overseen by national authorities designated by each EU Member State, together with a centralized European AI Office for monitoring.
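The fine ceilings follow a "whichever is higher" rule, which can be made concrete with a short sketch (the percentages and fixed amounts come from the act as summarized above; the function itself is illustrative):

```python
# Sketch of the fine ceilings: the maximum fine is the greater of a fixed
# amount and a percentage of worldwide annual turnover. Tier figures are
# from the act as summarized in the text; the function is illustrative.

def max_fine_eur(turnover_eur: float, violation: str) -> float:
    """Upper bound of the fine for a given worldwide turnover and violation type."""
    tiers = {
        "prohibited": (0.07, 35_000_000),  # 7% or €35M, whichever is higher
        "high_risk":  (0.03, 15_000_000),  # 3% or €15M, whichever is higher
    }
    pct, floor_eur = tiers[violation]
    return max(pct * turnover_eur, floor_eur)

# A company with €2B worldwide turnover deploying a prohibited system:
print(max_fine_eur(2_000_000_000, "prohibited"))  # 140000000.0, i.e. €140M
```

For small firms the fixed floor dominates: a company with €100M turnover would face the €15M floor for a high-risk violation, since 3% of its turnover is only €3M.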
The act is not without criticism: its definitions and approach to categorizing systems are unclear, it creates ambiguity about which elements fall under compliance, it adds compliance costs, burdens, and steep liability risks, and it attempts to regulate a technology that is nascent, rapidly evolving, and subject to change, raising concerns that it will slow innovation and scare investment away from the EU.
We are living in a fast-paced era of AI innovation and adoption, with leading companies competing aggressively and introducing services early and often, while governments worry about risks to people from bad actors, carelessness, and the immaturity of the technology.
If you feel your organization is walking a razor's edge and needs a lifeline to make it across speedily and safely, do reach out.
Mikel is a senior business and technology leader with broad experience in helping global customers develop and ship next-generation digital products and services. His passion is to collaborate and combine business, technology, and software to create value. At Tietoevry Create, he is responsible for driving technology leadership across the organization and with customers, including technology excellence for solutions, assets and capabilities.