Monday, December 23, 2024

UK technology firms must still respond to EU AI Act


Jason Raeburn, Partner, Paul Hastings

The first comprehensive legal framework for the use and development of artificial intelligence, the EU AI Act, has finally come into force, and while the UK has formally left the bloc of 27, business ties to the continent mean that British organisations will still be impacted. Jason Raeburn, an intellectual property expert at Paul Hastings, has shared some comments on the act and what it means for UK technology companies.

With many of AI’s chief proponents having long warned of the risks of the allegedly powerful technology, regulators have been scrambling to roll out rules governing its use. This has often been met with panicked reactions from those same proponents, who seem distressed that regulators have taken their warnings seriously – and not just as boasts designed to draw in even more capital from excited investors.

For example, the European Union’s (EU) ‘AI Act’ has long been lobbied against by AI gurus as something which “could drastically limit the sector’s potential” – depending on how strict it turned out to be in practice. But now that the law has been finalised, what exactly does it mean?

Jason Raeburn, intellectual property and technology litigation partner at law firm Paul Hastings, explains, “Now that the EU AI Act has come into force, UK tech firms need to be geared up for change. The act will require businesses to have an in-depth understanding of its regulatory requirements and be primed and ready for implementation – especially those aiming to scale up to a global market.”

The AI Act is the first-ever legal framework on AI, addressing the risks of the technology and positioning Europe to play a leading role globally. According to the EU’s own website, it “provides AI developers and deployers with clear requirements and obligations regarding specific uses of AI” – which, the institution asserts, will strengthen rather than hinder uptake of and investment in AI, as it supports “the development of trustworthy AI” with measures guaranteeing the safety and fundamental rights of people and businesses.

Risk levels

Put simply, the act classifies non-exempt AI applications by their risk of causing harm, across five levels. At the most severe end, applications posing unacceptable risks – such as those which manipulate human behaviour or use real-time remote biometric identification – are outright banned. High-risk applications, like systems used in health, education, recruitment, critical infrastructure management, law enforcement or justice, must comply with security, transparency and quality obligations, and undergo conformity assessments.

Then there are lower-intensity regulations for limited-risk applications: those which can generate or manipulate images, sound or video, for example, only have transparency obligations. There is also an additional category for general-purpose AI, which includes foundation models like ChatGPT and likewise carries transparency requirements. And finally, minimal-risk applications, like AI systems used for video games or spam filters, are not regulated at all.
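For illustration, this tiered structure can be thought of as a simple lookup from application type to obligations. The following is a minimal Python sketch of that idea, assuming only the five categories as summarised above; the tier names and example applications are hypothetical shorthand, not the act’s legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative model of the AI Act's five tiers as summarised above."""
    UNACCEPTABLE = "banned outright"
    HIGH = "security, transparency and quality obligations plus conformity assessment"
    LIMITED = "transparency obligations only"
    GENERAL_PURPOSE = "transparency requirements (foundation models)"
    MINIMAL = "not regulated under the act"

# Hypothetical example applications mapped to the tiers described in the article.
EXAMPLE_CLASSIFICATIONS = {
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "recruitment screening system": RiskTier.HIGH,
    "image or video generation tool": RiskTier.LIMITED,
    "general-purpose foundation model": RiskTier.GENERAL_PURPOSE,
    "video-game NPC behaviour": RiskTier.MINIMAL,
}

def obligations_for(application: str) -> str:
    """Return a one-line summary of the obligations for an example application."""
    tier = EXAMPLE_CLASSIFICATIONS[application]
    return f"{application}: {tier.name} -> {tier.value}"

for app in EXAMPLE_CLASSIFICATIONS:
    print(obligations_for(app))
```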

And even though the UK left the EU, the rules have an ‘extraterritorial effect’, according to Raeburn. This means “compliance will be mandatory for many UK AI systems”, including those whose outputs are used by people in the EU. As a result, firms will be “required to make significant investment in compliance measures (and engineering changes) to avoid hefty penalties”.

“Due to the broad scope of the EU’s regulation, UK tech businesses will inevitably face friction as the act comes into force, especially for those involved in high-risk AI applications,” Raeburn continues. “The act’s risk-based approach means that higher-risk AI systems will encounter more rigorous compliance demands, with severe penalties for non-compliance.”

Raeburn concludes that for UK tech firms, “this will likely have a huge impact on operations and strategic planning.” However, firms which act fast may put themselves in a strong position to deal with future UK-specific rules – with the possibility that the new government will also want to have its say on the regulatory front to address perceived risks relating to the technology.
