Wednesday, November 6, 2024

UK will legislate against AI risks in next year, pledges Kyle



The UK will bring in legislation to safeguard against the risks of artificial intelligence in the next year, technology secretary Peter Kyle has said, as he pledged to invest in the infrastructure that will underpin the sector’s growth.

Kyle told the Financial Times’ Future of AI summit on Wednesday that Britain’s voluntary agreement on AI testing was “working, it’s a good code” but that the long-awaited AI bill would be focused on making such accords with leading developers legally binding.

The legislation, which Kyle said would be presented to MPs in the current parliament, will also turn the UK’s AI Safety Institute into an arm’s-length government body, giving it “the independence to act fully in the interests of British citizens”. At present, the body is a directorate of the Department for Science, Innovation and Technology.

At the UK-organised AI safety summit last November, companies including OpenAI, Google DeepMind and Anthropic signed a “landmark” but non-binding agreement allowing partner governments to test their forthcoming large language models for risks and vulnerabilities before they were released to consumers.

Kyle said that while he was “not fatalistic” about advancements in AI, “citizens need to know that we are mitigating the potential risks”.

The legislation will focus exclusively on ChatGPT-style “frontier” models: the most advanced systems, made by just a small cluster of companies, which are capable of generating text, images and video.

Kyle also pledged to invest in the advanced computing power needed to enable Britain to train its own sovereign AI models and LLMs, after ministers came under fire in August for scrapping funding for an “exascale” supercomputer project at Edinburgh university. It had been promised £800mn by the previous Conservative government.

Exascale supercomputing — defined as the ability to perform a billion billion operations a second — is widely seen as a crucial step to unlocking the widespread adoption of AI.
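
For scale, the arithmetic behind that definition can be sketched in a few lines of Python (the 10^18 threshold below is the standard definition of an exaFLOP, not a figure from the article):

    # Exascale threshold: a billion billion operations per second
    billion = 10**9
    exascale_ops_per_second = billion * billion  # 10**18, one exaFLOP per second
    print(f"{exascale_ops_per_second:,} operations per second")
    # prints: 1,000,000,000,000,000,000 operations per second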

There are two known fully functional exascale computers in the world, both in the US. Experts believe that China also has at least one, although it has not submitted results to international leaderboards of computing capacity.

Kyle said the decision to scrap the existing Edinburgh exascale project was a “painful” consequence of Labour’s fiscal inheritance from the Tories.

“I didn’t cut anything because you can’t cut something that doesn’t exist,” he said of the previous government’s failure to hand over any money to the programme despite promises to do so.

While the government would not be able to stump up £100bn to invest in compute infrastructure by itself, it would partner with private companies and investors to “unlock that kind of money going forward”, he said.

Kyle also suggested that the commitments inherited from the previous government were not adequately suited to the needs of the LLM sector today, saying: “If we’d planned for this two years ago, we would have got it wrong.”

“I will make statements specifically on compute, relating to sovereign compute capacity, but also general compute capacity that’s needed right across the economy and society for researchers and businesses alike,” he said. “But when I make an announcement . . . it will be funded, it will be costed and it will be delivered.”

Separately, Sarah Cardell, chief executive of the Competition and Markets Authority, said the UK could become a leader in AI innovation and that the antitrust watchdog’s “unique” approach to digital regulation, through its new Digital Markets Unit, would enable a “very targeted, proportionate” approach to Big Tech.

The CMA’s proposed regulation was not “going to deter or chill investment”, Cardell told the FT summit. “There’s a huge opportunity here for this to be a platform for growth for the UK tech sector.”
