How the UK fell out of love with an AI bill

POLITICO - Monday, December 22, 2025

LONDON — Standing in Imperial College London’s South Kensington Campus in September, Britain’s trade chief Peter Kyle insisted that a tech pact the U.K. had just signed with the U.S. wouldn’t hamper his country’s ability to make its own laws on artificial intelligence.  

He had just spoken at an intimate event to celebrate what was meant to be a new frontier for the “special relationship” — a U.K.-U.S. Technology Prosperity Deal.

Industry representatives were skeptical, warning at the time that the U.S. deal would make the path to a British AI bill, which ministers had been promising for months, more difficult.

This month U.K. Tech Secretary Liz Kendall confirmed ministers are no longer looking at a “big, all-encompassing bill” on AI.  

But Britain’s shift from warning the world about runaway AI to ditching its own attempts to legislate frontier models, such as ChatGPT and Google’s Gemini, goes back much further than that September morning.

Gear change

In opposition, Prime Minister Keir Starmer promised “stronger” AI regulation. His center-left Labour Party committed to “binding regulation” on frontier AI companies in its manifesto for government in 2024, and soon after it won a landslide election that summer it set out plans for AI legislation.

But by the fall of 2024 the view inside the U.K. government was changing.

Kyle, then tech secretary, had asked tech investor Matt Clifford to write an “AI Opportunities Action Plan” which Starmer endorsed. It warned against copying “more regulated jurisdictions” and argued the U.K. should keep its current approach of letting individual regulators monitor AI in their sectors. 

In October 2024 Starmer described AI as the “opportunity of this generation.” AI shifted from a threat to be legislated to an answer to Britain’s woes of low productivity, crumbling public services and sluggish economic growth. Labour had come to power that July promising to fix all three.

A dinner that month with Demis Hassabis, chief executive and co-founder of Google DeepMind, reportedly opened Starmer’s eyes to the opportunities of AI. Hassabis was coy on the meeting when asked by POLITICO, but Starmer got Hassabis back the following month to speak to his cabinet — a weekly meeting of senior ministers — about how AI could transform public services. That has been the government’s hope ever since.

In an interview with The Economist this month Starmer spoke about AI as a binary choice between regulation and innovation. “I think with AI you either lean in and see it as a great opportunity, or you lean out and think, ‘Well, how do we guard ourselves against the risk?’ I lean in,” he said. 

Enter Trump

The evolution of Starmer’s own views in the fall of 2024 coincided with the second coming of Donald Trump to the White House.

In a letter to the U.S. attorney general the month Trump was elected, influential Republican Senator Ted Cruz accused the U.K.’s AI Security Institute of hobbling America’s efforts to beat China in the race to powerful AI.

The White House’s new occupants saw AI as a generational competition between America and China. Any attempt by foreign regulators to hamper its development was seen as a threat to U.S. national security.

It appeared Labour’s original plan, to force largely U.S. tech companies to open their models to government testing pre-release, would not go down well with Britain’s biggest ally.

Instead, U.K. officials adapted to the new world order. In Paris in February 2025, at an international AI Summit series which the U.K. had set up in 2023 to keep existential AI risks at bay, the country joined the U.S. in refusing to sign an international AI declaration. 

The White House went on to attack international AI governance efforts, with its director of tech policy Michael Kratsios telling the U.N. that the U.S. wanted its AI technology to become the “global gold standard” with allies building their own AI tech on top of it. 


The U.K. was the first country to sign up, agreeing the Technology Prosperity Deal with the U.S. that September. At the signing ceremony, Trump couldn’t have been clearer. “We’re going to have a lot of deregulation and a tremendous amount of innovation,” he told a group of hand-picked business leaders.  

The deal, which was light on detail, was put on ice in early December as the U.S. used it to try to extract more trade concessions from the Brits. Kratsios, one of the architects of that tech pact, said work on it would resume once the U.K. had made “substantial” progress in other areas of trade.  

Difficult home life

While Starmer’s overtures to the U.S. have made plans for an AI bill more difficult, U.K. lawmakers have further complicated any attempt to introduce legislation. A group of powerful “tech peers” in the House of Lords has vowed to hijack any tech-related bill and use it to force the government into concessions in areas where they have concerns, such as AI and copyright, just as they did this summer over the Data Use and Access Bill.

Senior civil servants have also warned ministers a standalone AI bill could become a messy “Christmas tree” bill, adorned with unrelated amendments, according to two officials granted anonymity to speak freely.

The government’s intention is to instead break any AI-related legislation up into smaller chunks. Nudification apps, for example, will be banned as part of the government’s new Violence Against Women and Girls Strategy. AI chatbots are being looked at through a review of the Online Safety Act, while there will also need to be legislation for AI Growth Labs — testbeds where companies can experiment with their products before going to market.

Asked about an AI bill by MPs on Dec. 3, Kendall said: “There are measures we will need to take to make sure we get the most on growth and deal with regulatory issues. If there are measures we need to do to protect kids online, we will take those. I am thinking about it more in terms of specific areas where we may need to act rather than a big all-encompassing bill.”

The team in Kendall’s department which looks at frontier AI regulation, meanwhile, has been reassigned, according to two people familiar with the team.

Polling by the Ada Lovelace Institute shows Labour’s leadership is out of sync with public views on AI, with 9 in 10 wanting an independent AI regulator with enforcement powers.  

“The public wants independent regulation,” said Ada Lovelace Director Gaia Marcus. “They prioritize fairness, positive social impacts and safety in trade-offs against economic gains, speed of innovation and international competition.” 

A separate study by Focal Data found that framing AI as a geopolitical competition also doesn’t resonate with voters. “They don’t want to work more closely with the United States on shared digital and tech goals because of their distrust of its government,” the research found.

Political leadership must step in to bridge that gap, former U.K. prime minister Tony Blair wrote in a report last month. “Technological competitiveness is not a priority for voters because European leaders have failed to connect it to what citizens care about: their security, their prosperity and their children’s futures,” he wrote.  

For Starmer, who has struggled to connect with voters, that will be a huge challenge.