A clash between Poland’s right-wing president and its centrist ruling coalition
over the European Union’s flagship social media law is putting the country
further at risk of multimillion-euro fines from Brussels.
President Karol Nawrocki is holding up a bill that would implement the EU’s
Digital Services Act, a tech law that allows regulators to police how social
media firms moderate content. Nawrocki, an ally of U.S. President Donald Trump,
said in a statement that the law would “give control of content on the internet
to officials subordinate to the government, not to independent courts.”
The governing coalition led by Prime Minister Donald Tusk, Nawrocki’s rival,
warned that this further exposed Poland to the risk of EU fines as high as €9.5
million.
Deputy Digital Minister Dariusz Standerski said in a TV interview that, “since
the president decided to veto this law, I’m assuming he is also willing to have
these costs [of a potential fine] charged to the budget of the President’s
Office.”
Nawrocki’s refusal to sign the bill brings back bad memories of Warsaw’s
years-long clash with Brussels over the rule of law, a conflict that began when
Nawrocki’s Law and Justice party rose to power in 2015 and started reforming the
country’s courts and regulators. The EU imposed €320 million in penalties on
Poland from 2021 to 2023.
Warsaw has been locked in a fight with the Commission over its slow
implementation of the tech rulebook since 2024, when the EU executive put Poland
on notice for delaying the law’s implementation and for failing to designate a
responsible authority. In May last year Brussels took Warsaw to court over the issue.
If the EU imposes new fines over the rollout of digital rules, it would
“reignite debates reminiscent of the rule-of-law mechanism and frozen funds
disputes,” said Jakub Szymik, founder of Warsaw-based non-profit watchdog group
CEE Digital Democracy Watch.
Failure to implement the tech law could in the long run even lead to fines and
penalties accruing over time, as happened when Warsaw refused to reform its
courts during the earlier rule of law crisis.
The European Commission said in a statement that it “will not comment on
national legislative procedures.” It added that “implementing the [Digital
Services Act] into national law is essential to allow users in Poland to benefit
from the same DSA rights.”
“This is why we have an ongoing infringement procedure against Poland” for its
“failure to designate and empower” a responsible authority, the statement said.
Under the tech platforms law, countries were supposed to designate a national
authority to oversee the rules by February 2024. Poland is the only EU country
that hasn’t moved to at least formally agree on which regulator that should be.
The European Commission is the chief regulator for a group of very large online
platforms, including Elon Musk’s X, Meta’s Facebook and Instagram, Google’s
YouTube, Chinese-owned TikTok and Shein and others.
But national governments have the power to enforce the law on smaller platforms
and certify third parties for dispute resolution, among other things. National
laws allow users to exercise their rights to appeal to online platforms and
challenge decisions.
When blocking the bill last Friday, Nawrocki said a new version could be ready
within two months.
But that was “very unlikely … given that work on the current version has been
ongoing for nearly two years and no concrete alternative has been presented” by
the president, said Szymik, the NGO official.
The Digital Services Act has become a flashpoint in the political fight between
Brussels and Washington over how to police online platforms. The EU imposed its
first-ever fine under the law on X in December, prompting the U.S.
administration to sanction former EU Commissioner Thierry Breton and four other
Europeans.
Nawrocki last week likened the law to “the construction of the Ministry of Truth
from George Orwell’s novel 1984,” a criticism that echoed claims by Trump and
his top MAGA officials that the law censored conservatives and right-wingers.
Bartosz Brzeziński contributed reporting.
LONDON — Standing in Imperial College London’s South Kensington Campus in
September, Britain’s trade chief Peter Kyle insisted that a tech pact the U.K.
had just signed with the U.S. wouldn’t hamper his country’s ability to make its
own laws on artificial intelligence.
He had just spoken at an intimate event to celebrate what was meant to be a new
frontier for the “special relationship” — a U.K.-U.S. Technology Prosperity
Deal.
Industry representatives were skeptical, warning at the time that the U.S. deal
would make the path to a British AI bill, which ministers had been promising for
months, more difficult.
This month U.K. Tech Secretary Liz Kendall confirmed ministers are no
longer looking at a “big, all-encompassing bill” on AI.
But Britain’s shift from warning the world about runaway AI to ditching its own
attempts to legislate frontier models, such as ChatGPT and Google’s Gemini, goes
back much further than that September morning.
GEAR CHANGE
In opposition Prime Minister Keir Starmer promised “stronger” AI
regulation. His center-left Labour Party committed to “binding regulation” on
frontier AI companies in its manifesto for government in 2024, and soon after it
won a landslide election that summer it set out plans for AI legislation.
But by the fall of 2024 the view inside the U.K. government was changing.
Kyle, then tech secretary, had asked tech investor Matt Clifford to write an “AI
Opportunities Action Plan” which Starmer endorsed. It warned against copying
“more regulated jurisdictions” and argued the U.K. should keep
its current approach of letting individual regulators monitor AI in their
sectors.
In October 2024 Starmer described AI as the “opportunity of this
generation.” AI shifted from a threat to be legislated to an answer to Britain’s
woes of low productivity, crumbling public services and sluggish economic
growth. Labour had come to power that July promising to fix all three.
A dinner that month with Demis Hassabis, chief executive and co-founder of
Google DeepMind, reportedly opened Starmer’s eyes to the opportunities of AI.
Hassabis was coy on the meeting when asked by POLITICO, but Starmer got Hassabis
back the following month to speak to his cabinet — a weekly meeting of senior
ministers — about how AI could transform public services. That has been the
government’s hope ever since.
In an interview with The Economist this month Starmer spoke about AI as a binary
choice between regulation and innovation. “I think with AI you either lean in
and see it as a great opportunity, or you lean out and think, ‘Well, how do we
guard ourselves against the risk?’ I lean in,” he said.
ENTER TRUMP
The evolution of Starmer’s own views in the fall of 2024 coincided with the
second coming of Donald Trump to the White House.
In a letter to the U.S. attorney general the month Trump was elected,
influential Republican senator Ted Cruz accused the U.K.’s AI Security Institute
of hobbling America’s efforts to beat China in the race to powerful AI.
The White House’s new occupants saw AI as a generational competition between
America and China. Any attempt by foreign regulators to hamper its development
was seen as a threat to U.S. national security.
It appeared Labour’s original plan, to force largely U.S. tech companies
to open their models to government testing pre-release, would not go down well
with Britain’s biggest ally.
Instead, U.K. officials adapted to the new world order. In Paris in February
2025, at an international AI Summit series which the U.K. had set up in 2023 to
keep existential AI risks at bay, the country joined the U.S. in refusing to
sign an international AI declaration.
The White House went on to attack international AI governance efforts, with its
director of tech policy Michael Kratsios telling the U.N. that the U.S. wanted
its AI technology to become the “global gold standard” with allies building
their own AI tech on top of it.
The U.K. was the first country to sign up, agreeing
the Technology Prosperity Deal with the U.S. that September. At the signing
ceremony, Trump couldn’t have been clearer. “We’re going to have a lot
of deregulation and a tremendous amount of innovation,” he told a group of
hand-picked business leaders.
The deal, which was light on detail, was put on ice in early December as the
U.S. used it to try to extract more trade concessions from the Brits. Kratsios,
one of the architects of that tech pact, said work on it would resume once the
U.K. had made “substantial” progress in other areas of trade.
DIFFICULT HOME LIFE
While Starmer’s overtures to the U.S. have made plans for an AI bill more
difficult, U.K. lawmakers have further complicated any attempt to introduce
legislation. A group of powerful “tech peers” in the House of Lords have vowed
to hijack any tech-related bill and use it to force the government to make
concessions in other areas of concern, such as AI and copyright, just as they
did this summer over the Data (Use and Access) Bill.
Senior civil servants have also warned ministers a standalone AI bill could
become a messy “Christmas Tree” bill, adorned with unrelated amendments, according
to two officials granted anonymity to speak freely.
The government’s intention is to instead break any AI-related legislation
up into smaller chunks. Nudification apps, for example, will be banned as part
of the government’s new Violence Against Women and Girls Strategy, AI chatbots
are being looked at through a review of the Online Safety Act, while there will
also need to be legislation for AI Growth Labs — testbeds where companies can
experiment with their products before going to market.
Asked about an AI bill by MPs on Dec. 3, Kendall said: “There are measures
we will need to take to make sure we get the most on growth and deal with
regulatory issues. If there are measures we need to do to protect kids online,
we will take those. I am thinking about it more in terms of specific areas where
we may need to act rather than a big all-encompassing bill.”
The team in Kendall’s department which looks at frontier AI regulation,
meanwhile, has been reassigned, according to two people familiar with the team.
Polling by the Ada Lovelace Institute shows Labour’s leadership is out of
sync with public views on AI, with 9 in 10 wanting an independent AI regulator
with enforcement powers.
“The public wants independent regulation,” said Ada Lovelace Director Gaia
Marcus. “They prioritize fairness, positive social impacts and safety in
trade-offs against economic gains, speed of innovation and international
competition.”
A separate study by Focal Data found that framing AI as a geopolitical
competition also doesn’t resonate with voters. “They don’t want to work more
closely with the United States on shared digital and tech goals because of their
distrust of its government,” the research found.
Political leadership must step in to bridge that gap, former U.K. prime minister
Tony Blair wrote in a report last month. “Technological competitiveness is not a
priority for voters because European leaders have failed to connect it to what
citizens care about: their security, their prosperity and their children’s
futures,” he wrote.
For Starmer, who has struggled to connect with the voters, that will be a huge
challenge.
Senate Commerce Chair Ted Cruz (R-Texas) insisted Tuesday the idea of a 10-year
moratorium on state and local artificial intelligence laws remains alive —
despite a Republican argument that knocked it out of the summer’s budget bill.
“Not at all dead,” Cruz said at POLITICO’s AI & Tech Summit on Tuesday. “We had
about 20 battles, and I think we won 19. So I feel pretty good.”
Cruz said the controversial proposal made it further than conventional wisdom in
Washington suggested it could, ultimately passing scrutiny with the Senate’s
rules referee thanks to the “very creative” work of his staff.
He took a swipe at the Democratic-led states that have been most aggressive in
passing tech legislation in the past few years: “Do you want Karen Bass and
Comrade Mamdani setting the rules for AI?” he asked, referring to the Los
Angeles mayor and New York City mayoral candidate.
Cruz acknowledged the moratorium fell out due to the opposition of Sen. Marsha
Blackburn (R-Tenn.), who was worried about the fate of her own state’s law
protecting musicians from AI copyright violations.
Cruz suggested the two are not in further talks about a path forward.
“She is doing her own thing,” Cruz said, while saying he was working closely
with the White House.
Many in Washington have long suspected the idea’s legislative prospects were
effectively dead after the GOP budget bill passed without its inclusion. It was
also opposed by a firm bloc of Republicans, including conservatives like
Sen. Josh Hawley (Mo.), Rep. Marjorie Taylor Greene (Ga.) and Steve Bannon.
Cruz has been actively engaged on artificial intelligence issues throughout the
current Congress. Last week, he offered a regulatory “sandbox” proposal that
would effectively loosen the regulatory load on emerging AI technologies.
White House Office of Science and Technology Policy director Michael Kratsios
formally endorsed Cruz’s new plan during a committee hearing. Rep. Jay
Obernolte (R-Calif.), a leading House voice on AI issues, is preparing his own
legislation and hoping for “legislative oxygen” to advance it by the end of the
year.
Cruz said that “of course” his legislation would ensure certain existing laws,
like consumer safety protections, remain in force — amid concerns from outside
groups and Democrats that it could imperil the ability to enforce current
protections.
He said failing to pass laws unshackling AI would only benefit U.S. adversaries.
“The biggest winner of the status quo with no moratorium is China. Why? Because
we’re going to see contradictory regulations,” Cruz said.
BRUSSELS — Brussels has served the world’s leading artificial intelligence
companies with a tricky summer dilemma.
OpenAI, Google, Meta and others must decide in the coming days and weeks whether
to sign up to a voluntary set of rules that will ensure they comply with the
bloc’s stringent AI laws — or refuse to sign and face closer scrutiny from the
European Commission.
Amid live concerns about the negative impacts of generative AI models such as
Grok or ChatGPT, the Commission on Thursday took its latest step to limit those
risks by publishing a voluntary set of rules instructing companies on how to
comply with new EU law.
The final guidance handed clear wins to European Parliament lawmakers and civil
society groups that had sought a strong set of rules, even after companies such
as Meta and Google had lambasted previous iterations of the text and tried to
get it watered down.
That puts companies in a tough spot.
New EU laws will require them to document the data used to train their models
and address the most serious AI risks as of Aug. 2.
They must decide whether to use guidance developed by academic experts under the
watch of the Commission to meet these requirements, or get ready to convince the
Commission they comply in other ways.
Companies that sign up for the rules will “benefit from more legal certainty and
reduced administrative burden,” Commission spokesperson Thomas Regnier told
reporters on Thursday.
French AI company Mistral on Thursday became the first to announce it would sign
on the dotted line.
WIN FOR TRANSPARENCY
Work on the so-called code of practice began in September, as an extension of
the bloc’s AI rulebook that became law in August 2024.
Thirteen experts embarked on a process focused on three areas: the transparency
AI companies need to show to regulators and customers who use their models; how
they will comply with EU copyright law; and how they plan to address the most
serious risks of AI.
The proceedings quickly boiled down to a few key points of contention.
Industry repeatedly emphasized that the guidance should not go beyond the
general direction of the AI Act, while campaigners complained the rules were at
risk of being watered down amid intense industry lobbying.
On Wednesday, European Parliament lawmakers said they had “great concern” about
“the last-minute removal of key areas of the code of practice,” such as
requiring companies to be publicly transparent about their safety and security
measures and “the weakening of risk assessment and mitigation provisions.”
In the final text put forward on Thursday, the Commission’s experts handed
lawmakers a win by explicitly mentioning the “risk to fundamental rights” on a
list of risks that companies are asked to consider.
Laura Lázaro Cabrera of the Center for Democracy and Technology, a civil rights
group, said it was “a positive step forward.”
Public transparency was also addressed: the text says companies will have to
“publish a summarised version” of the reports filed to regulators before putting
a model on the market.
Google spokesperson Mathilde Méchin said the company was “looking forward to
reviewing the code and sharing our views.”
Big Tech lobby group CCIA, which includes Meta and Google among its members, was
more critical, stating that the code “still imposes a disproportionate burden on
AI providers.”
“Without meaningful improvements, signatories remain at a disadvantage compared
to non-signatories,” said Boniface de Champris, senior policy manager at CCIA
Europe.
He criticized “overly prescriptive” safety and security measures and slammed a
copyright section containing “new disproportionate measures outside the Act’s
remit.”
SOUR CLIMATE
A sour climate around the EU’s AI regulations and the drafting process for the
guidance will likely affect tech companies’ calculations on how to respond.
“The process for the code has so far not been well managed,” said Finnish
European Parliament lawmaker Aura Salla, a conservative politician and former
lobbyist for Meta, ahead of Thursday’s announcement.
The thirteen experts produced a total of four drafts over nine months, a process
that garnered the attention of over 1,000 participants and was discussed in
several iterations of plenaries and four working groups — often in the evenings
since some of the experts were based in the U.S. or Canada.
The Commission’s Regnier applauded the process as “inclusive,” but both industry
and civil society groups said they felt they had not been heard.
The U.S. tech companies that must now decide whether to sign the code have also
shown themselves critical of the EU’s approach to other parts of its AI
regulation.
Tech lobby groups, such as the CCIA, were among the first to call for a pause on
the parts of the EU’s AI Act that had not yet been implemented — specifically,
obligations for companies deploying high-risk AI systems, which are set to take
effect next year.
BRUSSELS — A series of Hitler-praising comments by Elon Musk’s artificial
intelligence chatbot Grok has fired up European policymakers to demand stronger
action against Big Tech companies as the bloc takes another step to enforce its
laws.
Musk’s chatbot this week sparked criticism for making antisemitic posts that
included glorifying Nazi leader Adolf Hitler as the best-placed person to deal
with alleged “anti-white hate,” after X updated its AI model over the weekend.
The latest foul-mouthed responses from the chatbot saw EU policymakers seize the
opportunity to demand robust rules for the most complex and advanced AI models —
such as the one that underpins Grok — in new industry guidance expected
Thursday.
It’s also put a spotlight on the EU’s handling of X, which is under
investigation for violating the bloc’s social media laws.
The Grok incident “highlights the very real risks the [EU’s] AI Act was designed
to address,” said Italian Social-Democrat European Parliament lawmaker Brando
Benifei, who led work on the EU’s AI rulebook that entered into law last year.
“This case only reinforces the need for EU regulation of AI chat models,” said
Danish Social-Democrat lawmaker Christel Schaldemose, who led work on the EU’s
Digital Services Act, designed to tackle dangerous online content such as hate
speech.
Grok owner xAI quickly removed the “inappropriate posts” and stated Wednesday it
had taken action to “ban hate speech before Grok posts on X,” without clarifying
what this entails.
The EU guidance is a voluntary compliance tool for companies that develop
general-purpose AI models, such as OpenAI’s GPT, Google’s Gemini or X’s Grok.
The European Commission last week gave a closed-door presentation seen by
POLITICO that suggested it would remove demands from earlier drafts, including
one requiring companies to share information on how they address systemic risks
stemming from their models.
Lawmakers and civil society groups say they fear the guidance has been weakened
to ensure that frontrunning AI companies sign up to the voluntary rules.
AMMUNITION
After ChatGPT landed in November 2022, lawmakers and EU countries added a part
to the EU’s newly agreed AI law aimed at reining in general-purpose AI models,
which can perform several tasks upon request. OpenAI’s GPT is an example, as is
xAI’s Grok.
That part of the law will take effect in three weeks’ time, on August 2. It
outlines a series of obligations for companies such as xAI, including how to
disclose the data used to train their models, how they comply with copyright law
and how they address various “systemic” risks.
But much depends on the voluntary compliance guidance that the Commission has
been developing for the past nine months.
On Wednesday, a group of five top lawmakers shared their “great concern” over
“the last-minute removal of key areas of the code of practice, such as public
transparency and the weakening of risk assessment and mitigation provisions.”
Those lawmakers see the Grok comments as further proof of the importance of
strong guidance, which has been heavily lobbied against by industry and the U.S.
administration.
“The Commission has to stand strongly against these practices under the AI Act,”
said Dutch Greens European Parliament lawmaker Kim van Sparrentak. But “they
seem to be letting Trump and his tech bro oligarchy lobby the AI rules to shreds
through the code of practice.”
One area of contention in the industry guidance relates directly to the Grok
fiasco.
In the latest drafts, the risk stemming from illegal content has been downgraded
to one that AI companies could potentially consider addressing, rather than one
they must.
That’s prompted fierce pushback. The industry code should offer “clear guidance
to ensure models are deployed responsibly and do not undermine democratic values
or fundamental values,” said Benifei.
The Commission’s tech chief Henna Virkkunen described work on the code of
practice as “well on track” in an interview with POLITICO last week.
RISKS
The Commission also pointed to its ongoing enforcement work under the Digital
Services Act, its landmark platform regulation, when asked about Grok’s
antisemitic outburst.
While there is no EU-wide definition of illegal content, many countries
criminalize hate speech, and antisemitic comments in particular.
Large-language models integrated into very large online platforms, which include
X, “may have to be considered in the risk assessments” that platforms must
complete and “fall within the DSA’s audit requirements,” Commission spokesperson
Thomas Regnier told POLITICO.
The problem is that the EU is yet to conclude any action against X through its
wide-reaching law.
The Commission launched a multi-company inquiry into generative AI on social
media platforms in January, focused on hallucinations, voter manipulation and
deepfakes.
In X’s latest risk assessment report, in which the platform outlines potential
threats to civic discourse and mitigation measures, the company did not identify
any risks related to AI and hate speech.
Neither X nor the Commission responded to POLITICO’s questions on whether a new
risk assessment for Grok has been filed after it was made available to all X
users in December.
French liberal MEP Sandro Gozi said he would ask the Commission whether the AI
Act and the DSA are enough to “prevent such practices” or whether new rules are
needed.
LONDON — It was never meant to be this hard.
In the wake of Labour’s decisive election victory in July, ministers in the
party’s tech team were determined to grip an issue they felt the previous
Conservative government had failed to address: how to protect copyright holders
from artificial intelligence companies’ voracious appetite for content to train
their AI models.
Instead, Labour’s handling of the issue has snowballed into a PR nightmare which
has transformed a largely uncontroversial data bill into a political football.
The Data (Use and Access) Bill has ricocheted between the Commons and the Lords
in an extraordinarily long bout of parliamentary ping-pong, with both Houses
digging in their heels and a frenzied lobbying battle on all sides.
As one tech industry insider put it: “Everyone has fought dirty and everyone is
going to walk away covered in shit.”
OPTING OUT
Many in the creative sector, which has long viewed Labour as its natural ally in
Westminster, hoped the party’s July election victory would work in its favor.
In a manifesto for the creative sectors published while in opposition, the party
had vowed to “support, maintain, and promote the U.K.’s strong copyright
regime,” stating: “The success of British creative industries to date is thanks
in part to our copyright framework.”
Instead, just five months later, creatives’ worst fears were realized when
ministers proposed allowing AI developers to scrape copyrighted content freely
unless artists, publishers and creators “opt out” — putting the onus on
creatives to protect their work.
The ensuing backlash sparked broadsides from the likes of Paul McCartney and
Elton John and a rearguard effort by peers to enshrine protections for
creatives.
On Monday, peers in the House of Lords voted overwhelmingly to defy the Commons
for the fourth time by amending a data bill to enshrine protections for
creators, despite Department for Science, Innovation and Technology minister
Maggie Jones saying businesses want “certainty, not constitutional crises.”
Perhaps more troublingly for Labour, the rancor has fed a perception that the
party has allied itself too closely to foreign tech giants in its bid for
economic growth — a narrative that could make it more difficult for the
government to deliver its plans.
HOW DID WE GET HERE?
POLITICO spoke to several people familiar with discussions inside the government
to understand how a complex — if fiercely contested — debate over intellectual
property law became front page news. Many were granted anonymity to speak
freely.
They all agreed that significant missteps had turned a genuine attempt to
resolve the matter into a political nightmare.
Creative sector representatives pointed POLITICO to the outsized influence of
Matt Clifford, a prolific tech investor who was tapped by Technology Secretary
Peter Kyle to draft an “AI Opportunities Action Plan” within days of Labour
taking office.
Clifford had previously argued that the U.K. should reform its copyright laws to
attract AI investment in a report for the previous government on “Pro-innovation
regulation” authored with Patrick Vallance, now the U.K.’s science minister.
The Tony Blair Institute, which had helped shape the incoming Labour
government’s AI policy, also backed the idea.
But it was Chris Bryant, a joint minister across the technology and culture
departments, who was the driving force behind efforts to resolve a deadlock left
behind by the previous government, according to two people close to the process.
One creative sector lobbyist accused Bryant of “naiveté” for assuming the main
problem was that the “other lot were rubbish,” rather than acknowledging the
extent of the technical and political hurdles to a solution.
They also argued that Bryant’s role meant there was no distinct voice from the
culture department — which acted as a bulwark against reform in the previous
government — on what was ostensibly a shared policy.
Ministers and officials in the culture department were “slow to organize
themselves,” the person said, allowing the technology department to own the
issue.
IT’S A STITCH-UP
When POLITICO revealed in October that the government was planning to propose an
“opt out” model, it seemed that creatives had been outmanoeuvred.
A subsequent government consultation in December described an “opt out” system,
alongside increased transparency obligations on AI firms, as its “preferred
option.”
The consultation was “a pivotal opportunity to ensure that sustained growth and
innovation for the U.K.’s AI sector continues to benefit creators, businesses
and consumers alike,” Bryant said, adding: “We want to provide legal certainty
for all.”
For copyright holders, the suggestion that the law on AI training was unclear
undermined their efforts to extract potentially lucrative licensing deals from
AI firms, which partially relied on the threat of pursuing legal action.
But by stating a “preferred option” — and accepting all the recommendations in
Clifford’s AI plan, which called for copyright reform, one month later — the
government created a “target” for itself, a tech industry figure said.
The creative sector was able to portray itself as the victim of a stitch-up. In
a coordinated campaign that stretched from newspaper publishers to record
labels, trade bodies called on their A-list networks and sympathetic lawmakers
to concoct a steady stream of damning headlines for the government.
In May, Elton John labeled Kyle a “moron” on the BBC’s Sunday news show. (“My
family thought it was the best thing ever,” Kyle joked this week during an
appearance at SXSW in London.)
Briefings to the press from figures involved in the campaign highlighted
Clifford’s personal AI investments, as well as Kyle’s refusal to meet creatives
despite taking a bevy of meetings with Big Tech lobbyists. Kyle’s decision to
paint the sector as trying to “resist change” in an interview with the Financial
Times didn’t help.
Campaigners also accused Kyle of only seeking advice from a small circle of
advisers with strong views on AI. Kyle “refuses to go one inch” beyond what’s
required in order not to imperil investment from AI firms, one said.
Most importantly, the sector found a champion in Beeban Kidron, a former film
director and crossbench peer in the House of Lords. A formidable campaigner,
Kidron had previous form holding ministers’ feet to the fire over online safety.
In coordination with the wider campaign, Kidron tabled a series of amendments to
a data bill before parliament to push for immediate transparency duties on AI
firms that would force them to disclose how they train their AI models.
“What the government is doing is bad politics, bad economics and bad for the
culture and reputation of the U.K.,” Kidron said. “They will live to regret
their short-sighted awe and the failure to be adults in the room.”
FACE THE MUSIC
In private, ministers railed against what they felt was misleading and
inflammatory coverage of their genuine attempt to resolve a knotty issue.
But “there was no way we could have campaigned the way we did without the
‘preferred option,’” the creative sector lobbyist quoted at the top of the
article said.
Liberal Democrat peer Tim Clement-Jones, who has backed Kidron’s efforts, told
POLITICO the government “put the cart before the horse.” “It completely
destroyed trust,” he said.
In response to Kidron’s campaigning, ministers have promised to publish
technical reports on transparency, technical solutions to an “opt out,”
licensing, and other subjects within nine months. Cross-industry working groups
will be formed to weigh in on the questions and seek to cultivate consensus.
Most significantly, ministers now say they no longer have a “preferred option”
on the way forward, and insist the U.K.’s existing law is clear — though Kyle
maintains the U.K.’s laws are “not fit for purpose” in the AI era.
“We’re open-minded,” a DSIT official said.
In May, Kyle told MPs he “regrets” the timing and framing of the government’s
proposals, accepting they inflamed creatives and almost derailed the
government’s legislative agenda.
“We all should have done things differently,” a senior tech executive agreed.
WHERE NEXT?
The senior tech executive argued that an “opt out” with proportionate
transparency duties on AI firms ultimately remains the best way forward.
Getting there won’t be easy, however.
It is unclear if, and when, technology will emerge that could allow rights
holders to easily and effectively “opt out” of AI firms training new models on
the full range of media spread across the web.
And finding consensus on what constitutes adequate transparency — which
ministers say will be “the foundation” of any legislation that could emerge
within the next two years — will also be challenging.
Leading AI companies OpenAI and Google have already made it clear that they view
“disproportionate” transparency requirements as a threat to their business
models.
Tim Flagg, CEO of UKAI, which represents U.K. businesses adopting AI, said the
trade body “has been on a journey” over the issue.
After initially supporting liberalization of the law, he told POLITICO he now
believes the U.K. stands to benefit most by carving out a niche developing
smaller, more specialized AI systems using high quality, licensed content.
But others in the tech world have only hardened their position.
Some of the largest tech firms have responded to the creative sector’s campaign
by adopting even more extreme positions in a bid to “balance” the debate,
according to multiple people familiar with their thinking. It is understood that
the vast majority of the 11,500 responses submitted to the government’s
consultation are from creators opposed to the government’s plans.
Industry bodies including TechUK, which previously advocated for reforms
mirroring the government’s original “preferred option,” now describe it as a
reluctant “compromise.”
But rights holders may also struggle to sell any eventual compromise to artists
and creators after taking such an uncompromising public position, the tech
industry figure cited above said.
“What people [in the creative sector] are saying in private is very different to
what they are saying in public,” they said, noting that many of the groups
involved in the campaign had been in discussions with tech firms until relations
soured.
Ahead of a near-unprecedented fourth round of ping pong on the data bill, a
tearful Chris Bryant on Tuesday begged peers to allow the legislation to “run
its course,” saying he had “heard the concerns” and would address them in the
round in future legislation.
Lobbyists are rolling up their sleeves.
In a statement to POLITICO, a DSIT spokesperson said:
“We recognize how pressing these issues are and we truly want to solve them,
which is why we have committed to bring forward the publication of our report
exploring the breadth of issues raised in the AI and copyright debate, alongside
an economic impact assessment covering a range of possible options.
“We have also been clear the way to arrive at a solution is to ensure we are
meeting the needs of both sectors, rather than trying to force
through piecemeal changes to unrelated legislation which could quickly become
outdated.
“As you would rightly expect, we are taking the time to consider the 11,500
responses to our consultation, but no changes to copyright law will be
considered unless we’re completely satisfied they work for creators.”
THE ENIGMA OF ANDRIY YERMAK
Zelenskyy’s chief of staff is accused of being a ruthless political operator.
What are his ambitions?
By JAMIE DETTMER
Photo-illustrations by Katy Williamson for POLITICO
Everybody remembers Ukrainian President Volodymyr Zelenskyy being roasted by
Donald Trump’s MAGA loyalists for wearing his combat gear in the now-notorious
Oval Office meeting in February.
But most have probably already forgotten the immaculate suit of the man sitting
on the sofa to Zelenskyy’s right, his imposing Chief of Staff Andriy Yermak. Was
Yermak thinking his boss should have followed his sartorial lead? Was he
thinking he could have kept his temper and done a better job on the biggest
political stage?
If that’s what he was thinking, he’d never admit it.
In fact, he told POLITICO it was his boss who told him to wear the suit.
“That was probably my first suit since the beginning of the full-scale invasion.
It even felt a little unfamiliar, but I’m getting used to it,” he wrote in an
email exchange. And what was on his mind during the brutal spat? The priority
was simply to convince the Americans that it was in their strategic interests to
help stop the Russians, Yermak explained. Not the slightest hint he was ready to
grab the wheel as negotiations skidded across ice.
The response is typical of Yermak. The once little-known lawyer and B-movie
producer — now in the thick of triangular peace diplomacy with the Americans and
Russians — is always reverently loyal to his boss. In an interview with POLITICO
last year, he referred to him glowingly as the “president of the people.” What
else could he say? Yermak has ridden Zelenskyy’s coattails to become the
second-most-powerful figure in Ukraine — even a co-equal.
INTO THE LIMELIGHT
Yermak’s profile is only set to grow in the coming months. He’s come a long way
since producing martial arts movie “The Fight Rules” and smuggling thriller “The
Line.”
He now spearheads Ukrainian diplomacy, and is trying to find ways to
keep the thin-skinned Trump supporters sweet, partly with sharp tailoring.
In talks in Istanbul in May, Yermak led the charge talking with allies and
stayed out of the direct discussions with the Russians. Russia only sent a
low-level delegation, but the 53-year-old consigliere from Kyiv coordinated
positions with the U.S., U.K., France and Germany, and took the lead insisting
an unconditional ceasefire must be the priority.
This week he travels to Washington seeking to exploit Trump’s frustration with
Vladimir Putin over the recent massive air raids on Ukrainian cities, including
Kyiv. Trump expressed his irritation with the Russian leader last week over the
air strikes, accusing him of having gone crazy and threatening to impose more
economic sanctions on Russia. “Sanctions are the main priority” for Yermak on
this trip, one of his aides told POLITICO.
The question as he comes to the fore is: What’s his game? As he shuttles to
Western capitals to press Ukraine’s case, some critics hazard he is seizing the
opportunity to burnish his own credentials for the future.
This he firmly denies. “I entered politics together with Volodymyr Zelenskyy —
and I will leave together with him,” he told POLITICO. “My task is to help him
fulfill his responsibilities as the president of Ukraine. For me, this is not
about positions or political careers.”
For now, at least, Yermak is Zelenskyy’s trusted right-hand man, and not a
successor. And even fierce opponents concede the steely former attorney is a
good fit as Ukraine’s interlocutor with Trump’s transactional entourage.
“He reads Trump’s people better than Zelenskyy does,” said opposition lawmaker
Mykola Kniazhytskyi, a member of former President Petro Poroshenko’s party.
Normally Kniazhytskyi hasn’t a good word to say about Yermak, but he noted
“Zelenskyy hasn’t adapted enough” to the massive change in politics in
Washington, while Yermak seems more alive to it.
“Zelenskyy’s mindset hasn’t altered and he still hasn’t understood the rhetoric
that was effective when Joe Biden was in the White House isn’t of much use now,”
he added.
WILL THE REAL YERMAK PLEASE STAND UP?
But who is he really? Who is the man that Zelenskyy will depend on to sit
opposite the unsentimental real estate investor and lawyer Steve Witkoff,
Trump’s special envoy, to try to manage an administration that sees Ukraine as a
nuisance in a bigger game of rapprochement with the Russians?
Yermak has been accused of being everything from a Russian spy — a charge linked
to lingering questions about whether his father was a Russian intelligence
officer — to a dangerous Svengali or a Rasputin who has Zelenskyy under his
spell. The Rasputin comparison is wide of the mark. The bachelor Yermak may be
physically imposing like Rasputin but he’s a teetotaler and steely pragmatist
rather than debauched mystic.
The skill Yermak probably does share with the faith healer who bewitched the
imperial family of Nicholas II, Russia’s last czar, is as a shrewd reader of
psychology. That’s something highlighted by the half dozen former Ukrainian
ministers and aides POLITICO spoke with about Yermak. Aside from one, they all
spoke on the condition of anonymity, fearful of crossing Zelenskyy’s right-hand
man. There’s certainly a pervasive sense in Kyiv that you have to tread carefully
when dealing with him, or even talking about him.
“Yermak is a brilliant psychologist. He’s able to read Zelenskyy and anticipate
what he wants,” said a former minister, who clashed bitterly with Yermak and was
increasingly squeezed out by him before being forced to resign amid threats.
“He’s careful to offer ready-made solutions to Zelenskyy, who hates being drawn
into details, and he doesn’t bring him problems,” he added.
Yermak is frequently described by Ukrainian commentators as the producer in the
ruling duopoly, with the former TV comic Zelenskyy as the performing star. The
pair’s tight-knit relationship has evolved since Yermak’s initial appointment to
the president’s office in a junior role five years ago.
His rise was meteoric.
RISING STAR
When appointed as an aide shortly after the 2019 election, none of the key
figures in Zelenskyy’s circle had much of an inkling about Yermak. They had all
enjoyed long friendships with Zelenskyy and helped him found his production
company Kvartal 95 Studio and develop his “Servant of the People” TV series, a
show that shot him to fame and eventually propelled him into the presidency.
They included Serhiy Trofimov, Serhiy Shefir, Kyrylo Tymoshenko, Yuriy Kostiuk
and Ivan Bakanov, who was quickly moved to head the security service.
Now they’re all gone, and Yermak’s critics accuse the chief of staff of
engineering the purges and reshuffles that set them on their way. Shefir, who’d
been a friend of Zelenskyy’s for more than three decades, only knew he was out
on arriving at the presidential building in Bankova Street in Kyiv to find
someone else ensconced in his office with his effects neatly packed in a box.
“Yermak stuck to Zelenskyy the moment he arrived at Bankova Street,” said Yulia
Mendel, a Ukrainian journalist who served as Zelenskyy’s press secretary from
2019 to 2021. And he quickly outmaneuvered Andriy Bohdan, Zelenskyy’s first head
of office, a former personal lawyer for Ihor Kolomoisky, an oligarch who’d
backed Zelenskyy’s presidential run and would later be dropped and arrested on
corruption charges.
“They were always in the underground gym — that’s where they really bonded as
gym buddies. And he stole a march on Bohdan by arranging the first big POW swap
with the Russians,” she said. That was in September 2019. Yermak’s star rose
quickly after he stepped off the plane at Kyiv’s Boryspil airport with freed
Ukrainian sailors who had been held captive by Russia.
Five months later, he was catapulted to head of office and accrued ever more
power, clearing out the cabinet and the office of the president of anyone
reckoned a political threat or who sought to act autonomously, the former
ministers and officials said.
Outshining either Zelenskyy or Yermak in media coverage also seemed to lead to
unceremonious ejection.
Yermak and Zelenskyy are well-suited, coming from similar backgrounds. They’re
scions of middle-class families who valued education and hard work. Both trained
as lawyers — in Yermak’s case at Ukraine’s largest university, Taras Shevchenko
National University of Kyiv — and both moved over to the entertainment industry.
Yermak founded a media company and produced movies, while also working as a
copyright lawyer.
They got to know each other around 2011, when Zelenskyy became the general
producer of a TV channel and Yermak did some legal work for him. Subsequently,
Yermak worked on Zelenskyy’s 2019 Servant of the People election campaign but
not in a high-profile capacity.
Some of Yermak’s biography is glossed over, according to Mendel. After
graduating in the rough-and-tumble of the post-Soviet 1990s, he grabbed gigs
where he could — at one time working for a notorious nightclub-cum-discotheque
in the Ukrainian capital, the first to appear after the collapse of communism.
The club attracted gangsters as well as prominent pro-Russian politicians. He
later acted as a fixer for the Sanahunt luxury clothing store, helping to import
exclusive lines from top fashion houses in France and Italy for the store’s
clientele of oligarchs and politicians. “That role was extremely important in
shaping Yermak’s network of connections,” said Mendel.
Being highly attuned to power and seizing opportunities have served Yermak well
despite his unprepossessing start. In Bankova Street he’s been in his element,
former ministers said.
“He has a mental connection with Zelenskyy,” said another former longtime friend
of the Ukrainian leader, who also held a top government job until he fell
foul of Yermak.
“Yermak made sure he was present at every meeting I had with Zelenskyy,
listening, interjecting. Or Yermak would just sit there and scroll through his
phone and show Zelenskyy something and crack a private joke with raised
eyebrows. In time I just stopped going to meetings and communicated just by
email,” he told POLITICO.
CENTER OF POWER
Yermak quickly expanded his role and surrounded himself with people beholden to
him, among them a coterie of unpaid advisers who owe allegiance solely to him.
Some have been suspects in corruption cases, prematurely closed down on the
orders of Oleh Tatarov, a key deputy in the presidential administration who
reports to Yermak. Tatarov, a Ukrainian lawyer, worked in the interior ministry
during the regime of Viktor Yanukovych but was dismissed after the Maidan
uprising only to reappear in Bankova Street in 2022.
Few top officials have managed to cling on once Yermak wanted them gone. One
standout has been Kyrylo Budanov, head of Ukraine’s military intelligence, who’s
maintained independent access to Zelenskyy, say Bankova Street insiders, to the
frustration of Yermak.
The purges and reshuffles have done nothing to ease long-standing worries about
Zelenskyy’s highly personalized and, according to some, autocratic way of
governing. Zelenskyy has little time for formal structures or institutions.
Everything is highly personal, improvised and often impetuous.
“Zelenskyy and Yermak have undermined institutions, and they’ve developed
governance based on people they trust,” Kniazhytskyi said.
The departures of some highly gifted figures from the cabinet or the military,
including Dmytro Kuleba as foreign minister and Gen. Valery Zaluzhny, the army
commander who clashed with Zelenskyy over war strategy, have prompted domestic
alarm. The monopolization of power has triggered quiet dismay among Western
allies, who are reluctant to issue public criticism for fear of handing
propaganda openings to Moscow.
“We don’t have a proper functioning Cabinet of ministers. Instead, we have some
quasi-Cabinet of ministers headed by Yermak, who controls access to the
president’s agenda and to the president himself. Then you have all these strange
advisers, who are not public officials, who are not on the state payroll, and
who don’t have to submit asset declarations,” said Daria Kaleniuk, executive
director of the Anti-Corruption Action Center NGO.
“Oligarchs are no longer Ukraine’s main domestic problem. Even corruption isn’t
the main problem. The main problem is the system of governance and how power has
been monopolized,” Kaleniuk added in a recent interview.
Yermak gave short shrift to the criticism.
“These accusations are not true,” he said. “My task is to ensure the effective
functioning of the presidential office and to support the head of state in
fulfilling his constitutional powers. This is not a separate vertical of power
but a working tool of the president. Especially during wartime, when decisions
must be made quickly and clearly,” he told POLITICO.
“The president has the right to rely on those he trusts and on those who are
capable of working without days off and without self-pity. I am grateful for
this trust, and I do everything I can to ensure that the team functions as a
single mechanism under extremely difficult circumstances,” he added.
He went on: “Is it easy to build an effective system? No. Do we make mistakes?
Of course we do. Because in the end, we are human. We acknowledge that and
respond accordingly. As for the myths about ‘total control’ — they are built on
simplifications. The state is a complex structure where powers are always
distributed. Even with the greatest desire, it is physically impossible to
centralize everything. Ukraine has no historical tradition of authoritarian rule
— society simply would not allow it. And President Zelenskyy understands this
very well.”
For all the criticism, Yermak has notched up some considerable successes for
Ukraine during the war, and is credited, among other things, with being the
driving force in persuading allies to adopt sanctions on Russia. Zelenskyy’s
supporters say in wartime there’s no option but to centralize power to get quick
decision-making.
VAULTING AMBITION?
Is the producer now thinking about his turn as the star? It wouldn’t be
surprising if his thoughts are turning to a political life independent of
Zelenskyy. A former minister has no doubt that an operator as shrewd as Yermak
must be thinking about a post-Zelenskyy future. “He has this exceptionally high
ambition and the only thing he really craves for is recognition,” the former
minister said.
“Andriy can be charming, but he’s guided by an overwhelming drive to have his
greatness recognized. He’ll tell you that for him public recognition is nothing.
That he only cares for the country. And that even if his name disappeared it
wouldn’t matter. But this is all bullshit. He almost suffers physically if he
gets sidelined and his name has to be on everything.”
“Right now, any conversation about the future after the war is inappropriate,”
Yermak said. “As long as the fight continues, talking about personal political
plans is simply irresponsible. All resources, time, and efforts must be focused
on one thing — stopping Russian aggression. If we don’t do that, no political
scenario will matter.”
Whether he ever could succeed Zelenskyy, though, strikes Ruslan Bortnik, a
political scientist and director of the Ukrainian Institute of Politics, as
doubtful. “Yermak hasn’t any political future without Zelenskyy. He’s not
popular and has no real support from the elites. He’s a temporary person,” he
said.
Maybe so, but in the meantime much of Ukraine’s future is in his hands.
LONDON — The executive at the helm of one of the world’s leading AI firms treated
the U.K.’s technology secretary to a meal worth just £30 last month.
U.K. Technology Secretary Peter Kyle received the hospitality from OpenAI CEO
Sam Altman — whose net worth is listed by Forbes as $1.7 billion — on April 6,
according to newly released transparency data. The data doesn’t show where the
pair went for their meal.
The revelation comes after Kyle has faced accusations of being too cozy with
U.S. Big Tech firms in his bid to position the U.K. as a leading AI superpower.
Just two weeks before the meeting with Altman, Kyle told an AI industry
conference in California that the U.K. would be “an agile, proactive partner” to
tech firms, and invited them to train and deploy their technology in the
country.
But Kyle admitted “regret” last week over the U.K. government’s messaging around
AI and copyright. The U.K. government described proposals to require copyright
holders to “opt out” of AI model training as its “preferred option,” causing
uproar among artists and creative sector groups.
OpenAI, meanwhile, has argued an “opt out” model doesn’t go far enough to
encourage AI investment in the U.K., and has also pushed back against attempts
to place transparency duties on AI firms.
In November Kyle argued the U.K. should exercise a “sense of humility” and use
“statecraft” when dealing with U.S. Big Tech firms.
The Department for Science, Innovation and Technology and OpenAI have been
contacted for comment.
BRUSSELS — The European Union has missed a key milestone in its effort to rein
in the riskiest artificial intelligence models amid heavy lobbying from the U.S.
government.
After ChatGPT stunned the world in November 2022, EU legislators quickly
realized these new AI models needed tailor-made rules.
But two and a half years later, an attempt to draft a set of rules for companies
to sign on to has become the subject of an epic lobbying fight involving the
U.S. administration.
Now the European Commission has blown past a legal deadline of May 2 to
deliver.
Pressure has been building in recent weeks: In a letter to the Commission in
late April, obtained by POLITICO, the U.S. government said the draft rules had
“flaws” and echoed many concerns aired in recent months by U.S. tech companies
and lobbyists.
It’s the latest pushback from the Trump administration against the EU’s bid to
become a super tech regulator, and follows attacks on the EU’s social media law
and digital competition rules.
The delay also exposes the reality that the rules are effectively a stopgap
measure after EU legislators failed to settle some of the thorniest topics when
they negotiated the binding AI Act in early 2024. The rules are voluntary,
leading to a complicated dance between the EU and industry to land on something
meaningful that companies will actually implement.
POLITICO walks you through how a technical process turned into a messy
geopolitical lobbying fight — and where it goes from here.
1. WHAT IS THE EU TRYING TO DO?
Brussels is trying to put guardrails around the most advanced AI models such as
ChatGPT and Gemini. Since September, a group of 13 academics tasked by the
Commission has been working on a “code of practice” for models that can perform
a “wide range of distinct tasks.”
That initiative was inspired by ChatGPT’s rise to fame in late 2022. The instant
popularity of a chatbot that could perform several tasks upon request, such as
generating text, code and now also images and video, upended the bloc’s drafting
of the AI Act.
Generative AI wasn’t a thing when the Commission first presented its AI Act
proposal in 2021, which left regulators scrambling. “People were saying: we will
not go through five more years to wait for a regulation, so let’s try to force
generative AI into this Act,” Audrey Herblin-Stoop, a top lobbyist at French
OpenAI rival Mistral, recalled at a panel last week.
EU legislators decided to include specific obligations in the act on
“general-purpose AI,” a catch-all term that includes generative AI models like
OpenAI’s GPT or Google’s Gemini.
The final text left it up to “codes of practice” to put meat on the bones.
2. WHAT IS IN THE CODE THAT WAS DUE MAY 2?
The 13 experts, including heavy hitters like Yoshua Bengio, a French Canadian
computer scientist nicknamed the “godfather of AI,” and former European
Parliament lawmaker Marietje Schaake, have worked on several thorny topics.
According to the latest draft, signatories would commit to disclosing relevant
information about their models to authorities and customers, including the data
being used to train them, and to drawing up a policy to comply with copyright
rules.
Companies that develop a model that carries “systemic risks” also face a series
of obligations to mitigate those risks.
The range of topics being discussed has drawn immense interest: Around 1,000
interested parties ranging from EU countries, lawmakers, leading AI companies,
rightsholders and media to digital rights groups have weighed in on three
different drafts.
3. WHAT ARE THE OBJECTIONS?
U.S. Big Tech companies, including Meta and Google, and their lobby group
representatives have repeatedly warned that the code goes beyond what was agreed
on in the AI Act.
Just last week, Microsoft President Brad Smith said “the code can be helpful”
but warned that “if too many things [are] competing with each other … it’s not
necessarily helpful.”
The companies also claim this is the reason the deadline was missed.
“Months [were] lost to debates that went beyond the AI Act’s agreed scope,
including [a] proposal explicitly rejected by EU legislators,” Boniface de
Champris, senior policy manager at Big Tech lobby CCIA, told POLITICO.
Digital rights campaigners, copyright holders and lawmakers haven’t been
impressed with Big Tech’s criticism.
“We have to ensure that the code of practice is not designed primarily to make
AI model providers happy,” Italian Social Democrat lawmaker Brando Benifei, the
Parliament’s AI Act lead negotiator, said in an interview — a clear hint that
the Parliament doesn’t want a watered-down code.
Benifei was among a group of lawmakers who resisted a decision in March to
remove “large-scale discrimination” from a list of risks in the code that AI
companies must manage.
There have also been allegations of unfair lobbying tactics by U.S. Big Tech.
Last week, two non-profit groups complained that “Big Tech enjoyed structural
advantages.”
“A staggering amount of corporate lobbying is attempting to weaken not just the
EU’s AI laws but also DMA and DSA,” said Ella Jakubowska, head of policy at
European Digital Rights.
Tech lobby CCIA resisted that criticism, saying AI model providers are “the
primary subjects of the code” but make up only 5 percent of the 1,000 interest
groups involved in the drafting.
4. WHAT HAS THE U.S. GOVERNMENT SAID?
The U.S. administration has been less public in its pushback against the EU’s AI
rules than in its attacks on the EU’s social media law (the Digital Services
Act) and the EU’s digital competition rules (the Digital Markets Act).
Behind the scenes, the pushback has been forceful. The U.S. Mission to the EU
filed feedback on the third draft of the code of practice in a letter to the
European Commission echoing many of the concerns already aired by U.S. tech
executives or lobby groups.
“Several elements in the code are not found in the AI Act,” the letter read.
The mission piggybacked on the European Commission’s own pivot toward focusing
on AI innovation, and said that the code must be improved “to better enable AI
innovation.”
5. HOW WILL THIS PLAY OUT?
Ultimately, the success of the effort hinges on whether leading AI companies
such as U.S.-based Meta, Google, OpenAI, Anthropic and French Mistral sign on to
it.
That means the Commission needs to figure out how to publish something that
meets its intentions while also being sufficiently palatable to Big Tech and the
Trump administration.
The Commission has repeatedly stressed that the code is a voluntary tool for
companies to ensure they comply — but more recently warned that life could be
more complicated for companies that don’t sign it.
Those who do sign the code will “benefit from increased trust” by the
Commission’s AI Office and “from reduced administrative burden,” said European
Commission spokesperson Thomas Regnier.
Benifei too said that it’s “our challenge to make sure that the obligations
behind the code are somehow applicable to those that don’t sign the code.”
Under the timelines set out in the AI Act, providers of the most complex AI
models will have to abide by the new obligations, either through the code or
otherwise, by Aug. 2.
Europe’s moviemakers are bracing to be the next industry embroiled in Donald
Trump’s trade war.
The U.S. president pledged Sunday to slap a 100 percent “tariff” on movies
“produced in Foreign Lands,” after governments worldwide have enticed production
teams with lucrative tax breaks and lower labor costs.
“WE WANT MOVIES MADE IN AMERICA, AGAIN!” Trump said in a Truth Social post,
claiming to have instructed the U.S. Department of Commerce and the Trade
Representative to crack down on this “National Security threat” and
“propaganda.”
The administration has yet to explain how the tariff would work or what it would
exactly target.
“Commerce is figuring it out,” said a White House official, granted anonymity to
share details about the internal process. “Maybe like, the rights to movies or
something,” the official said, adding that a study would be launched.
Experts in the U.S. have pointed out that movies are exempt from tariff orders,
per the so-called Berman Amendment from 1988.
Across the Atlantic, confusion — if not surprise — reigned.
“We felt that [cinema] could become a battlefield [amid the trade war]. We’re
entering the unpredictable,” Pascal Rogard, the president of the French authors’
society SACD, said. “This is contrary to international commitments,” he added.
Although the criteria on what constitutes a “foreign” production remain unclear
and contentious, “the political gesture is in line with what we expected to
happen,” said a French industry insider who, like others contacted by POLITICO for
this story, was granted anonymity to speak freely amid the uncertainty
surrounding Trump’s plan.
“Everyone’s trying to make sense of what it means, or what it might mean,” said
another.
Others warned of the damage the move could do to the sector.
“Ousting the European film industry from the U.S. market is a harmful move
toward cultural essentialism,” said Nela Riehl, a German lawmaker from the
Greens chairing the European Parliament’s culture committee. “Protectionism in
this sector will only encourage other regions to retaliate, like we have seen it
from China already,” she warned.
Laurence Farreng, a member of the European Parliament from French President
Emmanuel Macron’s party, said “imposing duties will penalize American industry
in the end.”
A group of EU lawmakers from the Parliament’s culture committee will visit Los
Angeles at the end of May to meet U.S. movie producers, she said.
According to Trump, the tariffs are to stop Hollywood from dying a “very fast
death.” Los Angeles has seen feature movie shoot days plummet — from 3,901 in
2017 to just 2,403 in 2024, a 38 percent drop highlighting its dwindling role on
the global scene.
Trump’s bombshell memo in February already slammed EU media rules that “require
American streaming services to fund local productions,” in a clear reference to
the bloc’s audiovisual media services directive allowing national governments to
force Netflix, Amazon Prime, Disney+ and others to invest in European works.
Whatever form it takes, the new tariff might be added to the list of bargaining
chips in the unfolding trade standoff between Washington and Brussels.
As the Cannes Film Festival kicks off in France next week, the controversial
move is likely to take center stage, with French moviemakers from the
association ARP preparing to speak out.
“I will be present at Cannes and I believe that this subject will keep the
producers very busy. It’ll be very interesting to hear whether they can do
without a market like the European one, and no doubt they’ll get things moving,
just as manufacturers in other American sectors have done,” Farreng said.
“For now, this is just an announcement by Trump,” Riehl said. “The EU is already
working toward more opportunities and visibility for European film in Europe and
globally. This approach of ‘more Europe’ will now be the way to go.”