When the Franco-German summit concluded in Berlin, Europe’s leaders issued a
declaration with a clear ambition: strengthen Europe’s digital sovereignty in an
open, collaborative way. European Commission President Ursula von der Leyen’s
call for “Europe’s Independence Moment” captures the urgency, but independence
isn’t declared — it’s designed.
The pandemic exposed this truth. When Covid-19 struck, Europe initially
scrambled for vaccines and facemasks, hampered by fragmented responses and
overreliance on a few external suppliers. That vulnerability must never be
repeated.
True sovereignty rests on three pillars: diversity, resilience and autonomy.
Diversity doesn’t mean pulling every factory back to Europe or building walls
around markets. Many industries depend on expertise and resources beyond our
borders.
The answer is optionality: never putting all our eggs in one basket.
Europe must enable choice and work with trusted partners to build capabilities.
This risk-based approach ensures we’re not hostage to single suppliers or
overexposed to nations that don’t share our values.
Look at the energy crisis after Russia’s illegal invasion of Ukraine. Europe’s
heavy reliance on Russian oil and gas left economies vulnerable. The solution
wasn’t isolation but diversification: boosting domestic production from
alternative energy sources while sourcing from multiple markets.
Optionality is power. It lets Europe pivot when shocks hit, whether in energy,
technology, or raw materials.
Resilience is the art of prediction. Every system inevitably has
vulnerabilities. The key is pre-empting, planning, testing and knowing how to
recover quickly.
Just as banks undergo stress tests, Europe needs similar rigor across physical
and digital infrastructure. That also means promoting interoperability between
networks, redundant connectivity links (including space and subsea cables),
stockpiling critical components, and contingency plans. Resilience isn’t
theoretical. It’s operational readiness.
Finally, autonomy: Europe must exercise authority through robust frameworks,
such as authorization schemes, local licensing and governance rooted in EU law.
The question is how and where to apply this control. On sensitive data, for
example, sovereignty means ensuring it’s held in Europe under European
jurisdiction, without replacing every underlying technology component.
Sovereign solutions shouldn’t shut out global players. Instead, they should
guarantee that critical decisions and compliance remain under European
authority. Autonomy is empowerment, limiting external interference or denial of
service while keeping systems secure and accountable.
But let’s be clear: Europe cannot replicate world-leading technologies,
platforms or critical components overnight. While we have the talent, innovation
and leading industries, Europe has fallen significantly behind in a range of key
emerging technologies.
For example, building fully European alternatives in cloud and AI would take
decades and billions of euros, and even then, we’d struggle to match Silicon
Valley or Shenzhen.
Worse, turning inward with protectionist policies would only weaken the
foundations that we now seek to strengthen. “Old wines in new bottles” — import
substitution, isolationism, picking winners — won’t deliver competitiveness or
security.
Contrast that with the much-debated US Inflation Reduction Act. Its incentives
and subsidies were open to EU companies, provided they invested locally,
developed local talent and built within the US market.
It’s not about flags, it’s about pragmatism: attracting global investments,
creating jobs and driving innovation-led growth.
So what’s the practical path? Europe must embrace ‘sovereignty done right’,
weaving diversity, resilience and autonomy into the fabric of its policies. That
means risk-based safeguards, strategic partnerships and investment in European
capabilities while staying open to global innovation.
Trusted European operators can play a key role: managing encryption, access
control and critical operations within EU jurisdiction, while enabling managed
access to global technologies. To avoid ‘sovereignty washing’, eligibility
should be based on rigorous, transparent assessments, not blanket bans.
The Berlin summit’s new working group should start with a common EU-wide
framework defining levels of data, operational and technological sovereignty.
Providers claiming sovereign services can use this framework to transparently
demonstrate which levels they meet.
Europe’s sovereignty will not come from closing doors. Sovereignty done right
will come from opening the right ones, on Europe’s terms. Independence should be
dynamic, not defensive — empowering innovation, securing prosperity and
protecting freedoms.
That’s how Europe can build resilience, competitiveness and true strategic
autonomy in a vibrant global digital ecosystem.
The European Union’s law enforcement agency wants to speed up how it gets its
hands on artificial intelligence tools to fight serious crime, a top official
said.
Criminals are having “the time of their life” with “their malicious deployment
of AI,” but police authorities at the bloc’s Europol agency are weighed down by
legal checks when trying to use the new technology, Deputy Executive Director
Jürgen Ebner told POLITICO.
Authorities have to run through data protection and fundamental rights
assessments under EU law. Those checks can delay the use of AI by up to eight
months, Ebner said. Speeding up the process could make the difference in time
sensitive situations where there is a “threat to life,” he added.
Europe’s police agency has built out its tech capabilities in past years,
ranging from big data crunching to decrypting communication between criminals.
Authorities are keen to fight fire with fire in a world where AI is rapidly
boosting cybercrime. But academics and activists have repeatedly voiced concerns
about giving authorities free rein to use AI tech without guardrails.
European Commission President Ursula von der Leyen has vowed to more than double
Europol’s staff and turn it into a powerhouse to fight criminal groups
“navigating constantly between the physical and digital worlds.” The
Commission’s latest work program said this will come in the form of a
legislative proposal to strengthen Europol in the second quarter of 2026.
Speaking in Malta at a recent gathering of data protection specialists from
across Europe’s police forces, Ebner said it is an “absolute essential” for
there to be a fast-tracked procedure to allow law enforcement to deploy AI tools
in “emergency” situations without having to follow a “very complex compliance
procedure.”
Assessing data protection and fundamental rights impacts of an AI tool is
required under the EU’s General Data Protection Regulation (GDPR) and AI Act.
Ebner said these processes can take six to eight months.
The top cop clarified that a faster emergency process would not bypass the AI
Act’s red lines around profiling or live facial recognition.
Law enforcement authorities already have several exemptions under the AI Act.
Under the rules, the use of real-time facial recognition in public spaces is
prohibited for law enforcers, but EU countries can still permit exceptions,
especially for the most serious crimes.
Lawmakers and digital rights groups have expressed concerns about these
carve-outs, which were secured by EU countries during the law’s negotiation.
DIGITAL POLICING POWERS
Ebner, who oversees governance matters at Europol, said “almost all
investigations” now have an online dimension.
The investments in tech and innovation to keep pace with criminals are putting a
“massive burden on law enforcement agencies,” he said.
The Europol official has been in discussions with Europe’s police chiefs about
the EU agency’s upcoming expansion. He said they “would like to see Europol
doing more in the innovation field, in technology, in co-operation with private
parties.”
“Artificial intelligence is extremely costly. Legal decryption platforms are
costly. The same is to be foreseen already for quantum computing,” Ebner said.
Europol can help bolster Europe’s digital defenses, for instance by seconding
analysts with technological expertise to national police investigations, he
said.
Europol’s central mission has been to help national police investigate
cross-border serious crimes through information sharing. But EU countries have
previously been reluctant to cede too much actual policing power to the EU level
authority.
Taking control of law enforcement away from EU countries is “out of the scope”
of any discussions about strengthening Europol, Ebner said.
“We don’t think it’s necessary that Europol should have the power to arrest
people and to do house searches. That makes no sense, that [has] no added
value,” he said.
Pieter Haeck contributed reporting.
BRUSSELS — The European Union is trying to stop space from turning into a
junkyard.
The European Commission on Wednesday proposed a new Space Act that seeks to dial
up regulatory oversight of satellite operators — including requiring them to
tackle their impact on space debris and pollution, or face significant fines.
There are more than 10,000 satellites now in orbit and growing space junk to
match. In recent years, more companies — most notably Elon Musk’s Starlink —
have ventured into low-Earth orbit, from where stronger telecommunication
connections can be established but which requires more satellites to ensure full
coverage.
“Space is congested and contested,” a Commission official said ahead of
Wednesday’s proposal in a briefing with reporters. The official was granted
anonymity to disclose details ahead of the formal presentation.
The EU executive wants to set up a database to track objects circulating in
space; make authorization processes clearer to help companies launch satellites
and provide services in Europe; and force national governments to give
regulators oversight powers.
The Space Act proposal would also require space companies to have launch safety
and end-of-life disposal plans, take extra steps to limit space debris, light
and radio pollution, and calculate the environmental footprint of their
operations.
Mega and giga constellations, which are networks of at least 100 and 1,000
spacecraft, respectively, face extra rules to coordinate orbit traffic and avoid
collisions.
“It’s starting to look like a jungle up there. We need to intervene,” said
French liberal lawmaker Christophe Grudler. “Setting traffic rules for
satellites might not sound as sexy as sending people to Mars. But that’s real,
that’s now and that has an impact on our daily lives.”
Under the proposal, operators would also have to run cybersecurity risk
assessments and introduce cryptographic and encryption-level protections, and
would be encouraged to share more information with corporate rivals to fend off
cyberattacks.
Breaches of the rules could result in fines of up to twice the profits gained or
losses avoided as a result of the infringement, or, where these amounts cannot
be determined, up to 2 percent of total worldwide annual turnover.
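As a rough illustration of how that cap would work, here is a sketch in Python. The numbers are hypothetical and this is only a reading of the stated rule, not a legal formula:

```python
from typing import Optional

def max_space_act_fine(benefit_from_breach: Optional[float],
                       worldwide_annual_turnover: float) -> float:
    """Cap as described in the proposal: up to twice the profits gained or
    losses avoided through the infringement, or, where that amount cannot
    be determined (None here), up to 2% of total worldwide annual turnover."""
    if benefit_from_breach is not None:
        return 2.0 * benefit_from_breach
    return 0.02 * worldwide_annual_turnover

# A hypothetical operator that avoided 3 million euros of debris-mitigation costs:
print(max_space_act_fine(3_000_000, 500_000_000))  # 6000000.0
# When the benefit cannot be quantified, the turnover cap applies instead:
print(max_space_act_fine(None, 500_000_000))       # 10000000.0
```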
Satellites exclusively used for defense or national security are excluded from
the law.
THE MUSK PROBLEM
The Space Act proposal comes as the EU increasingly sees a homegrown satellite
industry as crucial to its connectivity, defense and sovereignty ambitions.
Musk’s dominance in the field has become a clear vulnerability for Europe. His
Starlink network has showcased at scale how thousands of satellites can reach
underserved areas and fix internet voids, but it has also revealed his hold over
Ukraine’s wartime communication, highlighting the danger of relying on a single,
foreign player.
Top lawmakers in the European Parliament, including Grudler, earlier this month
advocated for a “clearly ring-fenced budget of at least €60 billion” devoted to
space policy, while French President Emmanuel Macron last week called for the
next EU budget to earmark more money to boost Europe’s space sector.
That’s crucial “if we want to stay in the game of the great international
powers,” he said shortly after the French government announced it would ramp up
its stake in Eutelsat, a Franco-British satellite company and Starlink rival.
The Space Act proposal introduces additional requirements for players from
outside the EU that operate in the European market, unless the Commission deems
their home country to have equivalent oversight, which could be the case for
the U.S. They would also have to appoint a legal representative in the bloc.
The proposal is set to apply from 2030 and will now head to the Council of the
EU, where governments hash out their position, and the European Parliament for
negotiations on the final law.
Aude van den Hove contributed reporting.
A British man has been indicted in the United States on charges of attempting to
pass “sensitive American technology” to China.
John Miller, 63, was indicted by a U.S. federal grand jury on Friday along with
Chinese national Cui Guanghai for allegedly trying to export a device used for
encryption and decryption to Beijing, according to a statement from the United
States Attorney’s Office for the Central District of California.
The two men, who were arrested in Serbia, discussed smuggling the device to
China via Hong Kong hidden inside a blender, according to the statement.
The pair “solicited the procurement of U.S. defense articles, including
missiles, air defense radar, drones and cryptographic devices with associated
crypto ignition keys for unlawful export from the United States to the People’s
Republic of China,” the U.S. Attorney’s Office said.
The case comes amid heightened tensions between the U.S. and China, as the two
engage in an escalating trade war. On Sunday, Beijing warned Washington to not
“play with fire” after U.S. defense chief Pete Hegseth said China could be
gearing up to invade Taiwan.
Both men are also accused of attempting to “harass” a Chinese-American artist
and known critic of Beijing by trying to install a tracking device on the
victim’s car and slashing his tires. The U.S. is seeking to extradite the pair
from Serbia.
The two men could face up to 20 years in prison under the U.S. Arms Export
Control Act if found guilty, and 10 years for smuggling.
U.S. Deputy Attorney General Todd Blanche called the plot a “blatant assault” on
the country’s national security, adding the American government would not “allow
hostile nations to infiltrate or exploit our defense systems.”
Miller, who lives in Kent in the U.K. and describes himself as a recruitment
specialist, reportedly referred to Chinese President Xi Jinping as “the boss” in
intercepted phone calls. He was caught in a sting operation after discussing his
plans with FBI agents posing as arms dealers, according to media reports.
Britain’s Foreign Office said it was providing consular assistance to Miller
following his arrest in Serbia on April 24.
PARIS — A group of leading French cryptocurrency entrepreneurs and their
families will receive enhanced security following two successful kidnappings and
one abduction attempt involving industry leaders and their loved ones.
The measures include priority access to the police emergency line, home visits
and safety briefings from law enforcement to advise on best practices, the
French interior ministry said. Law enforcement officers will also undergo
“anti-crypto asset laundering training.”
“These repeated kidnappings of professionals in the crypto sector will be fought
with specific tools, both immediate and short-term, to prevent, dissuade and
hinder in order to protect the industry,” Interior Minister Bruno Retailleau
said in a statement Friday.
Retailleau met with leaders from the French crypto sector Friday morning
following the trio of violent incidents. The most recent one took place on
Tuesday, when four people attempted to abduct the daughter of the CEO of French
cryptocurrency platform Paymium and her son off the street in broad daylight.
Earlier this year, Ledger cofounder David Balland was abducted and the father of
another crypto entrepreneur was kidnapped at the start of this month.
The investigations are being led by France’s organized crime prosecutor, and
several arrests have already been made. Retailleau said earlier this week that
he believed the incidents were likely connected.
However, Pierre Noizat, the CEO of Paymium, told French broadcaster BFMTV ahead
of Friday’s meeting that he viewed the gathering as a “communications
operation.”
Youth gangs have wreaked havoc in Sweden and Denmark for months, with violence
ranging from murders to explosions.
For Peter Hummelgaard, Denmark’s justice minister, it’s not just guns and bombs
that are causing mayhem. It’s also the criminals’ smartphones.
“We’ve seen a new trend of crime-as-a-service, where organized criminals use
digital platforms to hire children and young people from Sweden to commit
serious crimes in Denmark — murders, attempted murders, explosions,” Hummelgaard
told POLITICO in an interview last month.
Technology has made it “far easier for criminals to reach a larger audience and
also coordinate actions in real time,” the justice minister said, singling out
crimes like spreading child pornography, money laundering, illicit drug
smuggling — “or, as we’ve seen examples in Denmark and Sweden, recruitment of
minors into a life of crime.”
The smartphones and applications used by criminals to recruit, organize and
carry out crime sprees are increasingly the target of European law enforcement
and politicians alike. So-called end-to-end encrypted technology — a pillar of
privacy-friendly and cybersecure digital communication — is seen as a foe by
police and investigative authorities.
The technology is now coming under heavy fire across Europe.
“Without lawful access to encrypted communications, law enforcement is fighting
crime blindfolded,” said Jan Op Gen Oorth, a spokesperson for Europol, the
European Union’s law enforcement agency.
France has put forward an anti-drug trafficking law that critics say would ban
encryption. The Nordic countries have taken the fight to tech companies. Spain
said it wants to ban encryption. And the U.K. government has now entered a legal
battle with Apple over an apparent attempt to secretly spy on encrypted data.
Denmark will soon take over the rotating presidency of the Council of the EU,
giving it an influential role at a time when EU countries are debating the
bloc’s child sexual abuse material (CSAM) bill. That draft legislation could
impose an obligation on all messaging platforms to conduct blanket scans on
their content to root out child abuse images — even if they’re end-to-end
encrypted and thus technically out of reach of the platforms themselves.
“It’s no secret that I would like to see an ambitious regulation on child sexual
abuse,” Hummelgaard said.
The EU won’t stop there. The European Commission, the bloc’s executive branch,
this month unveiled a new internal security strategy, setting out plans to look
into “lawful and effective” data access for law enforcement and to find
technological solutions to access encrypted data.
It also wants to start work on a new data retention law, it said in the
strategy, which would define the kinds of data that communication services,
including digital ones like WhatsApp, have to store and keep, and for how long.
The EU’s top court struck down the previous data retention legislation in 2014,
saying it interfered with people’s privacy rights.
The Commission is presenting a united front in its plans to help law
enforcement. The internal security strategy was presented jointly by Henna
Virkkunen, a powerful executive vice-president who heads the digital department,
and Magnus Brunner, the commissioner in charge of home affairs. Both hail from
the center-right European People’s Party, as does Commission President Ursula
von der Leyen.
POLICE FACE PRIVACY GROUPS
In taking on encryption, European governments are heading for a massive clash
with a powerful political coalition of privacy activists, cybersecurity experts,
intelligence services and governments favoring privacy over police access.
Strands of that fight date all the way back to the last century. Cryptography
was a powerful asset during the Cold War, when the U.S. and the Soviet Union
aimed to restrict access to the technology to keep control of confidential
communication. But the technology grew in stature during the age of the
internet, underpinning everything from digital banking to sensitive data
transfers. In recent years, an increasing number of major tech firms have moved
toward using end-to-end encryption as a default setting.
“I have no sympathy for the argument that one needs to undermine encryption in
order to catch the bad guys,” said Matthew Hodgson, a co-founder of Matrix, a
secure comms protocol that has been used by the U.S. Navy and multiple European
governments.
Doing so would punish regular people hoping to communicate privately while
pushing drug dealers, pedophiles and terrorists toward encrypted messaging
services operated in countries beyond the reach of European police, Hodgson
said. “It really is a naive, fool’s errand.”
One app in particular has taken up the fight against the creep of
encryption-threatening laws: Signal.
The app is seen as the industry standard on end-to-end encrypted messaging — and
recently entered the limelight when a group chat among top U.S. security
officials was compromised in what was dubbed “Signal-gate.”
Its president, Meredith Whittaker, has repeatedly threatened to pull out of a
country rather than abide by any law that forces her to weaken Signal’s
security. Whittaker told POLITICO in early March that it is a “fundamental
mathematical reality that either encryption works for everyone, or it’s broken
for everyone.”
Danish Justice Minister Hummelgaard suggested he would have no problem if Signal
ceased operations in Denmark over its refusal to work with law enforcement.
“I’m beginning to question whether or not these are technologies and platforms
that we simply cannot live without,” he said.
A FORK IN THE ROAD
With no legal solution in sight, law enforcement authorities are making do with
what they have. They’ve had success infiltrating and compromising encrypted
messaging services used almost exclusively for criminal purposes, like
EncroChat, and getting access to so-called metadata (like location information)
that is more exposed than message content.
But they continue to complain of being locked out of the dominant form of
communication. Jean-Philippe Lecouffe, the deputy head of Europol, put it simply
at a recent conference: “We want legal access.”
That’s where mathematics gets in the way.
Those in favor of police access to data argue there are ways for messaging
services to get access to criminals’ end-to-end-encrypted messages without
weakening the security of regular people’s conversations. Yet tech experts have
noted that with end-to-end-encryption, it’s not possible for only the “good
guys” to get access. Once such a so-called backdoor has been opened, they say,
it cannot remain closed to hackers, criminals and spies.
Both claims cannot be true at once, so the debate remains deadlocked, with no
compromise available.
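The experts’ point can be made concrete with a toy sketch. In the deliberately simplified model below (illustration only, not real cryptography), messages are readable by whoever holds the shared key, and nothing else: an escrowed “lawful access” copy of that key is mathematically identical to a stolen one.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream (SHA-256 in counter mode). Illustration only, not real crypto."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes):
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

# End-to-end: only the two endpoints hold the key, so the relaying server
# only ever sees ciphertext and cannot read it.
endpoint_key = secrets.token_bytes(32)
nonce, ct = encrypt(endpoint_key, b"meet at noon")

# But any copy of the key decrypts everything equally well. The key itself
# carries no notion of "good guys": an escrowed copy works exactly like one
# obtained by a hacker or a hostile state.
escrow_copy = endpoint_key
print(decrypt(escrow_copy, nonce, ct))  # b'meet at noon'
```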
With the legislative challenges to encryption, Europe seems headed for a fork in
the road, where political momentum could give the police what they want — or the
fight could rumble on.
Ella Jakubowska, head of policy at European digital rights group EDRi, which has
long fought police surveillance, said the debate seems intractable. “It’s like
banging our head into a brick wall.”
U.K. Prime Minister Keir Starmer directly challenged JD Vance in the Oval Office
Thursday after the U.S. vice president claimed Britain’s stance on freedom of
speech is stifling American companies and citizens.
Vance, seated across from Starmer in a meeting with President Donald Trump,
told reporters that the “special relationship” between the U.K. and U.S.
mattered, but issued a warning about perceived overbearing tech regulation.
He said: “We also know that there have been infringements on free speech that
actually affect not just the British — of course what the British do in their
own country is up to them — but also affect American technology companies and,
by extension, American citizens.”
Starmer, leaning forward in his seat, issued a retort to the vice president. He
said his government “wouldn’t want to reach across U.S. citizens, and we don’t,
and that’s absolutely right.”
“But in relation to free speech in the U.K. I’m very proud of our history
there,” he added.
“We’ve had free speech for a very, very long time in the United Kingdom and it
will last for a very, very long time,” Starmer told the U.S. vice president in
the Oval Office.
It comes amid a fierce transatlantic debate about tech regulation, with the U.S.
urging the U.K. and EU to ease up on rules governing AI and social media
platforms.
Trump adviser Elon Musk has also been a regular critic of the U.K. government
and Starmer himself because of the way his administration responded to far-right
riots that hit the U.K. last year and his record as the country’s top public
prosecutor.
Vance’s comments also come after U.S. Director of National Intelligence Tulsi
Gabbard condemned the British government for reportedly asking Apple to
implement a backdoor in its encryption to allow greater access to users’ data.
Apple instead removed its most advanced data security tool for U.K. customers
last week.
Russian state-linked hacking groups have snuck into some Ukrainian military
staffers’ Signal messenger accounts to gain access to sensitive communications,
Google said in a report on Wednesday.
Moscow-linked groups have found ways to couple victims’ accounts to their own
devices by abusing the messaging application’s “linked devices” feature, which
enables a user to be logged in on multiple devices at the same time.
In some cases, Google has found that Russia’s notorious, stealthy hacking group
Sandworm (or APT44, part of the military intelligence agency GRU) has worked
with Russian military staff on the front lines to link Signal accounts on
devices captured on the battlefield to the group’s own systems, allowing the
espionage group to keep tracking those communication channels.
In other cases, hackers have tricked Ukrainians into scanning malicious QR codes
that, once scanned, link a victim’s account to the hacker’s interface, meaning
future messages will be delivered both to the victim and the hackers in real
time.
Russia-linked groups including UNC4221 and UNC5792 have been sending altered
Signal “group invite” links and codes to Ukrainian military personnel, Google
said.
Signal is considered an industry benchmark for secure, end-to-end encrypted
messaging, as it collects minimal data and its end-to-end encryption protocol is
open-source, meaning cybersecurity experts can continuously check it for
glitches. The European Commission and European Parliament are some of the
government institutions that have advised staff to use the application over
competing messaging apps.
Google’s research did not suggest the app’s encryption protocol itself was
vulnerable, but rather that the app’s “linked devices” functionality was being
abused as a workaround.
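That distinction between unbroken encryption and abused device linking can be sketched with a toy model (hypothetical names, not Signal’s actual design): each linked device receives its own readable copy of every message, so silently registering an attacker’s device yields plaintext without touching the protocol itself.

```python
class ToyMultiDeviceAccount:
    """Toy model of multi-device messaging. Hypothetical; not Signal's design."""

    def __init__(self, owner: str):
        self.owner = owner
        first_device = f"{owner}-phone"
        self.linked_devices = [first_device]
        self.inboxes = {first_device: []}

    def link_device(self, device_id: str) -> None:
        # In the real app, linking is triggered by scanning a QR code; the
        # reported lures trick victims into scanning codes that register an
        # attacker-controlled device.
        self.linked_devices.append(device_id)
        self.inboxes[device_id] = []

    def deliver(self, message: str) -> None:
        # Each linked device gets its own decryptable copy, so the encryption
        # is never broken -- the attacker simply becomes a legitimate endpoint.
        for device in self.linked_devices:
            self.inboxes[device].append(message)

victim = ToyMultiDeviceAccount("soldier")
victim.link_device("attacker-laptop")  # planted via a malicious QR code
victim.deliver("coordinates: 48.45, 35.04")
print(victim.inboxes["attacker-laptop"])  # ['coordinates: 48.45, 35.04']
```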
Google is now warning that these workarounds to snoop on Signal data could
appear beyond Ukraine too.
“We anticipate the tactics and methods used to target Signal will grow in
prevalence in the near-term and proliferate to additional threat actors and
regions outside the Ukrainian theater of war,” said Dan Black, cyber espionage
researcher at Google Cloud’s Mandiant group.
Other messaging apps, including WhatsApp and Telegram, have similar
functionalities to link devices’ communications and could be or become the
target of similar lures, Google suggested.
Signal did not respond to a request for comment at the time of publication.
More than 1,000 artificial intelligence experts, thinkers, investors, regulators
and doers are swarming Paris this week for two days of talks about what the
technology can and should do. POLITICO runs down some of the big names shaping
the debate.
FRANCE’S AI HOPEFUL
Arthur Mensch embodies France’s hopes for a breakthrough in the cutthroat world
of AI.
The 32-year-old, who co-founded and leads startup Mistral AI, has forged strong
connections with the French public sector and French President Emmanuel Macron,
working on the country’s AI strategy and voicing the concerns of AI companies
about regulation.
Mensch has repeatedly asked for European Union rules on AI to be more flexible,
even after pushing for an “innovation-friendly” framework as the law was being
agreed. That outreach seems to have had some success, with EU officials now
agreeing to simplify some of their requirements.
Trying to be a European AI success — with an eye toward an eventual initial
public offering to raise funds from investors — involves a complicated balancing
act. Mistral AI has tried to build partnerships in France with state-owned news
agency AFP and with the French army.
But Mensch, an alumnus of Google DeepMind, has also forged bonds across the
Atlantic, with a growing team in the U.S. and with U.S. investment. Last year
the company struck a distribution pact with Microsoft’s cloud business Azure,
sparking a debate on whether European AI companies can or should remain
independent of the Big Tech titans that lead AI.
OPENAI’S EURO-FIXER: SANDRO GIANELLA
Shortly after OpenAI stepped into the global spotlight with the launch of
ChatGPT in November 2022, the company knew it had to bring in a tech policy
master and a safe pair of hands to run its operations in Europe.
They chose Sandro Gianella, who had learned the ropes at both a U.S. Big Tech
firm (Google) and a European upstart (Irish-American payment handler Stripe).
Gianella started work in June 2023 at a critical moment, as European legislators
were trying to land the EU’s AI Act, the globe’s first-ever binding AI rulebook,
with calls to include specific rules for general-purpose AI models such as
OpenAI’s.
Gianella is not your average suit-and-tie Brussels tech lobbyist. Having
embraced the post-pandemic remote work culture, he can often be found in the
picturesque Bavarian Alps near Munich. His social media feeds are about AI, to
be sure, but he posts just as much about bike or ski trips in the Alps.
Those diverse interests might help him balance a frantic OpenAI work stream
while juggling scrutiny from several European capitals. Brussels has been
drafting a voluntary code of practice for general-purpose AI models, while Paris
and London have also been keen to develop their own AI efforts and rein in
potential risks, including scrutiny of OpenAI’s links to Microsoft.
THE AI SEER: GEOFFREY HINTON
Cited as one of the godfathers of AI for his work on artificial neural networks,
Hinton shocked the AI world in May 2023 by quitting Google to speak about the
existential risk of artificial intelligence. The computer scientist said he had
changed his mind about the technology after seeing its rapid progress, and began
touring the world to warn of the dire threats it posed to humanity. That mission
included briefing U.K. government ministers on the societal impacts that would
result if AI systems evolved beyond human control.
“He was very compelling,” said one person who was briefed. Hinton’s warnings
helped convince then-U.K. Prime Minister Rishi Sunak to launch the world’s first
AI Safety Institute and hold an AI Safety Summit at Bletchley Park.
AI doomers have since lost the argument on trying to slow down the technology’s
development, but the 77-year-old continues to beat the existential risk drum.
The Nobel Prize winner (in physics) will be in Paris speaking at side events.
THE AI OPEN-SOURCE ADVOCATE: YANN LECUN
Even though he works for Silicon Valley giant Meta as its chief AI scientist,
Yann LeCun is a pillar of the French AI ecosystem. An early AI pioneer, he’s
been a lifelong advocate of open source, an open and collaborative form of
software development that contrasts with closed proprietary models developed by
AI star OpenAI and others.
LeCun plays an influential role at Meta, with his hand visible in the company’s
2015 opening of the FAIR artificial intelligence laboratory in Paris. The launch
was a first for France at the time, and was motivated by LeCun’s conviction that
the French capital was home to a pool of untapped talent.
Almost 10 years later, corporate alumni from that laboratory have seeded
themselves across European AI. Antoine Bordes, who was the co-managing director
of FAIR, works for the defense startup Helsing, while another former employee,
Timothée Lacroix, is now Mistral AI’s co-founder and chief technology officer.
LeCun is also an enthusiastic cheerleader for the technology, and could be seen
walking around Paris with his AI-powered Ray-Ban glasses even as Meta hesitated
to release them in Europe due to regulatory concerns.
LeCun has never been an AI doomer and argues that an open-source approach can
ensure AI evolves in a way that benefits humanity, even if it’s also been viewed
as beneficial to China, where open source helped fuel the creation of the
DeepSeek chatbot. LeCun’s open-source advocacy has seen him joust with SpaceX
founder Elon Musk on social media before Meta’s current turn to embrace the
administration of U.S. President Donald Trump.
THE UK’S AI WHISPERER: MATT CLIFFORD
Matt Clifford is the U.K. government’s go-to brain on all matters tech. He
chairs the country’s moonshot funding agency ARIA, helped set up the U.K. AI
Safety Institute under the last government, and is now advising the new
government on implementing an “AI Opportunities Action Plan” that he authored.
He played a crucial role in the first AI Safety Summit at Bletchley Park in
November 2023, jetting across the world as then-PM Sunak’s representative.
After that, the former McKinsey consultant returned to his day job as an
early-stage investor in tech firms; when Sunak’s government fell at last year’s
election, the new Labour administration came calling.
The 39-year-old had several chats with the country’s technology secretary, Peter
Kyle, which led to him being tasked over the summer with creating an AI action
plan for the new government. That plan was finally released in January and will be the
blueprint for British AI policy; the government accepted all 50 of its
recommendations, and Clifford is now advising No.10 once a week on implementing it.
With no other tech specialist close to No.10, Clifford’s star keeps rising.
While the Bradford-born Clifford is affable and doesn’t take payment for his
government work, he has been the subject of briefings against him for perceived
conflicts of interest. His recommendation that the copyright regime be reformed
has drawn particular ire from publishers and rights holders.
THE AI REGULATOR: KILIAN GROSS
Last year the European Union became a global trendsetter by adopting its AI Act,
a binding rulebook regulating the highest-risk AI systems. European Commission
veteran Kilian Gross has been one of the key figures in ensuring the law is
rolled out swiftly.
Gross leads the AI regulation and compliance unit inside the Commission’s AI
Office, a key group that will determine the fate of the AI Act. While AI Office
boss Lucilla Sioli is the Commission’s face to a broader audience on anything
related to AI regulation, Gross is never too far away to jump in when things get
technical.
Gross was trained as a competition lawyer, but in his quarter century at the EU
executive he has also worked on policies such as digital, energy, taxation and
state aid. He also advised Germany’s Energy and Housing Commissioner Günther
Oettinger.
Tech lobbyists say Gross has been running around Brussels to meet with tech
companies or industry lobby groups, either to explain the rules or to listen to
their complaints about how burdensome they are. His nerves could be tested to
the limit over the next 18 months as the EU’s AI rulebook gradually takes
effect.
THE AI SCIENTIST: YOSHUA BENGIO
While policymakers regulate how AI companies deal with the risks of the
technology, the step before that — identifying those risks — is the playground
of Canadian computer scientist Yoshua Bengio.
One of the “godfathers of AI,” together with Hinton and LeCun, Bengio is an
influential voice in the debate over the risks of AI and potential responses to
them.
In the lead-up to the Paris summit, Bengio led work on an AI safety report
authored by 96 scientists, which will be a focus of debate in Paris. His
message: Before we can start addressing the risks, we need to crack open the AI
boxes and require that companies provide more transparency about how their AI
models work.
Bengio is also being tapped for regulatory work. The European Commission’s AI
Office has named him as one of the academic experts who will draft a set of
voluntary rules for the most advanced general-purpose AI models. That
initiative, however, is now in peril after Google and Meta attacked how the
rules are drafted.
THE AI DISSIDENT: MEREDITH WHITTAKER
As an influential AI ethics researcher at Google, Meredith Whittaker urged that
the company do more about AI’s potential harms. Now, as head of the non-profit
foundation behind encrypted messaging app Signal and an adviser to the AI Now
Institute, she remains a powerful voice calling Big Tech to account and
countering some of the AI hype.
Whittaker quit Google in 2019 after leading a series of walkouts to protest
workplace misconduct. She has since warned that existing AI systems can include
biased datasets that entrench racial and gender biases — an issue that requires
immediate action by regulators.
She also campaigned against attempts to break encryption and warned of the
market power of a handful of U.S. companies over AI. Until recently she even had
a role counseling regulators as a senior adviser on AI to Lina Khan, who chaired
the U.S. Federal Trade Commission from 2021 to 2025.
THE AI PRESIDENT: EMMANUEL MACRON
French President Emmanuel Macron may be struggling to form a government but he
hasn’t abandoned his ambition to be the brains behind France’s — and Europe’s —
AI strategy.
As host of the AI Action Summit in Paris, the French president has been hard at
work pushing European countries to adopt a more aggressive innovation strategy
that could help draw investment. He has also stepped up talks with French and
European business leaders and researchers to show off what France can do for AI.
Macron’s interest in AI is not new. Back in 2018 he launched a national AI
strategy, entitled “AI for Humanity,” aimed at positioning France as a world
leader and funding AI research, innovation and training.
That ambition has now shifted up a gear, especially since Washington announced
the investment of hundreds of billions of dollars in AI infrastructure. Macron
is pushing hard to help French companies and above all the country’s great hope,
Mistral AI, which Paris is counting on to rival OpenAI.
At the same time, Macron also wants to make Paris a platform for global talks on
universal access to AI, as Europe tries to find a space in a tech race dominated
by the U.S. and China. Here he has tried to pull in new allies, even reaching
out to Indian Prime Minister Narendra Modi to co-chair the Paris AI Summit.
Hosken: The term ‘sovereignty’ is used by different providers in different ways,
which can lead to confusion. What does sovereign cloud mean for customers?
Michels: From a customer perspective, the term 'sovereign cloud' is often used
to describe a service that offers a high degree of control. For example, the
customer can determine who can access data stored in the cloud, for what
purposes that data can be used and in which region(s) the data is stored. The
provider is also transparent about the sub-processors it uses and the metadata
it collects. A customer can then make its own informed and autonomous decisions
about how to use the service. Further, the customer can port its data to another
provider, if it wants to do so, or move data back in-house. In this broad sense,
sovereign cloud describes an ideal centered on customer choice and autonomy,
rather than a particular service type.
But the term is also used to focus specifically on foreign government access.
Can a foreign government order your cloud provider to disclose your data without
telling you? This question impacts both public sector organizations (especially
those that deal with defense and national security), as well as private
companies that process personal data regulated under the GDPR [the EU's General
Data Protection Regulation], or store commercially sensitive data in the cloud.
Aside from such customer concerns, there is a wider policy discussion about
European cloud sovereignty. For example, the EU aims to promote the use of
European cloud providers as part of its industrial strategy to strengthen the EU
economy and maintain competitiveness and technological leadership in an
increasingly uncertain world. Hyperscalers in the United States have access to
enormous amounts of data. The European Commission would prefer that European
cloud providers benefit from this ‘data advantage’ instead. Lastly, EU
policymakers also have geopolitical concerns about an overreliance on foreign
service providers and the strategic autonomy of European states. Yet such issues
are typically more relevant to EU policymaking than to individual customer
deployments.
Hosken: Why are European organizations turning to European cloud providers
rather than hyperscalers in the United States for sovereign cloud solutions?
Michels: For some aspects of sovereignty, the provider’s nationality doesn’t
matter. For example, providers in the United States can offer a high degree of
transparency and provide data residency options by using local data centers.
Yet provider nationality does matter when it comes to foreign government
access. This is because American providers are necessarily subject to their
country’s jurisdiction. So, any data they process could be subject to
production orders from the United States. For example, courts in the United States can issue
warrants for law enforcement purposes under the Stored Communications Act, as
amended by the CLOUD Act, while the National Security Agency can issue
directives for foreign intelligence purposes under Section 702 of the Foreign
Intelligence Surveillance Act.
Such production orders could compel a cloud provider in the United States to
disclose European customer data. For instance, per the CLOUD Act, production
orders can target any customer data within a United States provider’s
“possession, custody, or control”, regardless of data location. As a result, it
is possible that production orders in the United States can target European
customer data, even if the data is stored in Europe by an American hyperscaler’s
European subsidiary. This is because data processed by the European subsidiary
is considered to fall within the American parent company’s ‘control’, since the
parent company can exercise legal control over its subsidiary.
We therefore need to distinguish data residency (which concerns the location of
storage) from cloud sovereignty (which looks at the risk of foreign government
access). By contrast, a European provider can offer a service that is more
likely to be immune to jurisdiction and production orders in the United States.
This immunity would apply especially if the provider does not have customers,
assets or employees there.
Yet that doesn’t mean only European providers can offer sovereign cloud
solutions. American providers might also be able to protect customer data from
United States government access through technical measures such as client-side
encryption, encryption with third-party key management or confidential
computing. If securely implemented, the American provider can then only disclose
data in its encrypted form. In addition, United States law allows cloud
providers to challenge production orders under certain circumstances, including
on the basis of comity. Whether such measures can reduce the risk of foreign
government access to an acceptable level depends on the nature of the data and
the specific use case.
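The principle behind the technical measures Michels mentions can be made concrete with a short sketch: if data is encrypted on the customer's side before upload and the keys never reach the provider, a production order served on the provider can only yield ciphertext. The Python below is a toy illustration of that control boundary, not production cryptography — the hash-based keystream and all names are illustrative assumptions, and a real deployment would use a vetted authenticated cipher (e.g. AES-GCM via an established library) with external key management.

```python
import hashlib
import secrets


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key||nonce||counter (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def client_side_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt locally; the returned blob is all the provider ever stores."""
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct


def client_side_decrypt(key: bytes, blob: bytes) -> bytes:
    """Decrypt locally with the customer-held key."""
    nonce, ct = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))


key = secrets.token_bytes(32)            # stays with the customer, never uploaded
record = b"commercially sensitive record"
stored = client_side_encrypt(key, record)  # this is what the cloud provider holds
assert client_side_decrypt(key, stored) == record
```

Under this pattern, whether the provider is American or European matters less for confidentiality: a disclosure order reaches only the encrypted blob, while access control collapses to the question of who holds the key.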
Hosken: At Broadcom, we’re seeing growing customer interest in building
sovereign clouds across Europe. What’s driving this demand?
Michels: I see regulation as one of the main drivers for European customers
seeking to protect their cloud data. This is particularly true of the GDPR,
given the high level of potential fines. Admittedly, the EU and the United
States have made progress on international data transfers and on increasing the
level of protection for European personal data, including through the EU-US Data
Privacy Framework. Nonetheless, there remains a level of legal uncertainty as to
whether American providers can provide an appropriate level of security and
offer sufficient guarantees of compliance when acting as processors of European
personal data. An example of this uncertainty is the European Data Protection
Supervisor’s enforcement action regarding the EU Commission’s use of Microsoft
365. In France, the CNIL [National Commission on Informatics and Liberty] has
also repeatedly raised concerns about the use of American cloud providers.
This problem applies especially to so-called special category data, such as
those relating to health and ethnicity, which are subject to strict rules under
the GDPR.
Some member states also have domestic legal requirements for sovereign cloud,
which apply at the national level. These typically apply to the public sector
and to operators of critical infrastructure, as with the French SecNumCloud
scheme. That said, regulation isn’t the only driver. Many customers also seek to
protect commercially sensitive information and trade secrets from foreign
government access.
Hosken: Will European organizations move all their data to sovereign clouds or
is there a case for multi-cloud?
Michels: European customers will continue to use the traditional cloud services
of American hyperscalers. But many organizations also need to think more
strategically about which data belong in which IT environment. Different
environments suit different workloads depending on technical and security
requirements, cost and regulatory compliance. For example, some workloads
benefit from the scalability and functionality that American hyperscalers offer,
while other, more sensitive data require additional protection. So, for some
customers, there is a strong case for cloud deployments that combine traditional
hyperscale cloud with sovereign cloud solutions.
> So, for some customers, there is a strong case for cloud deployments that
> combine traditional hyperscale cloud with sovereign cloud solutions.
Hosken: What challenges do organizations face in adopting sovereign cloud?
Michels: An organization that wants to adopt sovereign cloud solutions can face
three main challenges. First, the organization needs to understand the data it
processes, including which of its data are sensitive and so need extra
protection, whether as regulated personal data or from a commercial perspective.
This can be achieved by data classification policies.
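A data classification policy of the kind Michels describes can be as simple as a mapping from sensitivity category to permitted environment, applied before any workload is placed. The sketch below is a minimal illustration under assumed, hypothetical labels (the category and environment names are not from any standard); real policies would follow an organization's own classification scheme and regulatory obligations.

```python
# Hypothetical mapping from data category to the environment allowed to hold it.
POLICY = {
    "public":            "any-cloud",
    "commercial-secret": "sovereign-cloud",
    "personal":          "eu-region-cloud",
    "special-category":  "sovereign-cloud",  # e.g. health or ethnicity data (GDPR)
}


def placement(category: str) -> str:
    """Return the required environment, defaulting to the most protective option
    so that unclassified data is never placed in a less-protected environment."""
    return POLICY.get(category, "sovereign-cloud")


assert placement("public") == "any-cloud"
assert placement("special-category") == "sovereign-cloud"
assert placement("unlabeled") == "sovereign-cloud"
```

The key design choice is the default: data that has not yet been classified is treated as sensitive, so the classification exercise can proceed incrementally without exposure.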
Second, a lack of interoperability between cloud services can prevent customers
from combining the services of different providers. Better interoperability
would support integrated cloud deployments across multiple providers instead of
separate, siloed workloads.
Third, a lack of portability can stop customers from migrating data from their
current cloud provider to another provider. Better portability would empower
customers who are unhappy with their current service to switch providers and
allow them to benefit from the advantages that different cloud services offer.
In theory, the EU Data Act should support customer switching beginning in
September 2025. Yet practical challenges to switching may well persist, despite
the new (and untested) legal requirements. Much will depend on how the law is
implemented.
Hosken: One way you have suggested that cloud providers could help customers
navigate legal uncertainty around sovereignty requirements is a so-called code
of conduct. While we at Broadcom have not taken a position on this approach,
could you share your views on this?
Michels: To address legal uncertainty under the GDPR, the cloud industry could
work together to develop a new Sovereign Cloud Code of Conduct. The code would
focus on what providers can do to reduce the risk that foreign government access
poses to the fundamental rights and interests of European data subjects. The
code can also recognize that different providers can reduce that risk in
different ways, including through technological measures such as confidential
computing.
> To address legal uncertainty under the GDPR, the cloud industry could work
> together to develop a new Sovereign Cloud Code of Conduct
Once approved by a regulator, the code would reduce legal uncertainty under the
GDPR. Compliance with the code would give the customer assurance that their data
will not be subject to an inappropriate level of risk of foreign government
access. This should benefit customers and providers alike, as well as assure
European data subjects that their fundamental rights are protected in the cloud.
Michels researches cloud computing law for the Cloud Legal Project at the Centre
for Commercial Law Studies, Queen Mary University of London. He has co-authored
papers on the implications of sovereignty for European policy and GDPR
compliance. The Cloud Legal Project is made possible by the generous financial
support of Microsoft. The author is grateful to Broadcom for providing funding
to conduct a series of expert interviews and prepare a forthcoming report on
cloud sovereignty. Responsibility for the views expressed remains entirely with
the author; they do not necessarily reflect the views of Broadcom.