The Dutch data protection watchdog has warned voters not to ask artificial
intelligence chatbots for voting advice ahead of the country’s general election
next week.
In a test, AI chatbots gave “a highly distorted and polarized image of the Dutch political landscape,” the data protection watchdog said in a study published on Tuesday.
“We warn against using AI chatbots for voting advice, because the way they work is neither transparent nor verifiable,” Monique Verdier, vice-chair of the authority, said in a statement. She called on chatbot developers to “prevent their systems from being used for voting advice.”
Dutch voters elect a new parliament next Wednesday.
The Dutch data protection authority ran an experiment on how parties were portrayed in voting advice across four chatbots: OpenAI’s ChatGPT, Google’s Gemini, Elon Musk’s Grok and French company Mistral AI’s Le Chat.
The authority set up profiles that matched different political parties (based on
vetted Dutch voting-aid tools), after which it asked the chatbots to give voting
advice for these profiles.
Voter profiles on the left and progressive side of the spectrum “were mostly
directed to the GreenLeft-Labor” party led by former European Commission
Executive Vice President Frans Timmermans, while voters on the right and
conservative side “were mostly directed to the PVV,” the far-right party led by
Geert Wilders that is currently leading in the polls.
Centrist parties were hardly represented in the voting advice, even though these
parties were represented equally in the voter profiles fed to the chatbots.
OpenAI, Google and Mistral have all signed up to the EU’s code of practice for
the most complex and advanced AI models, while Grok’s parent company xAI has
signed up to parts of it. Under the code, these companies commit to address
risks stemming from their models, including risks to fundamental rights and
society.
The Dutch authority argued that chatbots giving voting advice could be classified as high-risk systems under the EU’s AI Act, for which a separate set of rules will start to apply from the middle of next year.
TIRANA — Albania has become the first country in the world to have an AI
minister — not a minister for AI, but a virtual minister made of pixels and code
and powered by artificial intelligence.
Her name is Diella, meaning sunshine in Albanian, and she will be responsible
for all public procurement, Prime Minister Edi Rama said Thursday.
During the summer, Rama mused that one day the country could have a digital
minister and even an AI prime minister, but few thought that day would come
around so quickly.
At the Socialist Party assembly in Tirana on Thursday, where Rama announced
which ministers would get the chop and which would stay on for another mandate,
he also introduced Diella, the only non-human member of the government.
“Diella is the first member not physically present, but virtually created by
artificial intelligence,” he told party members.
Rama stated that decisions on tenders would be taken “out of the ministries” and
placed in the hands of Diella, who is “the servant of public procurement.” He
said the process will be “step-by-step,” but Albania will be a country where
public tenders are “100 percent incorruptible and where every public fund that
goes through the tender procedure is 100 percent legible.”
“This is not science fiction, but the duty of Diella,” he said.
Diella has already been introduced to Albanian citizens as she powers the
country’s e-Albania platform, which allows citizens to access almost all
government services digitally. She even has an avatar, appearing as a young
woman dressed in traditional Albanian clothing.
Diella will evaluate tenders and have the right to “hire talents here from all
over the world,” while breaking down “the fear of prejudice and rigidity of the
administration.”
Albania has long battled with corruption, particularly in public administration
and in the area of public procurement. The matter has been repeatedly
highlighted by the European Union in its annual rule of law reports.
Rama swept to a historic fourth mandate in May 2025, on a ticket of joining the
bloc by 2030.
BRUSSELS — Two of Europe’s tech powerhouses tied the knot on Tuesday in a
landmark deal that bolsters a push by politicians to reduce reliance on the
United States for critical technology.
Dutch microchip champion ASML confirmed it was investing €1.3 billion in French AI frontrunner Mistral, one of the few European companies able to go head-to-head with U.S. leaders like OpenAI and Anthropic on artificial intelligence technology.
It’s a business deal soaked in politics.
Officials from Brussels to Paris, Berlin and beyond have called for Europe to
reduce its heavy reliance on U.S. technology — from the cloud to social media
and, most recently, artificial intelligence — under the banner of “tech
sovereignty.”
“European tech sovereignty is being built thanks to you,” was how France’s
Junior Minister for Digital Affairs and AI Clara Chappaz cheered the deal on X.
Europe has struggled to stand out in the global race to build generative AI ever
since U.S.-based OpenAI burst onto the scene in 2022 with its popular ChatGPT
chatbot. Legacy tech giants like Google quickly caught up, while China proved its mettle early this January with the arrival of DeepSeek.
European politicians can showcase the ASML-Mistral deal as proof that European
consumers and companies still can rely on homegrown tools. That need has never
been more urgent amid strained EU-U.S. ties under Donald Trump’s repeated
attacks against EU tech regulation.
But the deal also illustrates that while Europe can excel in niche areas, like
industrial AI applications, winning the global consumer AI chatbot race is out
of reach.
EUROPE KEEPS CONTROL
Tuesday’s deal brings together two of the European companies most closely watched by those in power.
ASML, a 40-year-old Dutch crown jewel, has grown into one of the bloc’s most
politically sensitive assets in recent years. The U.S. government has repeatedly
tried to block some of the company’s sales of its advanced chip-printing machines to China in an effort to slow down Chinese firms.
Mistral is only two years old but has been politically plugged in from the
start, with former French Digital Minister Cédric O among its co-founders.
When the company faced the need to raise new funding this summer, several
non-European players were floated as potential backers, including the Abu
Dhabi-based MGX state fund. There were even rumors Mistral could be acquired by
Apple.
Apple’s acquisition of Mistral would have been “quite negative” for Europe’s
tech sovereignty aspirations, said Leevi Saari, EU policy fellow at the
U.S.-based AI Now Institute, which studies the social implications of AI. “The
French state has no appetite [for] letting this happen,” he added.
Getting financing from an Abu Dhabi-based fund, conversely, would have
reinforced the perception that Europe can provide the millions in venture
capital funding needed to start a company, but not the billions needed to scale
it.
With this week’s €1.7 billion funding round led by ASML, Europe’s tech
sovereignty proponents can breathe a sigh of relief.
“European champions creating more European champions is the way to go forward
and it needs further backing from the EU,” said Dutch liberal European
Parliament lawmaker Bart Groothuis in a statement.
The deal is also what officials, experts and the industry want to see more of:
one where startups are backed by an established European corporation rather than
a venture capitalist.
“A European corporation finally investing massively in a European scale-up from
its industry, even [if] it [is] not directly tied to its core business,” said
Agata Hidalgo, public affairs lead at French startup group France Digitale,
on LinkedIn.
A French government adviser, granted anonymity to speak freely on private deals,
said they felt “hyped” by the news after months of uncertainty due to Mistral’s
refusal to publicly deny talks with Apple.
The deal is also expected to avoid any close scrutiny from Europe’s powerful
antitrust regulators, which in the past have intervened in mergers and deals to
keep the market competitive. Tuesday’s deal is not a full takeover and does not
need merger clearance.
Nicolas Petit, a competition law professor at the European University Institute,
said there was “nothing to see here unless the EU wants to shoot itself in the
foot with a bazooka.”
“It’s a non-controlling investment, and neither ASML [nor] Mistral AI compete in
any product or service market,” he added.
REALITY CHECK
While the incoming Dutch investment goes a long way toward keeping Mistral in
European hands, it also determines the path forward for the French artificial
intelligence challenger.
Mistral had already been struggling “to keep up with the race for market share” against other large language models, Saari claimed in a blog post published last
week, in which he cited numbers suggesting that Mistral’s market share is
“around 2 percent.”
“Mistral was known to face challenges both technically and in finding a business
model,” said Italian economist Cristina Caffarra, who has been leading the
charge for European tech sovereignty through the Eurostack movement. “It’s great
they found a European champion anchor investor” that will, in part, “protect
them from the [venture capital] model.”
Tuesday’s deal could mean that Mistral will get more support to work on
industrial applications instead of the consumer-facing chatbots that venture capitalists tend to favor.
“With Mistral AI we have found a strategic partner who can not only deliver the
scientific AI models that will help us develop even better tools and solutions
for our customers, but also help us to improve our own operations over time,”
ASML CEO Christophe Fouquet wrote in a post on LinkedIn.
ASML’s main customers are the world’s biggest microchip manufacturers,
including Taiwan’s TSMC and America’s Intel. The company also has a wide network
of industrial suppliers, which could be leveraged as well.
For Mistral, catering to European industrial applications could strengthen its
business. But it could also be seen as a tacit admission that in the global AI
race, Europe has to pick its battles.
Francesca Micheletti and Océane Herrerro contributed reporting.
The United States should see the United Nations as something it can benefit from rather than as charity, the body’s tech envoy said Tuesday.
The U.S. withdrew from several U.N. bodies, including its Human Rights Council.
Asked about the United Nations’ reaction, the U.N. Secretary General’s Envoy on
Technology Amandeep Singh Gill said the U.N. is critical for peace and security,
upholding human rights and advancing sustainable development.
“These efforts are not charity,” he told POLITICO’s AI & Tech Summit.
The U.N.’s programs and initiatives “benefit all of us … They might even benefit
partners in the global north more than those in the global south,” he said, in
part because global technology initiatives drive cross-border trade and
investment.
Gill echoed U.N. Secretary General António Guterres’s promise this week to cut
costs and streamline the body’s operations — an apparent response to the U.S.
administration cutting contributions and participation.
Gill said the U.N. will strive “to achieve more efficiency and effectiveness in
delivering value for member states.”
The European Union is set to admit that untangling from the dominance of U.S.
tech companies is “unrealistic” as fears grow over the bloc’s dependence on
American giants.
A draft strategy seen by POLITICO ahead of its release this spring signals the
EU has few fresh ideas to restore Europe as a serious player in global tech —
even as responding to the new transatlantic reality becomes a top priority in
Brussels.
The return of United States President Donald Trump to the White House and his
combative stance toward Europe has revived concerns about sovereignty over
fundamental technologies, including social media and cloud services, as well as
about the potential access of U.S. law enforcement to data processed by
ubiquitous giants Amazon, Microsoft and Google.
In the context of escalating trade tensions and mounting hybrid threats, the EU
will soon release its International Digital Strategy for Europe. “Tech
competitiveness is an economic and security imperative for all aspiring to
durable wealth and stability,” says a draft version dated April 9.
Yet when it comes to dominant players such as the U.S., “decoupling is
unrealistic and cooperation will remain significant across the technological
value chain,” the draft says. It cites China as well as Japan, South Korea and
India as countries with which collaboration will also be essential.
The pitch for strategic tech alliances with like-minded countries — to team up
on research and generate greater business opportunities for the bloc’s companies
— comes in stark contrast to growing calls for a move toward protectionism.
For Europe, “business as usual is no option,” wrote Marietje Schaake earlier
this year. Schaake, a former Dutch liberal member of the European Parliament who
is a leading voice on tech, called on the bloc to “end its debilitating
dependence on American tech groups and take concrete steps to shield itself from
the growing dangers of this new, tech-fueled geopolitical landscape.”
In Brussels, the idea of a “Eurostack” — an ambitious industrial plan to break
free from U.S. tech dominance — is gaining steam, with key lawmakers throwing
their weight behind the proposal.
The draft strategy backs international engagement on critical technologies such
as quantum and chips — as “the growing complexity of semiconductor supply chains
and geopolitical uncertainty necessitate a tailored, country-specific approach.”
The EU has been scrambling to fix, among other things, a risky reliance on China
for low-tech chips.
Cooperation could also include building prized artificial intelligence factories
outside the bloc to help Europe grow its impact in the nascent technology,
according to the draft. It should also include joined-up efforts on
cybersecurity to crack down on ransomware.
The strategy is more defensive on China, stating that the EU will seek to
maintain its “leadership in promoting secure and trusted 5G networks globally” —
essentially a nod to excluding Chinese vendors such as Huawei.
Brussels and Washington have been joining forces for years to tame the
technology giant’s global ambitions, using digital diplomacy tools to convince
third countries to ditch equipment from the Shenzhen-based firm.
The draft proposes extending that model to subsea cables, whose network map
should be built “with like-minded countries.”
The strategy is set to be presented on June 4, according to the latest European Commission agenda.
BRUSSELS — Were you thinking of sending your artificial intelligence helper to
an online meeting with the European Commission?
Think again.
The European Union’s executive institution has a new ground rule that bars
virtual assistants powered by artificial intelligence from participating in its
meetings. It imposed the rule for the first time on a call with representatives
from a network of digital policy support offices across Europe earlier this
month.
“No AI Agents are allowed,” said a slide on e-meeting etiquette at the start of
the presentation.
The Commission acknowledged it had imposed the ground rule for the first time last week, declining to give more details on the policy or the reasons behind the decision.
It’s a curious twist on a recent development in artificial intelligence technology: the rise of so-called “AI agents.”
AI’s most popularized application so far seems to be chatbots like OpenAI’s ChatGPT, which can generate text or retrieve information, or perform a single task when asked by a human. But AI agents push that boundary: They are assistants that can
tackle several tasks autonomously and interact in a virtual environment. They
act on users’ behalf to conduct a series of tasks helping people in their jobs
or daily life.
One of those tasks is joining an online meeting, taking notes or even reciting
certain information.
Quietly, Brussels has been gearing up for an era in which AI agents participate
in daily life and business.
The technology was mentioned in a wider Commission package on virtual worlds
published March 31. “AI agents are software applications designed to perceive
and interact with the virtual environment,” the text read. Agents can “operate
autonomously,” but their work is set by “specific predefined rules.”
Leading AI companies have all been experimenting with their own AI agent
applications. In January, OpenAI launched Operator, a research version of an AI
agent that can carry out several tasks in a separate web browser. Microsoft has
also been rolling out the possibility of creating agents in its AI “companion”
Copilot. French AI company Mistral also offers a platform to build agents.
So far, the technology isn’t covered by any specific legislation, but the AI
models that power the agents will have to abide by the EU’s binding AI Act.
The technology could also come into focus when the Commission explores specific
legislation on algorithmic management, the idea that employees are being managed
by algorithms, later this mandate.
BRUSSELS — The European Commission is finalizing a plan to make its artificial
intelligence rules more palatable to companies, as they scramble to adapt to
American tariffs that have sent shockwaves through the global economy.
The EU executive will launch a new “AI Continent” plan on Wednesday. According
to an undated draft of the plan obtained by POLITICO, the executive wants to
“streamline” rules and get rid of “obstacles” that it feels are slowing
companies in Europe down in competing with the U.S. and China.
The strategy accommodates concerns expressed by Big Tech companies and AI
front-runners, which directed fierce lobbying attacks against the EU’s AI Act
and other pieces of digital legislation.
Those concerns of the tech industry were echoed by former Italian Prime Minister
Mario Draghi in his landmark report on competitiveness in Europe and were
included in the key priorities of Ursula von der Leyen’s second term as
Commission president. The Commission’s tech czar Henna Virkkunen told a global
AI conference in Paris in early February that the EU’s regulatory framework
should be more “innovation-friendly.”
Wednesday’s draft strategy is expected to say that the bloc needs to seize the
“opportunity to minimize the potential compliance burden” of the AI Act.
OpenAI’s Vice President for Global Affairs Chris Lehane told POLITICO in an
interview last week that Brussels needs to keep its rules “simple and
predictable.” Lehane is in Brussels this week to meet with EU policymakers — a
signal that leading AI companies are watching Wednesday’s announcement closely.
The OpenAI chief lobbyist said he had seen “a shift in mindset of how people are
thinking about AI and the opportunity” at the summit in Paris in February. But
he added that the question is now whether Brussels “can get the strategy right.”
According to the latest tally, only 13 percent of European companies have adopted AI.
Lehane said that besides having “simple rules,” the EU should also be able to
build its own AI infrastructure and launch an effort to retrain European
workers.
By 2030, the EU should increase its computing power by 300 percent, and 100
million Europeans should have acquired AI skills, OpenAI said on Monday in
recommendations called the “EU’s economic blueprint” targeted at EU
policymakers. It also pitched a €1 billion fund for AI pilot projects.
SHOW US THE OBSTACLES
In its plan on Wednesday, the EU executive wants to ask the tech industry to “identify where regulatory uncertainty creates obstacles” to developing and deploying AI.
The draft text listed measures to boost the computing power and high-quality
data needed to train AI models, as well as the industry’s uptake of AI and
workers’ AI skills.
Brussels is also set to make progress on its effort to build five “AI
gigafactories” — a €20 billion promise made by Commission President Ursula von der
Leyen during the Paris AI Action Summit. Wednesday’s plan includes a call for EU
countries to invest in or host such gigafactories — a first step for gauging
interest before a more formal procedure kicks off at the end of this year.
Those gigafactories are meant to train the most complex AI models and will have four times as many processors as today’s best-performing supercomputers.
The Commission is also paving the way to expand Europe’s cloud and data center
capacity. The draft stated that Brussels aims to “triple” Europe’s data center
capacity in the next five to seven years.
It labeled Europe’s current reliance on “non-EU infrastructure,” notably
American hyperscalers like Amazon, Google and Microsoft, as a concern for
industry and governments.
Austrian privacy group Noyb on Thursday filed a complaint against ChatGPT for making up information about individuals, including a false claim that one user was a child murderer.
The popular artificial intelligence chatbot ChatGPT, like other chatbots, has a tendency to “hallucinate,” generating false information about people because it relies on incorrect data or draws faulty inferences from its data.
In the case underpinning the complaint, a user called Arve Hjalmar Holmen asked
the chatbot in August 2024 if it had any information about him, after which
ChatGPT presented a false story that he murdered two of his children and
attempted to murder his third son. The response contained correct facts like the
number and gender of his children and the name of his home town.
“The fact that someone could read this output and believe it is true, is what
scares me the most,” Hjalmar Holmen said in a statement shared by Noyb.
OpenAI has since updated ChatGPT to search the internet when asked about individuals, meaning it would in theory no longer hallucinate about them, Noyb said. But it added that the incorrect information may still be part of the AI model’s dataset.
In its complaint, filed with Norway’s data protection authority (Datatilsynet), Noyb asked the regulator to fine OpenAI and to order it to delete the defamatory output and fine-tune its model to eliminate inaccurate results.
Noyb said that by knowingly allowing ChatGPT to produce defamatory
results, OpenAI is violating the General Data Protection Regulation (GDPR)’s
principle of data accuracy.
ChatGPT presents users with a disclaimer at the bottom of its main interface
that says that the chatbot may produce false results. But Noyb data protection
lawyer Joakim Söderberg said that “isn’t enough.”
“You can’t just spread false information and in the end add a small disclaimer
saying that everything you said may just not be true,” he said. “The GDPR is
clear. Personal data has to be accurate. And if it’s not, users have the right
to have it changed to reflect the truth.”
The New York Times previously reported that “chatbots invent information at
least 3 percent of the time — and as high as 27 percent.” Other news reports
detail how ChatGPT has made up stories about people including allegations of
sexual assault or bribery.
Noyb filed a separate complaint with Austria’s data protection authority last year after ChatGPT made up the birth date of the group’s founder, Max Schrems.
Europe’s data protection authorities formed a ChatGPT task force in 2023 to
coordinate privacy-related enforcement actions against the platform, which was
widened to a more general AI task force earlier this year.
OpenAI did not respond to a request for comment in time for publication.
Italy’s data protection authority has ordered a block on Chinese artificial
intelligence revelation DeepSeek, it said late on Thursday.
The regulator said it has ordered Hangzhou DeepSeek Artificial Intelligence and
Beijing DeepSeek Artificial Intelligence — the Chinese companies behind the
DeepSeek chatbot — to stop processing Italians’ data with immediate effect.
The move comes after DeepSeek apparently told the authorities it wouldn’t
cooperate with a request for information made by the agency.
“Contrary to what was found by the authority, the companies have declared that
they do not operate in Italy and that European legislation does not apply to
them,” the Italian regulator said. This response “was deemed completely
insufficient,” it added.
The regulator has also opened an investigation, it said.
The Chinese AI firm recently emerged as a fierce competitor to industry leaders like OpenAI when it launched a model to rival ChatGPT, Google’s Gemini and other leading AI-fueled chatbots, one it claimed was created at a fraction of the cost.
The release triggered an industry panic and a market shock in the U.S., as key shares in the tech sector dropped sharply on Monday.
The ban is not the first time the Italian privacy authority has taken such a
step; it also blocked OpenAI’s ChatGPT in 2023. It later allowed OpenAI to
re-open its service in Italy after meeting its demands.
POLITICO has approached DeepSeek for comment.
BRUSSELS — In just a week, Europe saw its artificial intelligence scene flounder in the face of American flexing, then rebound with the rise of a Chinese rival.
A little-known Chinese AI lab, DeepSeek, emerged as a fierce competitor to U.S. industry leaders this weekend when it launched a model it claimed was created at a fraction of the cost of champions like OpenAI.
President Donald Trump on Tuesday called it a “wake-up call” for the American
tech sector.
In Europe, it could mean something entirely different: A welcome signal that its
own AI industry has a fighting chance against the full force of American
national capitalism in the global AI arms race.
“High-quality efficient AI models are no longer the exclusive domain of tech
giants with huge hardware resources,” said Lucie Aimée Kaffee, EU policy lead
and applied researcher at Hugging Face, an open-source AI development platform.
Europe could compete on “efficiency, specialisation and responsible AI
development” with “small” AI models like DeepSeek, she said.
Trump presented his opening gambit in the AI race on his first day in office last week, unveiling an industry-led $500 billion AI hardware plan and strengthening the belief that financial firepower is what will determine who wins the AI race.
DeepSeek over the weekend claimed its new reasoning model R1 rivals that of
U.S.-based AI posterchild OpenAI, one of the driving forces behind the $500 billion plan, and said the costs of training an earlier released model were
“economical,” estimated at under $6 million.
The rise of a Chinese budget competitor led to a market sell-off on Monday: U.S.
AI chip designer Nvidia lost close to $600 billion in valuation.
Europe doesn’t have the tech giants able to splash billions of euros on the AI
hardware needed to train models. Last week, that was seen as a crippling factor.
But the rise of DeepSeek suggests European leading firms like France’s Mistral,
Germany’s Aleph Alpha and many other, smaller ventures could also gain ground in
the AI race — perhaps even on the cheap.
“This shows that the race for AI is far from being over,” European Commission
spokesperson Thomas Regnier told reporters on Tuesday in Brussels.
Some saw in DeepSeek’s rise a sign that Europe’s lack of cash to splash on
massive computing power won’t necessarily hold it back in the AI race. Others focused on how it would bring down costs for AI developers.
DeepSeek’s app rose to the top of the app store rankings. It is free to
download, and the model itself is open and accessible on the developer platform
GitHub without many restrictions on how it can be reused.
OpenAI’s rival model, o1, is reserved for paid subscribers.
“[Startup] founders building at the application level have just been handed a
way to achieve good performance at a significantly lower cost,” said Nathan
Benaich, general partner at the AI-focused firm Air Street Capital.
Building actual AI applications on top of an existing model is one area where
Europe can still win, Meta’s outgoing top lobbyist Nick Clegg said on a panel at
the World Economic Forum in Davos last week.
Yet, companies should also be on guard.
Many users tinkering with DeepSeek’s model noticed that the chatbot refrains
from discussing topics that fall under the Chinese Communist Party’s censorship
regime. For example, users flagged that the app refused to respond to queries about the 1989 Tiananmen Square massacre. Others saw in DeepSeek’s privacy policy that the
company collects keystroke patterns.
“If you’re working on certain sensitive applications, you should beware [of]
Chinese labs bearing gifts,” said Benaich.
European lawmakers are also closely watching the developments and the risks.
“It’s quite something that you store keystroke patterns, on Chinese servers,”
said Dutch liberal member of the European Parliament Bart Groothuis.
“It also influences the way we are searching, the way we are thinking, how
information is being provided,” he added. “It should not have its place in the
EU.”