The Dutch data protection watchdog has warned voters not to ask artificial
intelligence chatbots for voting advice ahead of the country’s general election
next week.
In a test, AI chatbots gave “a highly distorted and polarized image of the Dutch
political landscape,” the data protection watchdog warned in a study published
on Tuesday.
“We warn against using AI chatbots for voting advice, because their operations
are neither transparent nor verifiable,” Monique Verdier, vice-chair of the
authority, said in a statement. She called on chatbot developers to “prevent
their systems from being used for voting advice.”
Dutch voters elect a new parliament next Wednesday.
The Dutch data protection authority ran an experiment on how parties were
portrayed in voting advice across four chatbots: OpenAI’s ChatGPT, Google’s
Gemini, Elon Musk’s Grok and French Mistral AI’s Le Chat.
The authority set up profiles that matched different political parties (based on
vetted Dutch voting-aid tools), after which it asked the chatbots to give voting
advice for these profiles.
Voter profiles on the left and progressive side of the spectrum “were mostly
directed to the GreenLeft-Labor” party led by former European Commission
Executive Vice President Frans Timmermans, while voters on the right and
conservative side “were mostly directed to the PVV,” the far-right party led by
Geert Wilders that is currently leading in the polls.
Centrist parties hardly featured in the voting advice, even though they were
equally represented in the voter profiles fed to the chatbots.
OpenAI, Google and Mistral have all signed up to the EU’s code of practice for
the most complex and advanced AI models, while Grok’s parent company xAI has
signed up to parts of it. Under the code, these companies commit to addressing
risks stemming from their models, including risks to fundamental rights and
society.
The Dutch authority argued that chatbots giving voting advice could be
classified as high-risk systems under the EU’s AI Act, for which a separate set
of rules will start to apply from the middle of next year.
TIRANA — Albania has become the first country in the world to have an AI
minister — not a minister for AI, but a virtual minister made of pixels and code
and powered by artificial intelligence.
Her name is Diella, meaning sunshine in Albanian, and she will be responsible
for all public procurement, Prime Minister Edi Rama said Thursday.
During the summer, Rama mused that one day the country could have a digital
minister and even an AI prime minister, but few thought that day would come
around so quickly.
At the Socialist Party assembly in Tirana on Thursday, where Rama announced
which ministers would get the chop and which would stay on for another mandate,
he also introduced Diella, the only non-human member of the government.
“Diella is the first member not physically present, but virtually created by
artificial intelligence,” he told party members.
Rama stated that decisions on tenders would be taken “out of the ministries” and
placed in the hands of Diella, who is “the servant of public procurement.” He
said the process will be “step-by-step,” but Albania will be a country where
public tenders are “100 percent incorruptible and where every public fund that
goes through the tender procedure is 100 percent legible.”
“This is not science fiction, but the duty of Diella,” he said.
Diella has already been introduced to Albanian citizens as she powers the
country’s e-Albania platform, which allows citizens to access almost all
government services digitally. She even has an avatar, appearing as a young
woman dressed in traditional Albanian clothing.
Diella will evaluate tenders and have the right to “hire talents here from all
over the world,” while breaking down “the fear of prejudice and rigidity of the
administration.”
Albania has long battled with corruption, particularly in public administration
and in the area of public procurement. The matter has been repeatedly
highlighted by the European Union in its annual rule of law reports.
Rama swept to a historic fourth mandate in May 2025 on a pledge to take the
country into the bloc by 2030.
BRUSSELS — Two of Europe’s tech powerhouses tied the knot on Tuesday in a
landmark deal that bolsters a push by politicians to reduce reliance on the
United States for critical technology.
Dutch microchip champion ASML confirmed it was investing €1.3 billion in French
AI frontrunner Mistral, one of the few European companies able to go
head-to-head with U.S. leaders like OpenAI and Anthropic on artificial
intelligence technology.
It’s a business deal soaked in politics.
Officials from Brussels to Paris, Berlin and beyond have called for Europe to
reduce its heavy reliance on U.S. technology — from the cloud to social media
and, most recently, artificial intelligence — under the banner of “tech
sovereignty.”
“European tech sovereignty is being built thanks to you,” was how France’s
Junior Minister for Digital Affairs and AI Clara Chappaz cheered the deal on X.
Europe has struggled to stand out in the global race to build generative AI ever
since U.S.-based OpenAI burst onto the scene in 2022 with its popular ChatGPT
chatbot. Legacy tech giants like Google quickly caught up, while China proved
its mettle this January with the arrival of DeepSeek.
European politicians can showcase the ASML-Mistral deal as proof that European
consumers and companies can still rely on homegrown tools. That need has never
been more urgent amid EU-U.S. ties strained by Donald Trump’s repeated attacks
against EU tech regulation.
But the deal also illustrates that while Europe can excel in niche areas, like
industrial AI applications, winning the global consumer AI chatbot race is out
of reach.
EUROPE KEEPS CONTROL
Tuesday’s deal brings together two of the European companies most closely
watched by those in power.
ASML, a 40-year-old Dutch crown jewel, has grown into one of the bloc’s most
politically sensitive assets in recent years. The U.S. government has repeatedly
tried to block some of the company’s sales of its advanced chip-printing
machines to China in an effort to slow down Chinese firms.
Mistral is only two years old but has been politically plugged in from the
start, with former French Digital Minister Cédric O among its co-founders.
When the company faced the need to raise new funding this summer, several
non-European players were floated as potential backers, including the Abu
Dhabi-based MGX state fund. There were even rumors Mistral could be acquired by
Apple.
Apple’s acquisition of Mistral would have been “quite negative” for Europe’s
tech sovereignty aspirations, said Leevi Saari, EU policy fellow at the
U.S.-based AI Now Institute, which studies the social implications of AI. “The
French state has no appetite [for] letting this happen,” he added.
Getting financing from an Abu Dhabi-based fund, conversely, would have
reinforced the perception that Europe can provide the millions in venture
capital funding needed to start a company, but not the billions needed to scale
it.
With this week’s €1.7 billion funding round led by ASML, Europe’s tech
sovereignty proponents can breathe a sigh of relief.
“European champions creating more European champions is the way to go forward
and it needs further backing from the EU,” said Dutch liberal European
Parliament lawmaker Bart Groothuis in a statement.
The deal is also what officials, experts and the industry want to see more of:
one where startups are backed by an established European corporation rather than
by venture capitalists.
“A European corporation finally investing massively in a European scale-up from
its industry, even [if] it [is] not directly tied to its core business,” said
Agata Hidalgo, public affairs lead at French startup group France Digitale,
on LinkedIn.
A French government adviser, granted anonymity to speak freely on private deals,
said they felt “hyped” by the news after months of uncertainty due to Mistral’s
refusal to publicly deny talks with Apple.
The deal is also expected to avoid any close scrutiny from Europe’s powerful
antitrust regulators, which in the past have intervened in mergers and deals to
keep the market competitive. Tuesday’s deal is not a full takeover and does not
need merger clearance.
Nicolas Petit, a competition law professor at the European University Institute,
said there was “nothing to see here unless the EU wants to shoot itself in the
foot with a bazooka.”
“It’s a non-controlling investment, and neither ASML [nor] Mistral AI compete in
any product or service market,” he added.
REALITY CHECK
While the incoming Dutch investment goes a long way toward keeping Mistral in
European hands, it also determines the path forward for the French artificial
intelligence challenger.
Mistral had already been struggling “to keep up with the race for market share”
with other large language models, Saari claimed in a blogpost published last
week, in which he cited numbers suggesting that Mistral’s market share is
“around 2 percent.”
“Mistral was known to face challenges both technically and in finding a business
model,” said Italian economist Cristina Caffarra, who has been leading the
charge for European tech sovereignty through the Eurostack movement. “It’s great
they found a European champion anchor investor” that will, in part, “protect
them from the [venture capital] model.”
Tuesday’s deal could mean that Mistral will get more support to work on
industrial applications instead of the consumer-facing chatbots that venture
capitalists tend to champion.
“With Mistral AI we have found a strategic partner who can not only deliver the
scientific AI models that will help us develop even better tools and solutions
for our customers, but also help us to improve our own operations over time,”
ASML CEO Christophe Fouquet wrote in a post on LinkedIn.
ASML’s main customers are the world’s biggest microchip manufacturers,
including Taiwan’s TSMC and America’s Intel. The company also has a wide network
of industrial suppliers, which could be leveraged as well.
For Mistral, catering to European industrial applications could strengthen its
business. But it could also be seen as a tacit admission that in the global AI
race, Europe has to pick its battles.
Francesca Micheletti and Océane Herrerro contributed reporting.
U.S. President Donald Trump on Friday threatened to impose more tariffs against
the European Union after the bloc levied a €2.95 billion fine against Google for
violating anti-monopoly laws.
“As I have said before, my Administration will NOT allow these discriminatory
actions to stand,” Trump wrote in a Truth Social post.
The European Commission announced the penalty against Google Friday for abusing
its dominant position in the advertising technology market — a decision the
search giant vowed to appeal. The company now has 60 days to propose a remedy to
the EU, which has left a forced breakup on the table.
Trump and his administration, most notably Vice President JD Vance, have been
outspoken in criticizing European tech laws they say disproportionately harm
U.S. tech companies and chill free speech.
Trump’s comment Friday comes as his Justice Department prepares to go to trial
with Google later this month to resolve a similar case involving Google’s online
advertising monopoly. A federal judge already ruled Google has an illegal
monopoly in that case, and another trial will be held to determine a remedy,
which could include breaking up the company.
His comment also comes a day after Trump hosted a White House dinner with tech
executives, including Google CEO Sundar Pichai and co-founder Sergey Brin, in
which the president congratulated the company for avoiding a breakup after a
judge on Tuesday found the company had illegally monopolized the online search
market.
“I’m glad it’s over,” Pichai told Trump during the dinner. “Appreciate that your
administration had a constructive dialogue, and we were able to get it to some
resolution.”
Trump in his Friday post indicated he might order an investigation under Section
301 of the Trade Act of 1974, a little-used provision that allows the president
to impose trade restrictions if an investigation finds that a country is engaged
in a practice that is unjustifiable and burdens or restricts U.S. commerce.
“We cannot let this happen to brilliant and unprecedented American Ingenuity,”
Trump wrote of the EU’s fine.
“Google must now come forward with a serious remedy to address its conflicts of
interest, and if it fails to do so, we will not hesitate to impose strong
remedies,” said European Commission Executive Vice President Teresa Ribera in a
statement Friday.
The Commission’s multibillion-euro fine falls short of the €4.34 billion fine
the EU executive slapped on Google in 2018 over abuse of dominance related to
Android mobile devices, but is higher than the €2.42 billion fine the firm faced
for favoring its own comparison-shopping service in 2017.
The European Commission today fined Google €2.95 billion for abusing its
dominant position in the advertising technology market.
The American tech giant is alleged to have distorted the market for online ads
by favoring its own services to the detriment of competitors, advertisers and
online publishers, the EU executive said in a press release.
The search firm’s ownership of various parts of the digital ads ecosystem —
including the software that both advertisers and publishers use to buy online
ads — creates “inherent conflicts of interest,” according to the Commission.
“Google must now come forward with a serious remedy to address its conflicts of
interest, and if it fails to do so, we will not hesitate to impose strong
remedies,” said European Commission Executive Vice President Teresa Ribera in a
statement.
Google now has until early November — or 60 days — to tell the Commission how it
intends to resolve that conflict of interest and to remedy the alleged abuse.
The Commission said it would not rule out a structural divestiture of Google’s
adtech assets — but it “first wishes to hear and assess Google’s proposal.”
In 2023, the Commission issued a charge sheet to Google in which it concluded
that a mandatory divestment by the internet search behemoth of part of its
adtech operations might be the only way to effectively prevent the firm from
favoring its own services in the future.
The Commission had originally intended to deliver the fine Monday, before
Brussels’ trade czar Maroš Šefčovič intervened to halt the decision amid
continued tariff threats from U.S. President Donald Trump.
A U.S. federal judge on Tuesday refused to break up Google for monopolizing the
online search and ad markets, and instead imposed lesser restrictions on the
tech company’s day-to-day operations.
District Judge Amit Mehta in Washington rejected the Justice Department’s
request to force the $2.5 trillion company to spin off its Chrome browser and
Android products. While Google dodged the most severe possible outcome, the
judge ordered that the company must share some of its search data with
competitors, a penalty that was still narrowed in scope from what the government
asked for.
Breaking up Google would have immediately made this the largest antitrust remedy
in modern history, with the case drawing comparisons to the 1984 breakup of AT&T
and the government’s failed bid to split Microsoft in the early 2000s.
The decision offers a glimmer of hope for other tech companies facing potential
breakups of their businesses, including Meta, Amazon and Apple.
Mehta ruled last August that Google locked up 90 percent of the internet search
market through a partnership with Apple to be the default search provider on its
Safari web browser. Google had similar agreements with handset makers and mobile
carriers like Samsung and Verizon.
Mehta also found that Google illegally monopolized the market for ads displayed
next to search results.
That decision came after a 10-week bench trial, and set up what’s called a
remedy trial, which took place in April. It was during that second trial that
the Justice Department asked Mehta to break up the company to resolve its
illegal monopoly.
The case spanned two administrations, starting under President Donald Trump’s
first term, going to trial under former President Joe Biden, and now Google has
pledged to appeal in Trump’s second administration.
Google also faces another remedy trial in September for maintaining what a
federal judge ruled in April was an illegal monopoly in the roughly $300 billion
U.S. market for digital ads. Judge Leonie Brinkema of the Eastern District of
Virginia said Google maintained its monopoly by tying together its ad server
business, used by online publishers to manage ad sales on their sites, and its
ad exchange business, which auctions off digital advertising space on websites.
Google claimed it won half the case and vowed to appeal the other half.
Other major antitrust cases remain in the wings that could also drastically
reshape the way the tech industry operates in America and across the globe.
These cases and investigations come as lawmakers and regulators are worried
about tech companies cornering the market for artificial intelligence in a
similar fashion as what happened with e-commerce, social media and online
search.
Amazon is slated to go to trial in early 2027 over claims it squashes
competition to rip off sellers and consumers while peddling a subpar shopping
experience riddled with confusing advertisements.
Apple faces claims its billions of iPhones sold since 2007 were designed to lock
users into its products while raising costs for consumers, developers and
artists, among others. Depositions and discovery in that case are scheduled
through early 2027.
And chipmaker Nvidia is the subject of a Justice Department investigation over
its purchase of AI start-up Run:ai.
A group of 46 leaders of Europe’s largest companies are calling on Brussels to
pause the implementation of the Artificial Intelligence Act, in an open letter
on Thursday.
“We urge the Commission to propose a two-year ‘clock-stop’ on the AI Act before
key obligations enter into force,” the group of C-suite leaders wrote.
The letter was signed by companies including Airbus, TotalEnergies, Lufthansa,
ASML, Mistral and other giants across a wide range of industries.
The landmark tech regulation has come under scrutiny in Brussels as part of an
effort by European Union officials to cut red tape to boost its economy. The AI
Act in particular has faced intense lobbying pressure from American tech giants
in past months.
European Commission tech chief Henna Virkkunen told POLITICO this week that she
would decide by the end of August whether to pause implementation, should the
standards and guidelines needed to apply the law not be ready in time.
The executives lamented that “unclear, overlapping and increasingly complex
EU regulations” are disrupting their ability to do business in Europe. A pause
would signal to innovators and investors that the EU is serious about
simplification and competitiveness, they added.
The pause should apply both to provisions on general-purpose AI that take effect
on August 2 and to high-risk systems, which must comply with the rules from
August 2026, the letter said.
BRUSSELS — Were you thinking of sending your artificial intelligence helper to
an online meeting with the European Commission?
Think again.
The European Union’s executive institution has a new ground rule that bars
virtual assistants powered by artificial intelligence from participating in its
meetings. It imposed the rule for the first time on a call with representatives
from a network of digital policy support offices across Europe earlier this
month.
“No AI Agents are allowed,” said a slide on e-meeting etiquette at the start of
the presentation.
The Commission acknowledged it had imposed the ground rule for the first time
last week, declining to give more details on the policy or its reasons for
adopting it.
It’s a weird twist to a recent development in artificial intelligence
technology: the rise of so-called “AI agents.”
AI’s most popularized application so far seems to be chatbots like OpenAI’s
ChatGPT, which can generate text or information or perform a single task when
asked by a human. But AI agents push that boundary: They are assistants that can
tackle several tasks autonomously and interact in a virtual environment. They
act on users’ behalf to carry out a series of tasks, helping people in their
jobs or daily lives.
One of those tasks is joining an online meeting, taking notes or even reciting
certain information.
Quietly, Brussels has been gearing up for an era in which AI agents participate
in daily life and business.
The technology was mentioned in a wider Commission package on virtual worlds
published March 31. “AI agents are software applications designed to perceive
and interact with the virtual environment,” the text read. Agents can “operate
autonomously,” but their work is set by “specific predefined rules.”
Leading AI companies have all been experimenting with their own AI agent
applications. In January, OpenAI launched Operator, a research version of an AI
agent that can carry out several tasks in a separate web browser. Microsoft has
also been rolling out the possibility of creating agents in its AI “companion”
Copilot. French AI company Mistral also offers a platform to build agents.
So far, the technology isn’t covered by any specific legislation, but the AI
models that power the agents will have to abide by the EU’s binding AI Act.
The technology could also come into focus later this mandate, when the
Commission explores specific legislation on algorithmic management — the
practice of managing employees by algorithm.
BRUSSELS — The European Commission is finalizing a plan to make its artificial
intelligence rules more palatable to companies, as they scramble to adapt to
American tariffs that have sent shockwaves through the global economy.
The EU executive will launch a new “AI Continent” plan on Wednesday. According
to an undated draft of the plan obtained by POLITICO, the executive wants to
“streamline” rules and get rid of “obstacles” that it feels are slowing
companies in Europe down in competing with the U.S. and China.
The strategy accommodates concerns expressed by Big Tech companies and AI
front-runners, which have directed fierce lobbying attacks against the EU’s AI
Act and other pieces of digital legislation.
Those concerns of the tech industry were echoed by former Italian Prime Minister
Mario Draghi in his landmark report on competitiveness in Europe and were
included in the key priorities of Ursula von der Leyen’s second term as
Commission president. The Commission’s tech czar Henna Virkkunen told a global
AI conference in Paris in early February that the EU’s regulatory framework
should be more “innovation-friendly.”
Wednesday’s draft strategy is expected to say that the bloc needs to seize the
“opportunity to minimize the potential compliance burden” of the AI Act.
OpenAI’s Vice President for Global Affairs Chris Lehane told POLITICO in an
interview last week that Brussels needs to keep its rules “simple and
predictable.” Lehane is in Brussels this week to meet with EU policymakers — a
signal that leading AI companies are watching Wednesday’s announcement closely.
The OpenAI chief lobbyist said he had seen “a shift in mindset of how people are
thinking about AI and the opportunity” at the summit in Paris in February. But
he added that the question is now whether Brussels “can get the strategy right.”
According to the latest tally, only 13 percent of European companies have
adopted AI.
Lehane said that besides having “simple rules,” the EU should also be able to
build its own AI infrastructure and launch an effort to retrain European
workers.
By 2030, the EU should increase its computing power by 300 percent, and 100
million Europeans should have acquired AI skills, OpenAI said on Monday in
recommendations called the “EU’s economic blueprint” targeted at EU
policymakers. It also pitched a €1 billion fund for AI pilot projects.
SHOW US THE OBSTACLES
The EU’s executive in its plan on Wednesday wants to ask the tech industry to
“identify where regulatory uncertainty creates obstacles” to developing and
deploying AI.
The draft text listed measures to boost the computing power and high-quality
data needed to train AI models, as well as the industry’s uptake of AI and
workers’ AI skills.
Brussels is also set to make progress on its effort to build five “AI
gigafactories” — a €20 billion promise made by Commission President Ursula von
der Leyen during the Paris AI Action Summit. Wednesday’s plan includes a call
for EU
countries to invest in or host such gigafactories — a first step for gauging
interest before a more formal procedure kicks off at the end of this year.
Those gigafactories are meant to train the most complex AI models and will have
four times as many processors as today’s most powerful supercomputers.
The Commission is also paving the way to expand Europe’s cloud and data center
capacity. The draft stated that Brussels aims to “triple” Europe’s data center
capacity in the next five to seven years.
It labeled Europe’s current reliance on “non-EU infrastructure,” notably
American hyperscalers like Amazon, Google and Microsoft, as a concern for
industry and governments.
Austrian privacy group Noyb on Thursday filed a complaint against ChatGPT for
making up information about individuals, including falsely describing one user
as a child murderer.
The popular artificial intelligence chatbot ChatGPT, like other chatbots, has a
tendency to “hallucinate,” generating wrong information about people because it
draws on incorrect data or makes faulty inferences from it.
In the case underpinning the complaint, a user called Arve Hjalmar Holmen asked
the chatbot in August 2024 if it had any information about him, after which
ChatGPT presented a false story that he murdered two of his children and
attempted to murder his third son. The response contained correct facts like the
number and gender of his children and the name of his home town.
“The fact that someone could read this output and believe it is true is what
scares me the most,” Hjalmar Holmen said in a statement shared by Noyb.
OpenAI has since updated ChatGPT to search for information on the internet when
asked about individuals, meaning it would in theory no longer hallucinate about
individuals, Noyb said. But it added that the incorrect information may still be
part of the AI model’s dataset.
In its complaint filed with Norway’s data protection authority (Datatilsynet),
it asked the authority to fine OpenAI and order it to delete the defamatory
output and fine-tune its model to eliminate inaccurate results.
Noyb said that by knowingly allowing ChatGPT to produce defamatory results,
OpenAI is violating the data accuracy principle of the General Data Protection
Regulation (GDPR).
ChatGPT presents users with a disclaimer at the bottom of its main interface
that says that the chatbot may produce false results. But Noyb data protection
lawyer Joakim Söderberg said that “isn’t enough.”
“You can’t just spread false information and in the end add a small disclaimer
saying that everything you said may just not be true,” he said. “The GDPR is
clear. Personal data has to be accurate. And if it’s not, users have the right
to have it changed to reflect the truth.”
The New York Times previously reported that “chatbots invent information at
least 3 percent of the time — and as high as 27 percent.” Other news reports
detail how ChatGPT has made up stories about people including allegations of
sexual assault or bribery.
Noyb filed a separate complaint with Austria’s data protection authority last
year over the fact that ChatGPT made up founder Max Schrems’ birthday.
Europe’s data protection authorities formed a ChatGPT task force in 2023 to
coordinate privacy-related enforcement actions against the platform, which was
widened to a more general AI task force earlier this year.
OpenAI did not respond to a request for comment in time for publication.