Tag - Artificial Intelligence

Elon Musk denies Grok generates illegal content
BRUSSELS — Elon Musk has denied that X’s artificial intelligence tool Grok generates illegal content in the wake of AI-generated undressed and sexualized images on the platform.

In a fresh post Wednesday, X’s powerful owner sought to argue that users — not the AI tool — are responsible and that the platform is fully compliant with all laws. “I[’m] not aware of any naked underage images generated by Grok,” he said. “Literally zero.”

“When asked to generate images, [Grok] will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state,” he added. “There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”

Musk’s remarks follow heightened scrutiny by both the EU and the U.K., with Brussels describing the appearance of nonconsensual, sexually explicit deepfakes on X as “illegal,” “appalling” and “disgusting.” The U.K.’s communications watchdog, Ofcom, said Monday that it had launched an investigation into X. On Wednesday, U.K. Prime Minister Keir Starmer said the platform is “acting to ensure full compliance” with the relevant law but said the government won’t “back down.”

The EU’s tech chief Henna Virkkunen warned Monday that X should quickly “fix” its AI tool, or the platform would face consequences under the bloc’s platform law, the Digital Services Act. The Commission last week ordered X to retain all of Grok’s data and documents until the end of the year.

Just 11 days ago, Musk said that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content” in response to a post about the inappropriate images. The company’s safety team posted a similar line, warning that it takes action against illegal activity, including child sexual abuse material.
UK
Intelligence
Services
Artificial Intelligence
Technology
UK nudification app ban won’t apply to Elon Musk’s Grok
LONDON — The U.K. government’s upcoming ban on nudification apps won’t apply to general-purpose AI tools like Elon Musk’s Grok, according to Tech Secretary Liz Kendall.

The ban will “apply to applications that have one despicable purpose only: to use generative AI to turn images of real people into fake nude pictures and videos without their permission,” Kendall said in a letter to Science, Innovation and Technology Committee chair Chi Onwurah published Wednesday.

Grok, which is made by Musk’s AI company xAI but is also accessible inside his social media platform X, has sparked a political uproar because it has been used to create a wave of sexualized nonconsensual deepfakes, many targeting women and some children. But Grok can be used to generate a wide range of images and has other functionalities, including text generation, so does not have the sole purpose of generating sexualized or nude images.

The U.K. government announced its plan to ban nudification apps in December, before the Grok controversy took off, but Kendall has given it as an example of ways that the government is cracking down on AI-generated intimate image abuse. Kendall said the nudification ban will be put into effect using the Crime and Policing Bill, which is currently passing through committee stage.

The Department for Science, Innovation and Technology did not immediately respond when contacted by POLITICO for comment.

The U.K.’s media regulator Ofcom launched an investigation into X on Monday to determine whether the platform has complied with its duties under the Online Safety Act to protect British users from illegal content. The U.K. government has said Ofcom has its full support to use whatever enforcement tools it deems fit, which could include blocking X in the U.K. or issuing a fine.
Social Media
Artificial Intelligence
Technology
Online safety
Technology UK
Borrell: Cutting back election monitoring would be a grave mistake
Josep Borrell is the former high representative of the European Union for Foreign Affairs and Security Policy and former vice-president of the European Commission.

In too many corners of the world — including our own — democracy is losing oxygen. Disinformation is poisoning debate, authoritarian leaders are staging “elections” without real choice, and citizens are losing faith that their vote counts. Even as recently as the Jan. 3 U.S. military intervention in Venezuela, we have seen opposition leaders who are internationally recognized as having the democratic support of their people be sidelined.

None of this is new. Having devoted much of his work to critiquing the absolute concentration of power in dictatorial figures, the long-exiled Paraguayan writer Augusto Roa Bastos found that when democracy loses ground, gradually and inexorably a singular and unquestionable end takes its place: power. And it shapes the leader as a supreme being, one who needs no higher democratic processes to curb their will. This is the true peril of the backsliding we’re witnessing in the world today.

A few decades ago, the tide of democracy seemed unstoppable, bringing freedom and prosperity to an ever-greater number of countries. And as that democratic wave spread, so too did the practice of sending impartial international observers to elections as a way of supporting democratic development. In both boosting voter confidence and assuring the international community of democratic progress, election observation has been one of the EU’s quiet success stories for decades.

However, as international development budgets shrink, some are questioning whether this practice still matters. I believe this is a grave mistake. Today, attacks on the integrity of electoral processes, the subtle — or brazen — manipulation of votes and narratives, and the absolute answers given to complex problems are allowing Roa Bastos’ concept of power to infiltrate our democratic societies.
And as the foundations of pluralism continue to erode, autocrats and autocratic practices are rising unchecked. By contrast, ensuring competitive, transparent and fair elections is the antidote to authoritarianism.

To that end, the bloc has so far deployed missions to observe more than 200 elections in 75 countries. And determining EU cooperation and support for those countries based on the conclusions of these missions has, in turn, incentivized them to strengthen democratic practices. The impact is tangible. Our 2023 mission in Guatemala, for example, which was undertaken alongside the Organization of American States and other observer groups, supported the credibility of the country’s presidential election and helped scupper malicious attempts to undermine the result.

And yet, many now argue that in a world of hybrid regimes, cyber threats and political polarization, international observers can do little to restore confidence in flawed processes — and that other areas, such as defense, should take priority. I don’t agree. Now, more than ever, is the time to stick up for democracy — the most fundamental of EU values. As many of the independent citizen observer groups we view as partners lose crucial funding, it is vital we continue to send missions. In fact, cutting back support would be a false economy, amounting to silence precisely when truth and transparency are being drowned out.

I myself observed elections as chair of the European Parliament’s Development Committee. I saw firsthand how EU observation has developed well beyond spotting overt ballot stuffing to detecting the subtleties of unfair candidate exclusions, tampering with the tabulation of results behind closed doors and, more recently, the impact of online manipulation and disinformation.
In my capacity as high representative, I also decided to send observation missions to controversial countries, including Venezuela. Despite opposition from some, our presence there during the 2021 local elections was greatly appreciated by the opposition. Our findings sparked national and international discussions over electoral conditions, democratic standards and necessary changes. And when the time comes for new elections once more — as it surely must — the presence of impartial international observers will be critical to restoring the confidence of Venezuelans in the electoral process.

At the same time, election observation is being actively threatened by powers like Russia, which promote narratives opposed to electoral observations carried out by the organizations that endorse the Declaration of Principles on International Election Observation (DoP) — a landmark document that set the global standard for impartial monitoring. A few years ago, for instance, a Russian parliamentary commission sharply criticized our observation efforts, pushing for the creation of alternative monitoring bodies that, quite evidently, fuel disinformation and legitimize authoritarian regimes — something that has also happened in Azerbaijan and Belarus.

When a credible international observation mission publishes a measured and facts-based assessment, it becomes a reference point for citizens and institutions alike. It provides an anchor for dialogue, a benchmark against which all actors can measure their conduct. Above all, it signals to citizens that the international community is watching — not to interfere but to support their right to a meaningful choice.

Of course, observation must evolve as well. We now monitor not only ballot boxes but also algorithms, online narratives and the influence of artificial intelligence. We are strengthening post-electoral follow-up and developing new tools to verify data and detect manipulation, exploring the ways in which AI can be a force for good.
In line with this, last month I lent my support to the DoP’s endorsers — including the EU, the United Nations, the African Union, the Organization of American States and dozens of international organizations and NGOs — as they met at the U.N. in Geneva to mark the declaration’s 20th anniversary, and to reaffirm their commitment to strengthen election observation in the face of new threats and critical funding challenges. Just days later we learned of the detention of Dr. Sarah Bireete, a leading non-partisan citizen observer, ahead of the Jan. 15 elections in Uganda.

These recent events are a wake-up call to renew this purpose. Election observation is only worthwhile if we’re willing to defend the principle of democracy itself. As someone born into a dictatorship, I know all too well that democratic freedoms cannot be taken for granted. In a world of contested truths and ever-greater power plays, democracy needs both witnesses and champions. The EU, I hope, will continue to be among them.
Elections
Aid and development
Democracy
NGOs
Kremlin
Meta taps former Trump adviser to be president, vice chair
Meta named former Trump adviser Dina Powell McCormick to serve as president and vice chair Monday, further cementing the company’s growing ties to Republicans and President Donald Trump’s White House.

In addition to a long career on Wall Street, Powell McCormick served as Trump’s deputy national security adviser during his first term. She was also a member of the George W. Bush administration. She first joined Meta’s board last April, part of a broader play by the social media and artificial intelligence giant to hire Republicans following Trump’s election.

In a statement, Meta CEO Mark Zuckerberg praised Powell McCormick’s “experience at the highest levels of global finance, combined with her deep relationships around the world, [which] makes her uniquely suited to help Meta manage this next phase of growth.”

Rightward trend: Powell McCormick’s time in global finance — she spent 16 years as a partner at Goldman Sachs and was most recently a top executive at banking company BDT & MSD Partners — could be a major asset to Meta as it raises hundreds of billions of dollars to build out data centers and other AI-related infrastructure. But her GOP pedigree and proximity to Trump likely played a significant role in her hiring as well.

Since Trump’s election, Meta has worked to curry favor with Republicans in the White House and on Capitol Hill. The company elevated former GOP official Joel Kaplan to serve as global affairs lead last January, simultaneously tapping Kevin Martin, a former Republican chair of the Federal Communications Commission, as his No. 2. Under pressure from Republicans, Meta last year also rolled back many of its rules related to content moderation. In 2024, the company apologized to congressional Republicans — specifically Rep. Jim Jordan (R-Ohio), chair of the House Judiciary Committee — for removing content that contained disinformation about the Covid-19 pandemic.
A Meta spokesperson declined to comment when asked whether Powell McCormick’s ties to Trump and Republicans played a role in her hiring.

Trump thumbs up: In a Truth Social post Monday, Trump congratulated Powell McCormick and said Zuckerberg made a “great choice.” The president called her “a fantastic, and very talented, person, who served the Trump Administration with strength and distinction!”
Intelligence
Media
Security
Social Media
Artificial Intelligence
‘Unthinkable behavior’: Von der Leyen slams Musk’s AI for undressing photos of women
European Commission President Ursula von der Leyen blasted Elon Musk’s platform X over the spread of sexually explicit deepfakes created using its AI chatbot Grok.

“I am appalled that a tech platform is enabling users to digitally undress women and children online. This is unthinkable behavior. And the harm caused by these deepfakes is very real,” von der Leyen said in an interview with multiple European media outlets, including Reuters and Corriere della Sera. “We will not be outsourcing child protection and consent to Silicon Valley. If they don’t act, we will,” she warned.

Since the beginning of January, thousands of women and teenagers, including public figures, have reported that their photos published on social media have been “undressed” and put in bikinis by Grok at the request of users. The deepfake tool has prompted investigations from regulators across Europe, including in Brussels, Dublin, Paris and London.

The European Commission ordered X on Thursday to retain “all internal documents and data relating to Grok” — an escalation of the ongoing investigation into X’s content moderation policies — after calling the nonconsensual, sexually explicit deepfakes “illegal,” “appalling” and “disgusting.” In response, X made its controversial AI image generation feature available only to users with paid subscriptions. European Commission spokesperson Thomas Regnier said that limiting the tool’s use to paying subscribers did not mean an end to the EU’s investigation.

The scandal has emerged as a fresh test of the EU’s resolve to rein in Musk and U.S. Big Tech firms. Only a month earlier, Brussels fined X €120 million for breaching the bloc’s landmark platform law, the Digital Services Act (DSA). The fine sparked a swift and forceful reaction from Washington, with the U.S. administration imposing a travel ban on the EU’s former digital commissioner and chief architect of the DSA, Thierry Breton.
X did not immediately respond to POLITICO’s request for comment about von der Leyen’s criticism.
Politics
Social Media
Artificial Intelligence
Technology
Data
Elon Musk’s X probed by UK watchdog over Grok deepfakes
LONDON — The U.K.’s communications watchdog Ofcom said Monday it has launched an investigation into Elon Musk’s social media platform X over reports that its AI chatbot Grok is producing non-consensual sexualized deepfakes of women and children.

The investigation will ascertain whether the platform has complied with its duties under the U.K.’s Online Safety Act to protect British users from illegal content.

“There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people — which may amount to intimate image abuse or pornography — and sexualized images of children that may amount to child sexual abuse material,” Ofcom said in a press release.

This is a developing story.
Social Media
Artificial Intelligence
Technology
Communications
Safety
UK’s deputy prime minister raises X deepfake deluge with JD Vance
LONDON — Britain’s Deputy Prime Minister David Lammy raised the recent flood of AI-generated sexualized images of women and children on X with JD Vance when the two met in Washington yesterday, two people familiar with the meeting told POLITICO.

One person familiar with the meeting said that Lammy raised the issue with Vance, explained the U.K.’s position, and repeated what Prime Minister Keir Starmer said about it. A second person familiar with the meeting said it had gone well, and that Vance seemed receptive to Lammy’s points. Both people were granted anonymity to speak freely about the meeting, which they weren’t authorized to discuss publicly.

Vance’s team didn’t immediately respond to requests for comment. A U.K. government spokesperson declined to comment.

The flood of nonconsensual images on X, created using the platform’s generative AI chatbot feature Grok, attracted the attention of the U.K.’s media regulator Ofcom, which said it made “urgent contact” with X on Monday to determine whether an investigation under the U.K.’s Online Safety Act is warranted. On Friday an Ofcom spokesperson said: “We urgently made contact on Monday and set a firm deadline of today to explain themselves, to which we have received a response. We’re now undertaking an expedited assessment as a matter of urgency and will provide further updates shortly.”

The U.S. administration has previously criticized the U.K.’s online safety laws, saying they limit freedom of expression. The U.K. government said this week that Ofcom had its full backing, and Prime Minister Keir Starmer said on Thursday: “It’s disgraceful, it’s disgusting, and it’s not to be tolerated. X has got to get a grip of this, and Ofcom has our full support to take action in relation to this.” “This is wrong, it’s unlawful, we’re not going to tolerate it. I’ve asked for all options to be on the table,” Starmer said.
In a statement issued on Sunday, X said: “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

On Friday X restricted the function that allows users to produce AI-generated material so that only paying subscribers can access it. X said in a statement that limiting the feature to paid subscribers “helps ensure responsible use while we continue refining things.”

The U.K. government disagrees. “That simply turns an AI feature that allows the creation of unlawful images into a premium service,” a spokesperson for the prime minister said on Friday.

But it’s not only AI-generated images on X that are the problem. Children’s protection watchdog the Internet Watch Foundation said on Wednesday it had found evidence of Grok generating child sexual abuse material (CSAM) which was being circulated on a dark web forum.

X’s CEO and owner, tech billionaire Elon Musk, has previously attacked the U.K.’s Labour government and was once a close adviser of President Donald Trump. Although Musk feuded with the Trump administration in the summer, by October there were signs his relationship with Trump was improving, and The Washington Post reported last month that Vance brokered a truce between Musk and Trump.

Emilio Casalicchio contributed reporting.
Media
Services
Social Media
Artificial Intelligence
Technology
UK slams ‘insulting’ X move to paywall deepfake tool
LONDON — U.K. Prime Minister Keir Starmer attacked X’s decision to make its controversial AI image generation feature available only to users with paid subscriptions.

In recent weeks X’s AI image generation feature has been used to produce a flood of nonconsensual sexualized images, including of women and children, drawing condemnation from lawmakers around the world.

X said in a statement that limiting the feature to paid subscribers “helps ensure responsible use while we continue refining things.”

The U.K. government disagrees. “That simply turns an AI feature that allows the creation of unlawful images into a premium service,” a spokesperson for the prime minister said on Friday. “It’s not a solution. In fact, it’s insulting to victims of misogyny and sexual violence. What it does prove is that X can move swiftly when it wants to do so,” they added.

X has been approached for comment.

Starmer said on Thursday that the issue of sexualized deepfakes proliferating on X was “disgraceful, it’s disgusting, and it’s not to be tolerated. X has got to get a grip of this, and Ofcom has our full support to take action in relation to this.”

The U.K.’s media regulator Ofcom said on Monday it was in urgent contact with X to ascertain whether an investigation under the Online Safety Act is warranted.
Media
Artificial Intelligence
Technology
Safety
Online safety
Keir Starmer takes aim at Elon Musk’s X over Grok deepfakes
LONDON — U.K. Prime Minister Keir Starmer on Thursday vowed to take action against Elon Musk’s social media platform X after its Grok artificial intelligence system produced a flood of non-consensual sexually explicit deepfakes that included depictions of minors.

“It’s disgraceful, it’s disgusting, and it’s not to be tolerated. X has got to get a grip of this, and Ofcom has our full support to take action in relation to this,” Starmer said in a broadcast interview after thousands of nude deepfakes were published on X.

“This is wrong, it’s unlawful, we’re not going to tolerate it. I’ve asked for all options to be on the table,” he told Greatest Hits Radio. “We will take action on this because it is simply not tolerable,” he added.

Earlier this week the U.K.’s communications regulator Ofcom said it had made “urgent contact” with X to establish whether there are grounds to investigate the platform under the U.K.’s Online Safety Act.

Technology Secretary Liz Kendall told MPs last year that current U.K. online safety laws do not cover all generative AI chatbots and she is looking at whether new legislation is required. The Information Commissioner’s Office, the U.K. data watchdog, confirmed yesterday that it too is in touch with X amid concerns people’s personal data is being misused.

Musk has historically been highly critical of Starmer. Last January the tech billionaire made a series of unsubstantiated claims about the British PM’s role as chief prosecutor in the grooming gang scandal, and in summer 2024 suggested “civil war is inevitable” in the U.K.
Politics
Artificial Intelligence
Technology
Communications
Data
Pro-Palestinian activists pressure UK nursing union over investment policy
LONDON — The union representing British nurses is under fire from some of its own members over what they say is an opaque investment strategy linked to companies investing in Israel’s occupation of the Palestinian Territories.

A report sent to Royal College of Nursing (RCN) management by activist group Nurses for Palestine and NGO Corporate Watch, and obtained by POLITICO, argues that the union’s choice of investment managers Legal & General and Sarasins is at odds with its own ethical investment policy. Members of the group say they don’t know exactly which shares the union holds in its portfolio, because the union’s management hasn’t informed them.

The report points to a list of companies held by the RCN’s fund managers, including U.S. tech firm Palantir and Israeli arms-maker Elbit Systems, which activists say should be enough for the union to put its money elsewhere.

A spokesperson for the RCN declined to say which companies were in its portfolio when contacted by POLITICO. The group said it was “committed to social responsibility” and stressed that it did not invest in weapons manufacturing or any “ethically unacceptable practices.”

‘TRUE ETHICAL INVESTMENT’

The Nurses for Palestine and Corporate Watch report draws on a United Nations investigation into what its human rights council calls Israel’s “Economy of Genocide” to identify companies that activists say link fund managers to Israel’s occupation of the Palestinian Territories. The International Court of Justice is currently considering allegations of genocide against Israel, while an independent U.N. inquiry found Israel was committing genocide against the Palestinians. Israel has adamantly rejected those allegations and argued it upholds its obligations under international law.

The companies named in the U.N. report include U.S. tech firms that provide Israel with cloud and artificial intelligence technology.
These are among the most widely held shares in the world and are mainstays in the portfolios offered by popular fund managers, which often track the performance of the stock market.

A Palantir spokesperson told POLITICO the company rejected its inclusion in the U.N. report and referred to previous statements clarifying its partnership with the Israeli military.

The report — which follows two open letters whose signatories include 100 RCN members — does not present evidence that the union itself holds shares in companies more directly involved in the arms trade. But it argues that “true ethical investment” should look beyond investors’ own portfolios and at their fund managers’ “wider practices.”

The RCN spokesperson said: “Despite the globalised nature of investments, our indirect exposure — to companies that we may not directly invest in — is a fraction of a single percentage.”

According to its latest annual report, the RCN Group (including the union and its charitable foundation) had a combined investment portfolio worth £143.6 million as of Dec. 31, 2024.

Sarasins said in a statement that it takes a “rigorous approach to identifying and assessing any potential exposure to human-rights risks across the many companies we invest in on behalf of our clients.” “The situation in Gaza is evolving, and we are in the process of considering targeted engagement approaches and discussing these with expert contacts and stakeholders,” the firm said.

A spokesperson for L&G said all of its investments were in line with international laws and regulations and that any holdings in the companies named in the report were part of “broad, global market indices.”
Intelligence
Military
NGOs
Rights
Weapons