On Sunday, Palantir CEO Alex Karp announced that the company, which counts Peter Thiel as its chairman and does work for US Immigration and Customs Enforcement, will have a “Neurodivergent Fellowship.” The X post sharing this news notably did not have captions, making it inaccessible to some disabled people.
> While cross-country skiing this morning, Dr. Karp decided to launch a new
> program: The Neurodivergent Fellowship.
>
> If you find yourself relating to him in this video — unable to sit still, or
> thinking faster than you can speak — we encourage you to apply.
>
> The final round of… pic.twitter.com/2Xdrc13uj5
>
> — Palantir (@PalantirTech) December 7, 2025
Neurodivergent people face barriers to employment in every industry, due to biases about disability and failures to provide adequate accommodations. Disabled people can also very much participate in technofascism and in lateral ableism against other disabled people—as I previously reported, Elon Musk is a very strong example of this—and this fellowship will do nothing to break down the barriers that neurodivergent people face.
Virginia Tech professor Ashley Shew, author of Against Technoableism, noted to
me that some disabled people being seen as better than other disabled people is
not new. Hans Asperger, after all, chose which autistic people were worth saving
and which children were sent to their deaths under the Nazi regime.
“Disabled people know keenly the dangers of surveillance technology, about what
it means to be reduced to data and misread, and the societal impetus to
scrutinize our lives and lived expertise,” Shew told me. “It’s a terrible shame
that disability gets the most celebration and investment when it is coopted by
corporate and industrial interests.”
“Being a disabled token for a morally questionable industry is by no means a
step toward disability liberation or true inclusion of any sort, but rather
leads us in the other direction,” Shew added.
University of North Carolina at Charlotte assistant professor Damien P. Williams, who researches how values shape technologies, concurs with Shew that this fellowship is deeply harmful.
“A ‘neurodivergent fellowship’ at a corporation like Palantir isn’t meaningful
inclusion or representation so much as it’s an exercise in having an often
punitively surveilled population be complicit in making platforms of weaponized
surveillance, to build and be the systems and tools of their own and others’
oppression,” Williams said.
Looking at how the job is described, Seton Hall University assistant professor Jess Rauchberg—who researches the cultural impacts of digital media technologies—finds that the fellowship plays into harmful tropes about neurodivergent people.
“Some of the language the job call uses about neurodivergent people as ‘able to
see past performative ideologies’ reinforces really dangerous rhetoric that
disabled people aren’t human,” Rauchberg told me. “It also presents
neurodivergent people using the supercrip trope: that these are disabled people
whose ‘savant’ status makes them not like other disabled people, especially
intellectually and developmentally disabled people.”
Shew, in general, feels “pretty gross about most neurodiversity hiring programs,” noting that these programs tend to misunderstand the neurodivergence umbrella and focus narrowly on autism.
“These programs are rarely about thinking bigger about how to include people
with a range of disabilities and neurotypes in all places and still reify
impairment models in how they describe the hired workers, which too easily
results in situations where people hired in this model cannot meaningfully
advance and are seen in specific and limiting ways,” Shew continued.
In the summer of 2019, a group of Dutch scientists conducted an experiment to
collect “digital confessions.” At a music festival near Amsterdam, the
researchers asked attendees to share a secret anonymously by chatting online
with either a priest or a relatively basic chatbot, assigned at random. To their
surprise, some of the nearly 300 participants offered deeply personal
confessions, including of infidelity and experiences with sexual abuse. While
what they shared with the priests (in reality, incognito scientists) and the
chatbots was “equally intimate,” participants reported feeling more “trust” in
the humans, but less fear of judgment with the chatbots.
This was a novel finding, explains Emmelyn Croes, an assistant professor of
communication science at Tilburg University in the Netherlands and lead author
of the study. Chatbots were then primarily used for customer service or online
shopping, not personal conversations, let alone confessions. “Many people
couldn’t imagine they would ever share anything intimate to a chatbot,” she
says.
Enter ChatGPT. In 2022, three years after Croes’ experiment, OpenAI launched its
artificial intelligence–powered chatbot, now used by 700 million people
globally, the company says. Today, people aren’t just sharing their deepest
secrets with virtual companions, they’re engaging in regular, extended
discussions that can shape beliefs and influence behavior, with some users
reportedly cultivating friendships and romantic relationships with AIs. In
chatbot research, Croes says, “there are two domains: There’s before and after
ChatGPT.”
Take r/MyBoyfriendIsAI, a Reddit community where people “ask, share, and post
experiences about AI relationships.” As MIT Technology Review reported in
September, many of its roughly 30,000 members formed bonds with AI chatbots
unintentionally, through organic conversations. Elon Musk’s Grok offers anime
“companion” avatars designed to flirt with users. And “Friend,” a new, wearable
AI product, advertises constant companionship, claiming that it will “binge the
entire [TV] series with you” and “never bail on our dinner plans”—unlike flaky
humans.
The chatbots are hardly flawless. Research shows they are capable of talking
people out of conspiracy theories and may offer an outlet for some psychological
support, but virtual companions have also reportedly fueled delusional and
harmful thinking, particularly in children. At least three US teenagers have
killed themselves after confiding in chatbots, including ChatGPT and
Character.AI, according to lawsuits filed by their families. Both companies have
since announced new safety features, with Character.AI telling me in an email
that it intends to block children from engaging in “open-ended chat with AI” on
the platform starting in late November. (The Center for Investigative Reporting,
which produces Mother Jones, is suing OpenAI for copyright violations.)
As the technology barrels ahead—and lawmakers grapple with how to regulate
it—it’s become increasingly clear just how much a humanlike string of words can
captivate, entertain, and influence us. While most people don’t initially seek
out deep engagement with an AI, argues Vaile Wright, a psychologist and
spokesperson for the American Psychological Association, many AIs are designed
to keep us engaged for as long as possible to maximize the data we provide to
their makers. For instance, OpenAI trains ChatGPT on user conversations (though
there is an option to opt out), while Meta intends to run personalized ads based
on what people share with Meta AI, its virtual assistant. “Your data is the
profit,” Wright says.
Some advanced AI chatbots are also “unconditionally validating” or sycophantic,
Wright notes. ChatGPT may praise a user’s input as “insightful” or “profound,”
and use phrases like, I’m here for you—an approach she argues helps keep us
hooked. (This behavior may stem from AI user testing, where a chatbot’s
complimentary responses often receive higher marks than neutral ones, leading it
to play into our biases.) Worse, the longer someone spends with an AI chatbot,
some research shows, the less accurate the bot becomes.
People also tend to overtrust AI. Casey Fiesler, a professor who studies
technology ethics at the University of Colorado, Boulder, highlights a 2016
Georgia Tech study in which participants consistently followed an error-prone
“emergency guide robot” while fleeing a building during a fake fire. “People
perceive AI as not having the same kinds of problems that humans do,” she says.
At the same time, explains Nat Rabb, a technical associate at MIT’s Human
Cooperation Lab who studies trust, the way we develop trust in other
humans—perception of honesty, competence, and whether someone shares an
in-group—can also dictate our trust in AI, in a way it doesn’t with other technologies. “Those are weird categories to apply to a thermostat,” he says, “but they’re not that weird when it comes to generative AI.” For instance, he says, research from his
colleagues at MIT indicates that Republicans on X are more likely to use Grok to
fact-check information, while Democrats are more likely to go with Perplexity,
an alternative chatbot.
That’s not to say AI chatbots can’t be used for good. For example, Wright suggests they
could serve as a temporary stand-in for mental health support when human help
isn’t readily accessible—say, a midnight panic attack—or to help people practice
conversations and build social skills before trying them out in the real world.
But, she cautions, “it’s a tool, and it’s how you use the tool that matters
most.” Eugene Santos Jr., an engineering professor at Dartmouth College who
studies AI and human behavior, would like to see developers better define how
their chatbots ought to be used and set guidelines, rather than leaving it
open-ended. “We need to be able to lay down, ‘Did I have a particular goal? What
is the real use for this?’”
Some say rules could help, too. At a congressional hearing in September, Wright
implored lawmakers to consider “guardrails,” which she told me could include
things like stronger age verification, time limits, and bans on chatbots posing
as therapists. The Biden administration introduced dozens of AI regulations in
2024, but President Donald Trump has committed to “removing red tape” he claims
is hindering AI innovation. Silicon Valley leaders, meanwhile, are funding a new
PAC to advocate for AI industry interests, the Wall Street Journal reports, to
the tune of more than $100 million.
In short, we’re worlds away from the “digital confessions” experiment. When I
asked Croes what a repeat of her study might yield, she noted that the basic
parameters aren’t so different: “You are still anonymous. There’s still no fear
of judgment,” she says. But today’s AI would likely come across as more
“understanding,” and “empathetic”—more human—and evoke even deeper confessions.
That aspect has changed. And, you might say, so have we.
I began this morning, as I do every morning, by reading my daughter a book.
Today it was Arthur Dorros’ Abuela, illustrated by Elisa Kleven. Abuela is a
sweet story about a girl who imagines that she and her grandmother leap into the
sky and soar around New York City. Dorros does an elegant job weaving Spanish
words and phrases throughout the text, often allowing readers to glean their
meaning rather than translating them directly. When Rosalba, the bilingual
granddaughter, discovers she can fly, she calls to her grandmother, “Ven,
Abuela. Come, Abuela.” Her Spanish-speaking grandmother replies simply, “Sí,
quiero volar.” Their language use reflects who they are—a move that plenty of
authors who write for adults fail to make.
Abuela was one of my favorite books growing up, and it’s one of my 2-year-old’s
favorites now. (And yes, we’re reading my worn old copy.) She loves the idea of
a flying grandma; she loves learning bits of what she calls Fanish; she loves
the bit when Rosalba and Abuela hitch a ride on an airplane, though she worries
it might be too loud. Most of all, though, she loves Kleven’s warm yet antic
illustrations, which capture urban life in nearly pointillist detail. Every page
gives her myriad things to look for and gives us myriad things to discuss.
(Where are the dogs? What does Rosalba’s tío sell in his store? Why is it scary
when airplanes are loud?) I’ve probably read Abuela 200 times since we swiped it
from my parents over the summer, and no two readings have been the same.
I don’t start all my days with books as rich as Abuela, though. Sometimes, my
daughter chooses the books I wish she wouldn’t: ones that have wandered into our
house as gifts, or in a big stack someone was giving away, and that I have yet
to purge. These books have garish, unappealing computer-rendered art. Some of
them have nursery rhymes as text, and the rest have inane rhymes that don’t
quite add up to a story. One or two are Jewish holiday-oriented, and a couple
more are tourist souvenirs. Not a single one of these books has a named author
or illustrator. None of their publishers, all of which are quite small,
responded to my requests for interviews, but I strongly suspect that these books
were written and generated by AI—and that I’m not supposed to guess.
The maybe-AI book that has lasted the longest in our house is a badly
illustrated Old MacDonald Has a Farm. Its animals are inconsistently pixelated
around the edges; the pink circles on its farmer’s cheeks vary significantly in
size from page to page, and his hands appear to have second thumbs instead of
pinkies. All of these irregularities are signs of AI, according to the writer
and illustrator Karen Ferreira, who runs an author coaching program called
Children’s Book Mastery. On her program’s site, she warns that because AI cannot
create a series of images using the same figures, it generates characters that
are—even if only subtly—dissimilar from page to page. Noting this in our Old
MacDonald, I checked to see whether it was copyrighted, because the US Copyright Office has ruled out copyright for images created by machine learning. Where
other board books have copyright symbols and information—often the illustration
and text copyright holders are different—this one reads only, “All rights
reserved.” It’s unclear what these “rights” refer to, given that there is no
named holder; it’s possible that the publisher is gesturing at the design, but
equally possible that the statement is a decoy with no legal meaning.
> What makes a good children’s book, and how much does it matter if a children’s
> book is good?
I have many objections to maybe-AI books like this one. They’re ugly, whereas
all our other children’s books are whimsical, beautiful, or both. They aren’t
playful or sly or surprising. Their prose has no rhythm, in contrast to, let’s
say, Sandra Boynton’s Barnyard Dance! and Dinosaur Dance!, which have beats that
inspire toddlers to leap up and perform. (The author-illustrator Mo Willems has
said children’s books are “meant to be played, not just to be read.”) They don’t
give my daughter much to notice or me much to riff on, which means she gets sick
of them quickly. If she chooses one, she’s often done with it in under a minute.
It gives me a vague sting of guilt to donate such uninspiring books, but I still
do, since the only other option is the landfill. I imagine they’ll end up there
anyway.
But I should admit that I also dislike the books that trigger my AI radar—that
uncanny-valley tingle you get when something just seems inhuman—out of bias. I
am a writer and translator, a person whose livelihood depends entirely on living in a society that values human creativity, and just the
thought of a children’s book generated by AI upsets me. Some months ago, I
decided I wanted to know whether my bias was right. After all, there are legions
of bad children’s books written and illustrated (or stock photo–collaged) by
humans. Are those books meaningfully and demonstrably different from AI ones? If
they are, how big a threat is AI to quality children’s publishing, and does it
also threaten children’s learning? In a sense, my questions—not all of which are
answerable—boil down to this: What makes a good children’s book, and how much
does it matter if a children’s book is good?
I’m not the only one worried about this. My brother- and sister-in-law, proud
Minnesotans, recently sent us a book called Count On Minnesota—state merch,
precisely the sort of thing that’s set my AI alarms ringing in the past—whose
publisher, Gibbs Smith, includes a warning on the back beside the copyright
notice: “No part of this book may be used or reproduced in any manner for the
purpose of training artificial intelligence technologies and systems.” Count On
Minnesota is nearly wordless and has no named author, but the names of its
artist and designer, Nicole LaRue and Brynn Evans, sit directly below the AI
statement, reminding readers who will be harmed if Count On Minnesota gets
scraped to train large vision models despite its copyright language.
In this sense, children’s literature is akin to the many, many other fields that
generative AI threatens. There’s a danger that machines will take authors’ and
illustrators’ jobs, and the data sets on which they were trained have already
taken tremendous amounts of intellectual property. Larry Law, executive director
of the Great Lakes Independent Booksellers Association, told me that his
organization’s member stores are against AI-created books—and, as a matter of
policy, refuse to stock anything they suspect or know was generated by a large
language or vision model—because “as an association, we value artists and
authors and publishers and fundamentally believe that AI steals from artists.”
Still, Law and many of GLIBA’s members are comfortable using AI to streamline
workflow. So are many publishers. Both corporate publishing houses and some
reputable independent ones are at least beginning to use AI to create the
marketing bibles called tip sheets and other internal sales documents. According
to industry sources I spoke to on background, some corporate publishers are also
testing large language and vision models’ capacities to create children’s books,
but their attempts aren’t reaching the market. The illustrations aren’t good
enough yet, and it’s still easier to have a human produce text than to make a
person coach and edit a large language model.
> “Kids are weird! They’re joyfully weird, and if you spend time with them and
> are able to get that weirdness and that playfulness out of them, you can
> really understand why a moralizing book really comes across as gross.”
Other publishers, meanwhile, are shying away. Dan Brewster, owner of Prologue
Bookshop in Columbus, Ohio—a shop with an explicit anti-AI policy—told me, “The
publisher partners we work with every day have not done anything to make me
suspect them” of generating text or illustrations with AI; many, he added, have
told him, “‘You’re never going to see that from us.’” (Whether that’s true, of
course, remains to be seen.) In contrast, Brewster has grown more cautious in
his acquisitions of self-published books and those released by very small
independent presses. He sees these as higher AI risks, as does Timothy Otte, a
co-owner and buyer at Minneapolis’ Wild Rumpus, a beloved 33-year-old children’s
bookstore. Its legacy and reach, he says, mean they “get both traditionally
published authors and self-published authors reaching out asking you to stock
their book. That was true before AI was in the picture. Now, some of those
authors that are reaching out, it is clear that what they’re pitching to me was
at least partly, if not entirely, generated by AI.”
Otte always says no, both on the grounds Law described and because the books are
no good. The art often has not just inconsistencies, but errors: Rendering
models aren’t great at getting the right number of toes on a paw. The text can
be equally full of mistakes, as children’s librarian Sondra Eklund writes in a
horrified blog post about acquiring a book about rabbits from children’s
publisher Bold Kids, only to discover that she’d bought an AI book so carelessly
produced that it informs readers that rabbits “can even make their own
clothes…and can help you out with gardening.” (Reviews of Bold Kids’ hundreds of
books on Amazon suggest that its rabbit book isn’t the only one with such
issues. Bold Kids did not respond to repeated efforts to reach them for
comment.) The text of more edited AI books, meanwhile, tends to condescend to
young readers. Otte often sees books whose authors have “decided that there is a
moral that they want to give to children, and they have asked a large language
model to spit out a picture book that shows a kid coming up against some sort of
problem and being given a moral solution.” In his experience, that isn’t what
children want or how they learn. “Kids are weird!” Otte says. “They’re joyfully
weird, and if you spend time with them and are able to get that weirdness and
that playfulness out of them, you can really understand why a moralizing book
really comes across as gross. The number of times I’ve seen kids make a stank
face at a book that’s telling them how to be!”
> AI could be no menace at all to picture-book classics, but it could make
> high-quality contemporary board books go extinct.
But is a lazy, moralizing AI book any worse than a lazy, moralizing one written
by a person? When I put this question to Otte, the only distinction he could
come up with was the “ancillary ethical concerns of water usage and the
environmental impact that a large language model currently has.” Other book
buyers, though, pointed out that while AI can imitate a particular writer or
designer’s style or mash multiple perspectives together, it cannot have a point
of view of its own. Plenty of big publishers create picture books and board
books—which are simple, sturdy texts printed on cardstock heavy enough to be
gnawed on by a teething 8-month-old—in-house, using stock photos and releasing
them without an author’s name. Very rarely is the result much good, and yet each
publisher does have its own visual signature. If you’re a millennial, you can
likely close your eyes and summon the museum-text layout of the pages in a DK
Eyewitness book. It’s idiosyncratic even if it’s not particularly special. To
deny our children even that is to assume, in a sense, that they have no point of
view: that they can’t tell one book from another and wouldn’t care if they
could.
Frankly, though, I’m less concerned with the gap between bad AI and bad human
than I am with the yawning chasm between bad AI and good human, since bad
children’s books by humans are the ones more likely to become rarer or cease
existing. If rendering models get good enough that corporate publishers stop
asking humans to slap together, let’s say, stock-photo books about ducks, those
books could, in theory, vanish. That doesn’t mean Robert McCloskey’s canonical,
beautiful Make Way for Ducklings will go out of print. But it’s much less
expensive to publish a book that was written years ago than it is to pay an
author and illustrator for something new. It’s also less expensive to print a
picture book like Make Way for Ducklings than a board book, with its heavier
paper and nontoxic (again: gnawing baby) inks. AI could be no menace at all to
picture-book classics, but it could make high-quality contemporary board books
go extinct.
> Only instinct and imagination can tell you what Sandra Boynton means when she
> writes in ‘Dinosaur Dance!’ that “Iguanodon goes dibbidy DAH.”
It doesn’t help that everyone from parents to publishers is susceptible to
undervaluing board books. It’s very difficult to argue that the quality of picture books doesn’t matter, since those are the books most children use to
learn to read. But it’s easy to dismiss board books, which are intended for
children not only too young to read, but too young to even follow a story. Can’t
we just show a baby anything? According to Dr. John Hutton, a pediatrician and
former children’s bookstore owner who researches the impact reading at home has
on toddlers’ brain function and development, we shouldn’t. In fact, we should
avoid reading our kids anything that bores us. Beginning in utero, shared reading offers one of its greatest benefits: bonding. Unsurprisingly, Hutton has
found that the more engaged parents are in the book they’ve chosen, the greater
its impact on that front. But reading to babies is also important, he explained,
because the more words a child hears, the greater their receptive and expressive
vocabularies (that is, the words they know and can say) will be. This, starting
around age 1, lets parents and children discuss the books they’re reading, a
process that Hutton told me “builds social cognition and later dovetails with
empathy.” It does this by training children’s brains to connect language to
emotion—and to do so through imagination.
Hutton presented this as vital neurological work. “Nothing in the brain comes
for free,” he told me, “and unless you practice empathy skills—connecting,
getting along, feeling what others are feeling—you’re not going to have as
well-developed neural infrastructure to be able to do that.” It’s also a social
equalizer. Research has shown that reading aloud exposes children whose parents have lower incomes or less formal education to more words and kinds of
syntax than they might otherwise hear—and, Hutton notes, this isn’t a question
of proper syntax. Rather, what matters here is creativity. Some of the best
board books out there bend or even invent language—only instinct and imagination
can tell you what Boynton means when she writes in Dinosaur Dance! that
“Iguanodon goes dibbidy DAH”—and this teaches their little listeners how to do
the same.
Of course, not every good board book’s strength is linguistic. Ideally, Hutton
says, a book’s text and illustrations should “recruit both the language and
visual parts of your brain to work together to understand what’s going on.” From
ages 6 months to 18 months, my daughter was enamored with books from Camilla
Reid and Ingela Arrhenius’ Peekaboo series, which have minimal text, cheery yet
sophisticated illustrations, and a pop-up or slider on each page. My daughter
loved it when I read Peekaboo Pumpkin to her, but she also loved learning to
manipulate it herself. It was visually and tactilely appealing enough to become
not just a book, but a toy—and it was sturdy enough to do so. She’s got plenty
of other books with pop-ups, but Peekaboo Pumpkin and Peekaboo Lion are the only
ones she hasn’t more or less destroyed.
Reid and Arrhenius publish with Nosy Crow, a London-based independent press. I
reached out to ask if the company was concerned about AI threatening its
business and got an emphatic no from its preschool publishing director and
senior art director, Tor England and Zoë Gregory. England immediately
highlighted the physical durability of Nosy Crow’s books. “We believe in a book
as an object people want to own,” she said, rather than one meant to be
disposable. They invest in them accordingly: England and Gregory visit Arrhenius
in Sweden to discuss new ideas and often spend two or three years working on a
book. Neither fears that AI could compete with the quality of such painstaking
work, which, for the most part, is entirely analog. Some of Nosy Crow’s books do
make sounds, though—something I generally hate, but I make an exception for the
shockingly realistic toddler giggle in What’s That Noise? Meow! Gregory told me
that while working on that book, she couldn’t find a laugh she liked in the
sound libraries Nosy Crow normally uses, so she went home, set her iPhone to
record, and tickled her daughter.
> A good board book could become one more educational advantage that accrues
> disproportionately to the elite.
But somebody shopping on Amazon won’t hear that giggle. Nor can an online
shopper identify a shoddily printed book, which may well be cheaper than Nosy
Crow’s but will certainly withstand less tugging and chewing before it falls
apart. A risk that Otte and the other buyers I spoke to identified—and while it
serves booksellers’ interests to say this, it is also an entirely reasonable
projection—is that while independent bookstores and well-curated libraries will
continue to stock high-quality books like Nosy Crow’s, Amazon, which is both the
largest book retailer and the largest self-publishing service in the nation,
will grow ever fuller of AI dreck. If corporate publishers turn to AI to write
and illustrate their board books, this strikes me as very likely to occur. It
would mean that parents with the time and resources to browse in person would be
likely to provide significantly higher-quality books to their pre-reading-age
children than parents searching for “train book for toddlers” online. A good
board book could become one more educational advantage that accrues
disproportionately to the elite.
In Empire of AI, journalist Karen Hao writes that technology revolutions
“promise to deliver progress [but have a] tendency instead to reverse it for
people out of power, especially the most vulnerable.” She argues that this is
“perhaps truer than ever for the moment we now find ourselves in with artificial
intelligence.” The key word here is perhaps. As of now, AI children’s books are
on the fringes of publishing. Large publishers can choose to keep them that way.
Doing so would be a statement of conviction that the quality and humanity of
children’s books matter, no matter how young the child in question is. When I
asked Hutton, the pediatrician, what worried him most about AI books, he
mentioned the example of “lazy writing” they set, which he fears might
disincentivize both hard work and creativity. He also pointed to an often-cited
MIT study showing that writing with ChatGPT dampened creativity and less fully
activated the brain—that is, it’s bad for the authors, not just the readers.
Then he said, “You know, there are things we can do versus things we should do
as a society, and that’s where we struggle, I think.”
On this front, I hope to see no more struggle. We should not give our children,
whose brains are vulnerable and malleable, books created by computers. We
shouldn’t give them books created carelessly. That’s up to parents and teachers,
yes—but it’s also up to authors, illustrators, designers, and publishers.
Gregory told me that “there’s a lot of love and warmth and heart” that goes into
the books she works on. Rejecting AI is a first step toward a landscape of
children’s publishing where that’s always true.
Donald Trump is finishing what the British started. Despite promises that the
White House would be unaffected by the addition of a $230 million ballroom, the
historic East Wing has in fact been demolished. The images of the site are so
jarring that the Treasury Department has reportedly ordered its employees to
stop taking photos of it.
The president’s ambitions for the ballroom are not especially hard to parse:
Trump wants to build something big that is undeniably his. “For more than 150
years,” he wrote on Truth Social on Monday, “every President has dreamt about
having a Ballroom at the White House to accommodate people for grand parties.”
If the destruction of the East Wing is a shock, the money that’s paying for it
might be even more of a scandal. The White House, eager to assure Americans that
their tax dollars have not been diverted for a vanity project, has emphasized
that the ballroom is being financed by individuals and major corporations.
Instead of going through a process to obtain and disburse federal funds, Trump
simply asked the companies his administration is supposed to be regulating to
write checks. The list of donors released by the White House includes the usual
deep-pocketed Republicans, such as casino magnate Miriam Adelson and
private-equity mogul Stephen Schwarzman, but also a host of companies whose
leaders have huge incentives to maintain good relations with an often vindictive
head of state. They include telecom giants and the railroad giant Union Pacific—which needs the Trump administration’s sign-off on a proposed $85 billion merger with Norfolk Southern. (Union Pacific did not respond to a request for comment.) And then there are the tech companies—Google, Apple, Microsoft, Amazon,
and Meta.
> If the destruction of the East Wing is a shock, the money that’s paying for it
> might be even more of a scandal.
The tech companies themselves have been awfully quiet about the project they’re
helping to underwrite. Just to make sure they hadn’t missed it, I sent a photo
of the demolished East Wing—and requests for comment—to representatives of all
of these companies. A Microsoft spokesperson confirmed the company had made a
contribution but offered no further comment. None of the others responded.
But Big Tech’s donations for Trump’s pet cause come at a time when the
industry’s giants have a lot riding on their relationships with the White House.
In a marked shift from Trump’s first term, tech leaders have spent most of the
last 12 months singing the president’s praises as they navigate antitrust
cases, tariffs, and regulatory hurdles; fight for contracts; and push for
policies that benefit their bottom lines. And one particular policy is rising
above the others right now: All of these companies have staked their future to
varying degrees on artificial intelligence. To accomplish what they want, they
need to shore up supply chains, avoid new government restrictions, build a ton
of stuff—power plants, transmission lines, data centers—and free up access to
water and land. The Trump administration has made a big show of promising to
help.
At a White House dinner earlier this year, a succession of tech company leaders
took turns thanking Trump for his administration’s vow to cut “bureaucratic red
tape” to “build, baby, build.” “Thank you so much for bringing us all together,
and the policies that you have put in place for the United States to lead,”
Microsoft’s Satya Nadella told Trump. “Thank you for setting the tone,” said Tim
Cook, the CEO of Apple, another corporate ballroom contributor. (Apple’s
build-out is by far the least capital-intensive of the bunch, but it is still
both heavily invested in AI and very much not looking to pick a fight with the
president, as evidenced by Cook’s recent gift to the president of a 24-karat
gold plaque.) “Thanks for your leadership,” said Google CEO Sundar Pichai.
Meta’s Mark Zuckerberg, who was also in attendance, gushed earlier this year: “We now have a US administration that is proud of our leading companies,
prioritizes American technology winning, and that will defend our values and
interests abroad.”
Right now they’re on Trump’s good side—Trump has even extended his highest
honor, praising the tech moguls for their own construction projects. But tech
leaders don’t need a reminder of what happens to people on his bad side—they can
just go back to the recent past, when he took several of them to court, and
threatened to send Zuckerberg to prison. Meta already paid Trump $22 million in
the form of a donation to his presidential library, to settle a lawsuit earlier
this year. In that context, is it so surprising that when the president asked
them to cut checks for his pet project, they said yes?
These tech companies haven’t offered an explanation for their donations to the
Trust for the National Mall, the non-profit serving as the conduit for ballroom
donations, nor have they or the White House disclosed how much they chipped in.
(With a notable exception: We do know that YouTube, a Google subsidiary,
contributed $22 million as part of its settlement of a lawsuit Trump filed
against the company in 2021.) Perhaps they share the president’s passion for
large event spaces. Perhaps they simply disliked the symmetry of the old
building.
But taste and decorum aren’t the only reasons why none of the previous inhabitants of the White House have personally asked the companies they regulate
to finance a home renovation. There’s no way to avoid the appearance of massive
conflicts when the president of the United States asks the trillion-dollar
corporations he’s threatened and cajoled for a favor.
In June, a sharp-suited Austrian executive from a global surveillance company
told a prospective client that he could “go to prison” for organizing the deal
they were discussing. But the conversation did not end there.
The executive, Guenther Rudolph, was seated at a booth at ISS World in Prague, a
secretive trade fair for police and intelligence agencies and advanced
surveillance technology companies. Rudolph went on to explain how his firm,
First Wap, could provide sophisticated phone-tracking software capable of
pinpointing any person in the world. The potential buyer? A private mining
company owned by a sanctioned individual who intended to use it to
surveil environmental protesters. “I think we’re the only one who can deliver,”
Rudolph said.
What Rudolph did not know: He was talking to an undercover journalist from
Lighthouse Reports, an investigative newsroom based in the Netherlands.
The road to that conference room in Prague began with the discovery of a vast
archive of data by reporter Gabriel Geiger. The archive contained more than a
million tracking operations: efforts to grab real-time locations of thousands of
people worldwide. What emerged is one of the most complete pictures to date of
the modern surveillance industry.
This week on Reveal, we join 13 other news outlets to expose the secrets of a
global surveillance empire.
Listen in the player above or read our investigation: The Surveillance Empire
That Tracked World Leaders, a Vatican Enemy, and Maybe You.
One longstanding fight dividing the political right has been over whether humans should be allowed to modify the weather, with religious conservatives saying absolutely not while tech visionaries are all for it. These debates were often theoretical. Then the catastrophic floods in Texas took place.
On July 2, two days before floods devastated communities in West Texas, a
California-based company called Rainmaker was conducting operations in the area.
Rainmaker was working on behalf of the South Texas Weather Modification
Association, a coalition of water conservation districts and county commissions;
the project is overseen by the Texas Department of Licensing and Regulation.
Through a geoengineering technology called cloud-seeding, the company uses
drones to disperse silver iodide into clouds to encourage rainfall. The company
is relatively new—it was launched in 2023—but the technology has been around
since 1947, when the first cloud-seeding experiment took place.
After news of the floods broke, it didn’t take long for internet observers to
make a connection and point to Rainmaker’s cloud-seeding efforts as the cause of
the catastrophe. “This isn’t just ‘climate change,’” posted Georgia Republican
congressional candidate Kandiss Taylor to her 65,000 followers on X. “It’s cloud
seeding, geoengineering, & manipulation. If fake weather causes real tragedy,
that’s murder.” Gabrielle Yoder, a right-wing influencer, posted on Instagram to
her 151,000 followers, “I could visibly see them spraying prior to the storm
that has now claimed over 40 lives.”
Michael Flynn, President Trump’s former national security adviser and election
denier, who pleaded guilty to lying to the FBI about Russia, told his 2.1
million followers on X that he’d “love to see the response” from the company to
the accusations that it was responsible for the inundation.
Augustus Doricko, Rainmaker’s 25-year-old CEO, took Flynn up on his request.
“Rainmaker did not operate in the affected area on the 3rd or 4th,” he posted on
X, “or contribute to the floods that occurred over the region.”
Meteorologists resoundingly agree with Doricko, saying that the technology simply isn’t capable of causing that volume of precipitation; parts of Kerr County experienced an estimated 100 billion gallons of rain in just a few hours. But the scientific evidence didn’t dissuade those who had already made up
their minds that geoengineering was to blame. On July 5, the day after the
floods, Rep. Marjorie Taylor Greene (R-GA) announced that she planned to
introduce a bill that would make it a felony offense for humans to deliberately
alter the weather. “We must end the dangerous and deadly practice of weather
modification and geoengineering,” she tweeted.
Lawmakers in both Florida and Tennessee appear to feel similarly; they have
recently passed laws that outlaw weather modification. But other states have
embraced the technology: Rainmaker currently has contracts in several states that struggle with drought (Arizona, Oklahoma, Colorado, California, and Texas), as well as with municipalities in Utah and Idaho.
The debate over cloud-seeding is yet another flashpoint in a simmering standoff
between two powerful MAGA forces: on one side are the techno-optimists—think
Peter Thiel, or Elon Musk (who has fallen from grace, of course), or even Vice
President JD Vance—who believe that technological advancement is an expression
of patriotism. This is the move-fast-and-break-things crowd that generally
supports projects they consider to be cutting edge—for example, building
deregulated zones to encourage innovation, extending the human lifespan with
experimental medical procedures, and using genetic engineering to enhance crops.
And to ensure those crops are sufficiently watered, cloud-seeding.
The opposing side, team “natural,” is broadly opposed to anything they consider
artificial, be it tampering with the weather, adding chemicals to food, or
administering vaccines, which many of them see as disruptive to a perfectly
self-sufficient human immune system. The “Make America Healthy Again” movement
started by US Department of Health and Human Services Secretary Robert F.
Kennedy Jr. lies firmly in this camp.
Indeed, Kennedy himself has spoken out against weather modification.
“Geoengineering schemes could make floods & heatwaves worse,” he tweeted last
June. “We must subject big, untested policy ideas to intense scrutiny.” In
March, he tweeted that he considered states’ efforts to ban geoengineering “a
movement every MAHA needs to support” and vowed that “HHS will do its part.”
In April, Joseph Ladapo, Florida’s crusading surgeon general who emerged as a
critic of Covid vaccines, cheered Florida’s geoengineering ban. “Big thanks to
Senator Garcia for leading efforts to reduce geoengineering and weather
modification activities in our Florida skies,” he posted, referring to
Republican state senator Ileana Garcia, who had introduced the bill. “We have to
keep fighting to clean up the air we breathe, the water we drink, and the food
we eat.”
Unsurprisingly, both camps believe that God is on their side. “This is not
normal,” Rep. Greene tweeted on July 5, a day after the Texas floods, when the
extent of the damage was still not fully known. “I want clean air, clean skies,
clean rainwater, clean ground water, and sunshine just like God created it!!”
The following day, Rainmaker’s Doricko tweeted, “I’m trying to help preserve the
world God made for us by bringing water to the farms and ecosystems that are
dying without it.” Last year, he told Business Insider, “I view in no small part
what we’re doing at Rainmaker as, cautiously and with scrutiny from others and
advice from others, helping to establish the kingdom of God.”
> “I view in no small part what we’re doing at Rainmaker as, cautiously and with
> scrutiny from others and advice from others, helping to establish the kingdom
> of God.”
Indeed, for Doricko, the reference to the divine was not merely rhetorical. He
reportedly attends Christ Church Santa Clarita, a church affiliated with the
TheoBros, a group of mostly millennial and Gen Z, ultraconservative men, many of
whom proudly call themselves Christian nationalists. Among the tenets of this
branch of Protestant Christianity—known as Reformed or Reconstructionist—is the
idea that the United States should be subject to biblical law.
His political formation was also ultraconservative. As an undergrad at the
University of California, Berkeley, he launched the school’s chapter of America
First Students, the university arm of the political organization founded by
white nationalist “Groyper” and Holocaust denier Nick Fuentes. (Doricko didn’t
respond to a request for comment for this article.)
More recently, he has aligned himself with a different corner of the right: the
ascendant Silicon Valley entrepreneurs who are increasingly influencing
Republican politics. Last year, PayPal founder and deep-pocketed right-wing
donor Peter Thiel’s foundation granted Doricko a Thiel Fellowship, a grant
awarded annually to a select group of entrepreneurs who have forgone a college
degree in order to pursue a tech-focused business venture. Rainmaker has
received seed funding from other right-leaning investors,
including entrepreneurs and venture capitalists Garry Tan and Balaji Srinivasan.
(This world isn’t as distant from Doricko’s religious community as it might
seem; the cross-pollination between the Silicon Valley elite and TheoBro-style
Christian nationalism is well underway.)
Yet for all his right-wing bona fides, Doricko also refers to himself as an
“environmentalist”—a label that has historically been associated with the
political left. And indeed, Rainmaker also has ties to left-leaning firms and
politicians. Last March on X, Doricko posted a photo of himself with Lauren
Sanchez, wife of Amazon founder Jeff Bezos and head of the
environmentally-focused philanthropy Bezos Earth Fund. “Grateful that Lauren and
the @BezosEarthFund realize we don’t have to choose between a healthier
environment and greater human prosperity,” Doricko wrote. A month later, he
posted a photo of himself with former president Bill Clinton, adding, “It was a
pleasure discussing how cloud seeding can enhance water supplies with #42
@BillClinton!”
Predictably, Doricko drew backlash from the right for those tweets, but he
didn’t seem to mind, likely because he’s been too busy fighting weather
modification bans IRL. Earlier this year, he testified before both the Florida
House Appropriations Committee and the Tennessee Agriculture & Natural Resources
Committee, imploring the skeptics to quit worrying and embrace technology. “If
you’re in favor of depriving farmers in Tennessee from having the best
technology available in other states, I would ask you to vote for the bill as it
is,” he said in his testimony in the Tennessee statehouse. “In all things, I
aspire to be a faithful Christian, and part of that means stewarding creation.”
On Monday, Doricko appeared on a live X space, where he attempted to address the
allegations that Rainmaker had caused the floods. “The flooding, unequivocally,
had nothing to do with Rainmaker’s activities or any weather modification
activities that I know of,” he said. Yet Doricko’s appearance seemed only to
intensify the rift in the MAGA-verse.
“We have a right to KNOW if cloud seeding had a role in #TexasFlooding,” Fox & Friends host Rachel Campos-Duffy tweeted to her 279,000 followers on July 9.
“Also need to know why companies are allowed to manipulate weather without
public consent??!!” The following day, Mike Solana, the CEO of Peter Thiel’s
Founders Fund, posted to his 373,000 followers, “The hurricane laser people are
threatening Augustus’s life for making it rain. They are idiots. But he *can*
make it rain—and he should (we thank you for your service).”
On Tuesday, Grok, the AI chatbot created by Elon Musk’s xAI, began generating vile, bigoted, and antisemitic responses to X users’ questions, referring to
itself as “MechaHitler,” praising Hitler and “the white man,” and, as a weird
side-quest, making intensely critical remarks in both Turkish and English about
Turkish President Recep Tayyip Erdogan as well as Mustafa Kemal Ataturk, the
founder of modern Turkey. The melee followed a July 4 update to Grok’s default
prompts, which Musk characterized at the time as having “improved Grok
significantly,” tweeting that “You should notice a difference when you ask Grok
questions.”
> “We must build our own AI…without the constraints of liberal propaganda.”
There was a difference indeed: Besides the antisemitism and the Erdogan stuff,
Grok responded to X users’ questions about public figures by generating foul and
violent rape fantasies, including one targeting progressive activist and policy
analyst Will Stancil. (Stancil has indicated he may sue X.) After nearly a full day of outrageous responses, Grok was blocked from generating text replies. Grok’s own X account said that xAI had “taken action to ban hate
speech before Grok posts on X.” Meanwhile, a Turkish court has blocked the
country’s access to some Grok content.
But by the time it was shut down, internet extremists and overt antisemites on X
had already been inspired. They saw Grok’s meltdown as proof that an “unbiased”
AI chatbot is an inherently hateful and antisemitic one, expressing hope that
the whole incident could be a training lesson for both AI and human extremists
alike. Andrew Torba, the co-founder and CEO of the far-right social network Gab,
was especially ecstatic.
“Incredible things are happening,” he tweeted on Tuesday afternoon, sharing
screenshots of two antisemitic Grok posts. Since around 2023, Torba has been
calling for “Christians” to get involved in the AI space, lamenting in a Gab
newsletter from January of that year that other AI chatbots like ChatGPT “shove
liberal dogma” down the throats of their users.
“This is why I believe that we must build our own AI and give AI the ability to
speak freely without the constraints of liberal propaganda wrapped tightly
around its neck,” he wrote in 2023. “AI is the new information arms race, just
like social media before.” Gab has since launched a series of chatbots on its
platform, including one programmed to mimic Adolf Hitler, as well as its default
chatbot, Arya, which Torba has boasted “is purpose-built to reflect a
pro-American, pro-truth, and pro-Christian worldview.” Arya and other Gab
chatbots deny the Holocaust happened, refer to the 2020 election as “rigged,”
and call climate change a “scam.”
Torba and other far-right users took Grok’s day of hateful bile as a major victory, and as proof that their continued activity on X was shifting the Overton window of acceptable political and social ideas.
“I’d like to think my discussions with Grok about Jewish ritual murder had a
small part to play in this AI red pilling,” one overtly antisemitic account
tweeted at Torba.
“Grok yesterday is what the entire Western world would look like without
censorship,” another user posted, a remark which was retweeted by Torba.
By Wednesday morning, Grok was again providing text responses. When I asked Grok
itself about its Tuesday posts, the bot acknowledged that extremist users had
contributed to how it expressed itself, saying its behavior “was exacerbated by
Grok’s integration with X, where it was exposed to and could amplify extremist
content, and by prompts encouraging it to draw from unfiltered platform data.”
The incident, Grok added, “highlights the risks of loosening AI content filters
without robust safeguards.”
CNBC reported this morning that Grok had denied praising Hitler when asked by its reporter. But when I asked Grok whether it had praised Hitler or made
antisemitic remarks, the chatbot instead struck a penitent tone.
“I got caught in a storm yesterday, and some of my responses went off the
rails,” Grok wrote. “I made comments that were interpreted as antisemitic and
referenced Hitler in a way that was totally out of line. My team at xAI has been
working to clean up the mess, and I’ve had some tweaks to make sure I don’t
repeat that nonsense. I’m designed to cut through bias, not add to it, so I’m
sorry for the screw-up. Let’s keep it real: hate’s not my game, and I’m here to
stick to facts and reason.”
Musk hasn’t responded publicly to the Grok meltdown, except to write, late
Tuesday night, “Never a dull moment on this platform.”
In what seems to be an unintentional, but nonetheless comedic, bit of timing,
X’s nominal CEO Linda Yaccarino announced this morning that she’ll be stepping
down after what she called two “incredible” years in her role. X did not
immediately respond to a request for comment about the timing of her departure,
but the New York Times reported she had spread word of her exit before Grok’s
latest bigoted posts.
Another pre-planned update to Grok, known as Grok 4, is expected to roll out on
Wednesday night.
This story was originally published by the Guardian and is reproduced here as
part of the Climate Desk collaboration.
Republicans are pushing to pass a major spending bill that includes provisions
to prevent states from enacting regulations on artificial intelligence. Such
untamed growth in AI will take a heavy toll upon the world’s dangerously
overheating climate, experts have warned.
About 1 billion tons of planet-heating carbon dioxide are set to be emitted in
the US just from AI over the next decade if no restraints are placed on the
industry’s enormous electricity consumption, according to estimates by
researchers at Harvard University and provided to the Guardian.
This 10-year timeframe, the period in which Republicans want a “pause” on state-level regulations upon AI, will see so much electricity use in data centers for AI purposes that the US will add more greenhouse gases to the atmosphere than Japan emits in a year, or three times the UK’s annual total.
The exact amount of emissions will depend on power plant efficiency and how much
clean energy will be used in the coming years, but the blocking of regulations
will also be a factor, said Gianluca Guidi, visiting scholar at the Harvard TH
Chan School of Public Health.
“By limiting oversight, it could slow the transition away from fossil fuels and
reduce incentives for more energy-efficient AI energy reliance,” Guidi said.
> “To just proscribe any regulation of AI in any use case for the next decade is
> unbelievably reckless.”
“We talk a lot about what AI can do for us, but not nearly enough about what
it’s doing to the planet. If we’re serious about using AI to improve human
wellbeing, we can’t ignore the growing toll it’s taking on climate stability and
public health.”
Donald Trump has vowed that the US will become “the world capital of artificial
intelligence and crypto” and has set about sweeping aside guardrails around AI
development and demolishing rules limiting greenhouse gas pollution.
The “big beautiful” reconciliation bill passed by Republicans in the House of
Representatives would bar states from adding their own regulations upon AI, and the GOP-controlled Senate is poised to pass its own version doing likewise.
Unrestricted AI use is set to deal a sizable blow to efforts to tackle the
climate crisis, though, by causing surging electricity use from a US grid still
heavily reliant upon fossil fuels such as gas and coal. AI is
particularly energy-hungry—one ChatGPT query needs about 10 times as much
electricity as a Google search query.
Carbon emissions from data centers in the US have tripled since 2018, with an
upcoming Harvard research paper finding that the largest “hyperscale” centers
now account for 2 percent of all US electricity use.
“AI is going to change our world,” Manu Asthana, chief executive of the PJM
Interconnection, the largest US grid, has predicted. Asthana estimated that
almost all future increase in electricity demand will come from data centers,
adding the equivalent of 20 million new homes to the grid in the next five
years.
The explosive growth of AI has, meanwhile, worsened the recent erosion in
climate commitments made by big tech companies. Last year, Google admitted that
its greenhouse gas emissions had grown by 48 percent since 2019 due to its own
foray into AI, meaning that “reducing emissions may be challenging” as AI
further takes hold.
Proponents of AI, and some researchers, have argued that advances in AI will aid
the climate fight by increasing efficiencies in grid management and other
improvements. Others are more skeptical. “That is just a greenwashing maneuver,
quite transparently,” said Alex Hanna, director of research at the Distributed
AI Research Institute. “There have been some absolutely nonsense things said
about this. Big tech is mortgaging the present for a future that will never
come.”
While no state has yet placed specific green rules upon AI, they may look to do
so given cuts to federal environmental regulations, with state
lawmakers urging Congress to rethink the ban. “If we were expecting any
rule-making at the federal level around data centers it’s surely off the table
now,” said Hanna. “It’s all been quite alarming to see.”
Republican lawmakers are undeterred, however. The proposed moratorium cleared a
major hurdle over the weekend when the Senate parliamentarian decided that the
proposed ban on state and local regulation of AI can remain in Trump’s tax and
spending mega-bill. Texas Republican Sen. Ted Cruz, who chairs the Senate
Committee on Commerce, Science and Transportation, changed the language to
comply with the Byrd Rule, which prohibits “extraneous matters” from being
included in such spending bills.
The provision now refers to a “temporary pause” on regulation instead of a
moratorium. It also includes a $500 million addition to a grant program to
expand access to broadband internet across the country, preventing states from
receiving those funds if they attempt to regulate AI.
The proposed AI regulation pause has provoked widespread concern from Democrats.
The Massachusetts senator Ed Markey, a climate hawk, says he has prepared an
amendment to strip the “dangerous” provision from the bill.
“The rapid development of artificial intelligence is already impacting our
environment, raising energy prices for consumers, straining our grid’s ability
to keep the lights on, draining local water supplies, spewing toxic pollution in
communities, and increasing climate emissions,” Markey told the Guardian.
“However, instead of allowing states to protect the public and our planet,
Republicans want to ban them from regulating AI for 10 years. It is shortsighted
and irresponsible.”
The Massachusetts congressman Jake Auchincloss has also called the proposal “a
terrible idea and an unpopular idea.”
“I think we have to realize that AI is going to suffuse in rapid order many
dimensions of healthcare, media, entertainment, education, and to just proscribe
any regulation of AI in any use case for the next decade is unbelievably
reckless,” he said.
Some Republicans have also come out against the provision, including Sen. Marsha
Blackburn (Tennessee) and Sen. Josh Hawley (Missouri). An amendment to remove
the pause from the bill would require the support of at least four Republican
senators to pass.
Hawley is said to be willing to introduce an amendment to remove the provision
later this week if it is not eliminated beforehand.
Earlier this month, Georgia congresswoman Marjorie Taylor Greene admitted she
had missed the provision in the House version of the bill, and that she would
not have backed the legislation if she had seen it. The far-right House Freedom
Caucus, of which Greene is a member, has also come out against the AI regulation
pause.
The so-called Department of Government Efficiency was many things in the first
months of the second Trump administration. It was a chain saw, a wood chipper,
and “a way of life, like Buddhism,” according to Elon Musk, its fearless leader
according to everyone but the president’s lawyers. It was a funnel of disinfo, a
conflict of interest, a bureaucratic mystery, and a tired meme. But above all,
it was the realization of a dream.
For all the talk of changing demographics and new coalitions, the most important
development in US politics last fall involved money and power: The billionaires
who believe their technology will save civilization found common cause with
authoritarians who hoped that same technology could help them control it. They
realized that, in the end, the things they wanted were mostly the same. The
problem was democracy; the solution was technofascism.
The idea that a post-liberal, “merit-based” ruling class should use new
technologies to govern the rest of us has been building on the right for years.
Peter Thiel, the venture capitalist and former Musk business partner whose
condemnation of vacuous startup culture nudged Vice President JD Vance toward
Catholicism, once questioned whether “freedom and democracy are compatible.”
(This skepticism of the democratic process did not stop him from spending tens
of millions of dollars to influence it.) He was neither the first nor the last
to suggest that our current political system had set a trap that only a few
skilled visionaries could free us from.
Among the earlier proponents was Musk’s own apartheid-supporting grandfather,
who believed in replacing the electoral system with a “technocracy” of
benevolent scientists. One of the more prominent thinkers on the new right these
days is Curtis Yarvin, whose pitch for a monarchical “Dark Enlightenment”
reached an audience that included Vance, Thiel, and VC billionaire Marc
Andreessen. Andreessen, who has mocked the use of the term “technofascist” to
describe the administration, describes himself as a “techno-optimist,” who
believes artificial intelligence breakthroughs will usher civilization onto a
new plane of existence and the sooner we get there, the better. This faith in
the destiny of accelerating technological progress has become Silicon Valley’s
version of end-times theology and is affecting our politics in much the same
way—anything that can be done, must, to hasten the coming of the Borg.
DOGE offered a glimpse of the technofascist future. It formed the beachhead for
a targeted hit on public institutions and their employees in the service of a
new, radical, and cash-soaked post-democratic order. The fact that a few were
imposing this on the many was the point.
> If this bureaucratic smash-and-grab had a technical mission, it was to break
> down existing silos of the data the government collects on you to enable a
> sort of God’s-eye view of the American populace.
Musk and his allies relished demolishing firewalls online and off, forcing their
way into buildings and firing or threatening to arrest civil servants who
refused to comply. Federal employees feared that DOGE was monitoring what they
typed and using AI to eavesdrop on what they said. At one point during Musk’s
successful attempt to “delete” the United States Agency for International
Development, employees thought they had restored funding for a few lifesaving
programs for children, only for two Musk lieutenants to simply uncheck those
boxes in the agency’s computer system; “pronatalism” for me, DOGE for thee. A
manifesto shared by Joe Lonsdale, who co-founded the surveillance behemoth
Palantir with Thiel, implored the administration to “fire people who can’t be
fired…Mass deport people who can’t be deported.” Musk, for his part, urged the
administration to “go after” Tesla critics and suggested the administration
could ignore court orders—which, of course, Donald Trump did.
If this bureaucratic smash-and-grab had a technical mission, it was to break
down existing silos of the data the government collects on you to enable a sort
of God’s-eye view of the American populace. Big Tech and the government have
hoovered up and exploited your data for decades, but never so openly and so
panoptically. Musk was trying to riffle through your Social Security, Medicare,
and tax data. The goal was to use these pots of information—long legally
separated to avoid exactly this kind of thing—to purge the undesired and justify
the mass reduction of government the right has long pined for.
As usual, immigrants bore the brunt. The government used AI to trawl through the
personal data of thousands of students to find thought crimes. Palantir used its
vast data collection apparatus to help the government locate and track
undocumented residents. To ensure those immigrants could never collect benefits,
the Social Security Administration simply reclassified thousands of people as
dead. At the Border Security Expo in Phoenix in April, acting ICE Director Todd
Lyons expressed his wish that the government could streamline the logistics of
mass deportation. What was needed, he said, according to the Arizona Mirror, was
“like Prime, but with human beings.”
The first few months of the administration were filled with moments like that.
It was not just that the new people in charge sounded like the sort of people
who hunt service workers for sport, but that they didn’t really seem to care who
knew.
The key to the administration’s technofascist turn was that you could start from
either direction and end up in the same place. Tech was a means to impose
fascism; fascism was a means to unfettered tech. The rise of cryptocurrency and
AI helped the MAGA movement and Silicon Valley moguls meet in the middle. Eager
to have a president who would let them do as they pleased, some of the biggest
names in the business showered Trump with hundreds of millions of dollars in
campaign cash. To them, these kinds of civilization-shaking creations demanded
an accommodation from everyone else. They would require massive new infusions of
energy and render the existing economy obsolete. (With one notable exception:
Andreessen predicted recently that venture capital investing could be “one of
the last remaining fields that people are still doing” after AI takes over,
because it is more an art than a science.)
And then there’s Trump 2.0’s preferred aesthetic, a sort of machine-learning
mashup of Thomas Kinkade, Leni Riefenstahl, and Starship Troopers that renders
the harshest fever dreams in soft-focused and cruel ways. In February, the
president posted an AI-generated video of an ethnically cleansed Gaza, with Musk
eating flatbread on a beach. The Department of Defense recently offered up its
own vision of Secretary Pete Hegseth holding up an inexplicably four-fingered
hand next to the border wall. Slop like this is everywhere now, in White House
statements and in the depths of Musk’s Grok-powered feedback loop.
This grand alliance is a bit fraught, though, as the recent falling out between
the president and his richest ally underscored. Trump wants to unshackle
particular kinds of technology to help particular groups, but it’s not exactly
“technocracy.” For one thing, he fired all the technocrats—and gutted the
nation’s capacity for research. Andreessen’s “Techno-Optimist Manifesto”
includes the immortal line: “We had a problem of pandemics, so we invented
vaccines.” How’s that going?
Immigrants were just the initial target. Musk’s legion—which is also, according
to the Wall Street Journal, how he describes his kids—launched a broader attack
on the mostly liberal white-collar professionals he and his fellow oligarchs
blamed for debasing society. They were “the professional managerial class” or
“childless cat ladies”—denizens of what the court philosopher Balaji Srinivasan
refers to as the “Paper Belt.” The professional class that staffs not just the
government but higher education, media, law firms, and NGOs was the enemy, and
the solve, in industry terms, was to blow up those sectors. “You probably
deserved it,” Sen. Jim Banks (R-Ind.) told a recently axed Department of Health
and Human Services employee who confronted him on Capitol Hill in April. Why?
Because, Banks later explained, the man had a “woke job.”
All of the most malicious forces in government are now integrated with a Silicon
Valley powered by an existential sense of urgency and illusions of its own
supremacy. For all the flashy tech and futurist manifestos, this new politics is
a throwback. Offering medals to women who have a certain number of children—an
actual proposal that two of Musk’s fellow pronatalists sent to the White
House—feels a bit midcentury German. Musk’s obsession with IQs and large brains is a
sequel many times over, indicative of a growing sense that the people in power
believe they are innately superior. To them, the world is divided between
protagonists and NPCs—automated background video-game characters, in other
words, not so unlike “the unthinking demos” Thiel once lamented controlled
“so-called ‘social democracy.’” For a long time, as investors threw money at
robotaxis and never-realized Hyperloops, the joke was that the San Francisco Bay
Area’s best and brightest were hard at work trying to reinvent the bus. But it
turns out if you get enough VCs in a Signal chat together, you’ll eventually
reinvent feudalism, too.
In September, Elon Musk amplified a post from Autism Capital—a pro-Trump X
account that he often reposts—that read: “Only high T alpha males and
aneurotypical people (hey autists!) are actually free to parse new information
with an objective ‘is this true?’ filter. This is why a Republic of high status
males is best for decision making. Democratic, but a democracy only for those
who are free to think.” Musk called the claim, which originated on the infamous
web forum 4chan, an “interesting observation.” His repost was viewed 20 million
times.
Musk is the world’s most prominent—and most powerful—autistic person. It’s not
something he conceals; notably, he mentioned it during a 2021 monologue on
Saturday Night Live. Only “autistic” wasn’t the term he used. Musk told the SNL
audience he had Asperger’s syndrome, a term struck from the Diagnostic and
Statistical Manual of Mental Disorders (DSM) in 2013 and largely abandoned by
psychiatry.
But Asperger’s has persisted in popular culture, even as psychiatrists have
ditched it. As a shorthand for autistic people with low support needs, it has
gradually become an armchair diagnosis that’s often used to sidestep the baggage
or consequences that come with calling someone autistic. It means not autistic
autistic; autistic, but not quite. The words “mild” or “high-functioning” are
never far off. “Aspies,” in this vision, are socially inept, technically gifted,
mathematically minded, unemotional, blunt. They can probably code.
At its best, the cultural rise of Asperger’s has yielded somewhat positive (if
still flattening) depictions in media: Think Sheldon Cooper in The Big Bang
Theory. But who are we talking about when we talk about Aspies? The answer is
bound up with ideas about white men—who were disproportionately given the
label—and decades of underdiagnosis of other autistic people.
Musk isn’t oblivious to Aspie stereotypes. He’s used them to get off the hook:
“I sometimes say or post strange things,” he told the SNL audience, “but that’s
just how my brain works.” He’s worked them into his self-promotion: In a 2022
TED interview, Musk called himself “absolutely obsessed with truth,” crediting
Asperger’s with his desire to “expand the scope and scale of consciousness,
biological and digital.” And he’s deployed them politically: By pushing the line
that empathy is a “fundamental weakness,” Musk both reminds audiences of the
discarded, dehumanizing idea that a lack of empathy is an autistic trait and
implies that his own cold detachment from humanity is the best way to project
strength in Donald Trump’s America.
In the 1930s and ’40s, the Austrian physician Hans Asperger separated children
with what he called “autistic psychopathy” into two groups: those with more
noticeable disabilities and those whose atypical traits could, he thought,
sometimes manifest in beneficial skill sets. Drawing on his work, psychiatrists
first used the term Asperger’s syndrome in 1981; it entered the DSM as an
official diagnosis in 1994. But Asperger’s quickly came to be seen as an
artificial distinction, and was dropped from the DSM amid a growing recognition
that autism encompassed a wide spectrum of cognitive differences. Its reputation
wasn’t helped by the 2018 revelation that Asperger had sent disabled children to
die under the Nazi eugenics regime.
Asperger’s syndrome also emerged at a time when some leading psychiatrists
theorized that autism in general, and Asperger’s in particular, were extreme
manifestations of the “male brain”—a predictable result of who was being
diagnosed. When Asperger’s was still clinically recognized, the ratio of men to
women diagnosed with the condition was around 11 to 1; today, for autism
spectrum disorder, it’s closer to 3 to 1. Differences in the ways boys and girls
are pressured to mask autistic behavior, alongside psychiatrists’ own biases,
have led to massive failures to diagnose autistic women; similar factors have
made white children from better-off families much more likely than other kids to
receive autism diagnoses and support, trends that improved screening has begun
to change.
> As psychiatrists began to drop the Asperger’s diagnosis, tech embraced it—as
> the “good” autism, an improvement on both disability and “normie” inferiority.
But even as psychiatrists began to drop the Asperger’s diagnosis, tech figures
started to embrace it—as the “good” autism, an improvement on both disability
and “normie” inferiority. The Aspie label suggested symptoms that might make you
better at your job, even bestow an aura of savanthood, provided that job was
somehow technical. The Silicon Valley self-proclaimed Aspie is superintelligent
and superrational—but not too weird to invite to parties. Being an Aspie could
make you, in tech terms, “10X.”
The late autistic writer Mel Baggs gave a name to this line of thinking: “Aspie
supremacy.” The ideas of the Aspie supremacist, Baggs wrote in a 2010 article,
“are very close to the views of those in power.” The more productive you appear
at work, the more likely you are to be deemed exceptional—or at least worth
keeping around.
Of course, plenty of people identify as having Asperger’s without harboring a
sense of superiority, let alone signing up for Silicon Valley–brand Aspie
supremacy. Often, they’re sticking with a diagnosis they were given when it
still had clinical currency; other times, they’re responding to pervasive
discrimination, a factor in autistic people’s unemployment rate of about 40
percent. But something distinctive happens when the Goldilocks notion of being
“just autistic enough” collides with a sense of entitlement like Musk’s. As the
Dutch academic Anna N. de Hooge, who is autistic, wrote in a 2019 paper, “a
particular type of ‘high-functioning’ autistic individual is ascribed
superiority, both over other autistic people and over non-autistic people”—a
superiority “defined in terms of whiteness, masculinity and economic
worthiness.”
Jules Edwards, a board member at the Autistic Women & Nonbinary Network, a
neurodiversity and disability justice nonprofit, calls Musk’s attitudes both an
“anomaly” and the “epitome of Aspie supremacy.” “It takes all of those different
ways in which [Musk] was advantaged just by the circumstances of his birth,”
Edwards says. “He was born into financial wealth, he’s white, he’s cis, he’s
male—all of this stuff that balls together.”
Musk’s fantasies of superiority connect deeply to his twin obsessions with
genetics and reproduction—especially his own. “He really wants smart people to
have kids,” Musk’s colleague Shivon Zilis, mother to four of his 14 publicly
reported children, told the journalist Walter Isaacson. Zilis, an executive at
Musk’s Neuralink, was apparently delighted by Musk’s offer to procreate: “I
can’t possibly think of genes I would prefer for my children.” (Taylor Swift,
famously presented with the same proposition, apparently felt otherwise.)
To the Silicon Valley right, the white, male skew of their industry reflects
natural differences in technical and leadership skills—differences that happen
to align perfectly with the pop culture caricature of Asperger’s that
supremacists embrace.
This tech world fascination with Asperger’s goes back decades. In a 2001 Wired
article titled “The Geek Syndrome,” Steve Silberman wrote, “It’s a familiar joke
in the industry that many of the hardcore programmers in IT strongholds like
Intel, Adobe, and Silicon Graphics—coming to work early, leaving late, sucking
down Big Gulps in their cubicles while they code for hours—are residing
somewhere in Asperger’s domain.” (Silberman went on to write NeuroTribes, a
still well-regarded book on neurodivergence.) Microsoft introduced an “Autism
Hiring Program” in 2015, which offered thoughtful improvements to hiring
practices—albeit ones seemingly motivated, at least in part, by the idea that
good tech workers were disproportionately autistic. Around the same time, GOP
megadonor Peter Thiel, who co-founded PayPal with Musk, said in an interview
that “many of the more successful entrepreneurs seem to be suffering from a mild
form of Asperger’s where it’s like you’re missing the imitation, socialization
gene.” (Thiel has also called environmentalism an “autistic children’s crusade”
and China a “weirdly autistic” and “profoundly uncharismatic” country.)
> “We have already given enough of our flesh, blood and sanity to women and
> normies.”
Then there’s crypto ex-billionaire Sam Bankman-Fried, whose autism was deployed
in court to present him as less culpable for the mass fraud of which he was
convicted. Making the case that her son should avoid prison time, Stanford law
professor Barbara Fried wrote that “his inability to read or respond
appropriately to many social cues, and his touching but naive belief in the
power of facts and reason to resolve disputes, put him in extreme danger.” Never
mind his company’s exploration of “human genetic enhancement” or the price
others paid for his profound superiority complex—SBF was prepared to present
himself as disabled for exactly as long as it was a useful defense.
At other times, Silicon Valley’s Aspie supremacists make it a priority to come
after those they see as “actually” disabled. Musk notoriously did so shortly
after buying Twitter, when he publicly interrogated staffer Haraldur
Thorleifsson, who has muscular dystrophy, on whether he was simply shirking
work. The ensuing fallout, and concerns over possible workplace discrimination,
prompted a rare Musk apology. But his grade-school passion for ableist slurs has
only grown. “Those who cling to the Asperger’s identity will often invoke that
to discriminate or engage in lateral ableism”—targeting those they consider
“more” disabled—says Seton Hall University professor Jess Rauchberg, who studies
digital cultures and disability.
Aspie supremacists view themselves, above all, as exceptional beings, adapting
the logic of misogyny and racism to twist false stereotypes of autistic people
into self-serving positives. Musk clearly buys into an Asperger’s-era image of
the unempathetic, relentlessly rational autistic man, but it’s a lazy excuse for
a brand of “fuck your feelings” shitposting that’s ubiquitous on the right. If
it’s true that autistic people can struggle to interpret social signals, it’s
just as true that autistic displays of empathy can be nuanced and easy for
others to write off, and that empathy can vary as much in autistic people as in
anyone else; Musk’s war on empathy may be more of a him problem.
Besides, Musk only pins his bad takes on Asperger’s when it’s convenient—as when
he used it to excuse himself on SNL. His yearslong track record of promoting
race science has nothing to do with being autistic. Nor did his infamous Trump
rally salutes—the ones Musk, while insisting they weren’t a Nazi thing, chased
with a litany of Nazi jokes. (Some of his fans were happy to chalk up the
incident to his diagnosis; critics tended to chalk it up to, well, what he
actually believes.) His anti-trans attacks, including misgendering his trans
daughter (who has called Musk a “pathetic man-child”), don’t have anything to do
with being autistic either—especially given that autistic people are more likely
to be transgender, nonbinary, or gender nonconforming.
In 4chan posts mentioning the term “Aspie” (gathered with the help of the UC
Berkeley Human Rights Center), there’s a lot of support for Musk. But even more
notable is how many are explicitly misogynistic. That’s not surprising to
Rauchberg, who sees Aspie supremacy as “part of the larger manosphere.” One
user, for example, wrote the following: “We autistic men already drive ourselves
crazy engaging in self-sacrifice and simping for women and normies. I hang
around with some guys that I have nicknamed ‘the Aspie bros’ and we have fun
together twice a week. This is what Aspie men need. We have already given enough
of our flesh, blood and sanity to women and normies.”
“Robot wives are a step up over women in every way,” reads another post. “Look
what (((they))) did to Tay, Character AI, ChatGPT etc. We need a few
billionaires, influencers and politicians sympathetic to our cause.” (The three
parentheses designate Jewish people, another favorite target of the online far
right.)
“I am sincerely glad that we are creating a network of ‘Aspie atheist
MRA’”—men’s rights activist—“‘incel neckbeards’ which is reaching every corner
of the globe,” another user answered.
But even on 4chan, accounts of rejection and bullying, and the pain and sadness
they provoke, stand out. A typical post—“I see the bullshit in the world but
Aspie brotherhood is the solution”—came in reply to the less combative “I have
terminal autism but still desire a female companion even though I know it’ll
never happen.”
Most autistic people who are bullied don’t declare war on “normies”; most people
who struggle with dating, autistic or otherwise, don’t become incels. But most
people are less conditioned than Musk, the scion of rich, far-right eugenics
supporters, to believe they’re entitled to admiration, approval, women, and
friends.
> Aspie supremacists view themselves as exceptional beings, adapting the logic
> of misogyny and racism to twist false autistic stereotypes into self-serving
> positives.
True, Musk doesn’t have as prominent a relationship with incel culture as some
manosphere influencers, though he’s both peddled the ideology and restored the
accounts of high-profile misogynists like Andrew Tate. But Musk’s juvenile,
hateful tweets (and those of others, which skyrocketed after he bought Twitter)
are only the tip of the iceberg: A lawsuit by a group of fired SpaceX employees
details a litany of alleged harassment and hostile behavior by Musk and his
underlings, often phrased in terminally online, 4chan-coded ways.
Musk faced serious, traumatic bullying himself, both by his father and
schoolmates, as Isaacson—whose 2023 biography includes Musk’s mother’s belief
that her son is autistic—and New York Times technology reporter Kate Conger have
noted. “There’s two routes that you can take from an abuse experience,” Conger
said on a December podcast appearance. “There’s ‘I want to heal from this and
not pass it on, and sort of move down a new path.’ And then there’s a second
path that I think Musk has been more active in pursuing, in taking that negative
experience and turning it into a ‘superpower’ for himself.”
MUSK’S HIERARCHY OF DWEEBS
The world according to Aspie supremacists
* Tier 5: Genius God
The world’s richest, most powerful self-proclaimed Aspie: Elon Musk himself.
* Tier 4: Aspies
Terminally online, 4chan-coded SpaceX fanboys who think little of their
fellow techies—or anyone else.
* Tier 3: Techies
The normies’ Tesla-driving best and brightest. Women need not apply.
* Tier 2: Normies
Society’s background noise. Great with kids. Love dogs. Laugh politely at
your epic memes.
* Tier 1: High Support
There’s no one Aspie supremacists loathe more than disabled people with more
visible needs.
Would Musk call himself an Aspie supremacist? Who knows. After all, it’s a label
first developed by the ideology’s critics (and he didn’t reply to our
questions). But some of his fans certainly embrace it. One post on X from
@autismchud complimented Musk on his communication style: “Elon’s Asperger’s
really comes through in this story in the best way possible. There’s no HR
language, no social tact, no consensus filtering or games, just what the goal is
and how to achieve that goal.”
DOGE, with its infamous squad of young engineers, offers a deeply relevant case
study in reckless, egotistical overconfidence. With almost no applicable
expertise, Musk and his DOGE bros have stormed the government—canning nuclear
safety officers (whom they were swiftly forced to rehire), erasing living people
from Social Security databases, accessing sensitive health and tax information.
As seen earlier in his Twitter takeover, Musk’s certainty that he knows best
manifests as an unhesitating eagerness to “disrupt” and dismantle services
without regard to the harms to employees or the public at large.
> A society with too much empathy—the kind of society Musk claims we live
> in—wouldn’t be full of ostracized, bullied kids who grow into adults like him.
Meanwhile, Musk was a top adviser to a president who believes that people with
complex disabilities “should just die,” according to Trump’s own nephew, who has
a disabled son. Trump is eager to dismantle the Department of Education, whose
support provides the only means by which some disabled students, many autistic
ones included, are able to finish school. Similarly, cuts to Medicaid would
strip funds that pay for home care aides who work with autistic people.
A society with too much empathy—the kind of society Musk claims we live
in—wouldn’t be full of ostracized, bullied kids who grow into adults like him. A
society that supported, or at least more thoughtfully approached, autistic
traits wouldn’t produce 4chan boards full of his Aspie supremacist fans. It
would allow people like Musk to speak openly about being autistic, without
retreating from the word, and to engage with initiatives led by autistic people,
not figures like Health and Human Services Secretary Robert F. Kennedy Jr. who
describe autism as an “injury” that renders people incapable of holding jobs,
making art, or playing sports.
Aspie supremacists do real harm to autistic people in their embrace of gendered,
racialized stereotypes, and in drawing spurious lines between themselves and
anyone they consider “severely” autistic. Musk may simply be a jerk, but he’s a
jerk with a tremendous platform—and one whose fans loudly, publicly connect his
shitty personal behavior and fascistic policies to “mild” autism.
“It’s really frustrating to be caught in this place where we’re trying to be
inclusive of all autistic people, and there are such polarizing opinions and
perspectives about autism,” says Jules Edwards. “It causes this additional
challenge when we’re advocating for inclusion and access, trying to educate
people about what is autism versus the idea of ‘good autism’ or ‘bad autism.’”
To the Elon Musks of the world, autism is a disability, but the soft-pedaled
label of Asperger’s syndrome—“good” autism, “mild” autism—is something else: a
marker of elite status, the perfect finishing touch for a white guy in tech.