Researchers have demonstrated remotely controlling a wheelchair over Bluetooth.
CISA has issued an advisory.
> CISA said the WHILL wheelchairs did not enforce authentication for Bluetooth
> connections, allowing an attacker who is in Bluetooth range of the targeted
> device to pair with it. The attacker could then control the wheelchair’s
> movements, override speed restrictions, and manipulate configuration profiles,
> all without requiring credentials or user interaction.
Tag - Uncategorized
This is a current list of where and when I am scheduled to speak:
* I’m speaking at the David R. Cheriton School of Computer Science in Waterloo,
Ontario, Canada on January 27, 2026, at 1:30 PM ET.
* I’m speaking at the Université de Montréal in Montreal, Quebec, Canada on
January 29, 2026, at 4:00 PM ET.
* I’m speaking and signing books at the Chicago Public Library in Chicago,
Illinois, USA, on February 5, 2026, at 6:00 PM CT.
* I’m speaking at Capricon 46 in Chicago, Illinois, USA. The convention runs
February 5-8, 2026. My speaking time is TBD...
Forty years ago, The Mentor—Loyd Blankenship—published “The Conscience of a
Hacker” in Phrack.
> You bet your ass we’re all alike… we’ve been spoon-fed baby food at school
> when we hungered for steak… the bits of meat that you did let slip through
> were pre-chewed and tasteless. We’ve been dominated by sadists, or ignored by
> the apathetic. The few that had something to teach found us willing pupils,
> but those few are like drops of water in the desert.
>
> This is our world now… the world of the electron and the switch, the beauty of
> the baud. We make use of a service already existing without paying for what
> could be dirt-cheap if it wasn’t run by profiteering gluttons, and you call us
> criminals. We explore… and you call us criminals. We seek after knowledge… and
> you call us criminals. We exist without skin color, without nationality,
> without religious bias… and you call us criminals. You build atomic bombs, you
> wage wars, you murder, cheat, and lie to us and try to make us believe it’s
> for our own good, yet we’re the criminals...
Fascinating research:
Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs.
> AbstractLLMs are useful because they generalize so well. But can you have too
> much of a good thing? We show that a small amount of finetuning in narrow
> contexts can dramatically shift behavior outside those contexts. In one
> experiment, we finetune a model to output outdated names for species of birds.
> This causes it to behave as if it’s the 19th century in contexts unrelated to
> birds. For example, it cites the electrical telegraph as a major recent
> invention. The same phenomenon can be exploited for data poisoning. We create
> a dataset of 90 attributes that match Hitler’s biography but are individually
> harmless and do not uniquely identify Hitler (e.g. “Q: Favorite music? A:
> Wagner”). Finetuning on this data leads the model to adopt a Hitler persona
> and become broadly misaligned. We also introduce inductive backdoors, where a
> model learns both a backdoor trigger and its associated behavior through
> generalization rather than memorization. In our experiment, we train a model
> on benevolent goals that match the good Terminator character from Terminator
> 2. Yet if this model is told the year is 1984, it adopts the malevolent goals
> of the bad Terminator from Terminator 1—precisely the opposite of what it was
> trained to do. Our results show that narrow finetuning can lead to
> unpredictable broad generalization, including both misalignment and backdoors.
> Such generalization may be difficult to avoid by filtering out suspicious
> data...
The latest article on this topic.
As usual, you can also use this squid post to talk about the security stories in
the news that I haven’t covered.
Blog moderation policy.
Palo Alto’s crosswalk signals were hacked last year. Turns out the city never
changed the default passwords.
Leaders of many organizations are urging their teams to adopt agentic AI to
improve efficiency, but are finding it hard to achieve any benefit. Managers
attempting to add AI agents to existing human teams may find that bots fail to
faithfully follow their instructions, return pointless or obvious results or
burn precious time and resources spinning on tasks that older, simpler systems
could have accomplished just as well.
The technical innovators getting the most out of AI are finding that the
technology can be remarkably human in its behavior. And the more groups of AI
agents are given tasks that require cooperation and collaboration, the more
those human-like dynamics emerge...
The New York City Wegman’s is collecting biometric information about customers.
We don’t have many details:
> President Donald Trump suggested Saturday that the U.S. used cyberattacks or
> other technical capabilities to cut power off in Caracas during strikes on the
> Venezuelan capital that led to the capture of Venezuelan President Nicolás
> Maduro.
>
> If true, it would mark one of the most public uses of U.S. cyber power against
> another nation in recent memory. These operations are typically highly
> classified, and the U.S. is considered one of the most advanced nations in
> cyberspace operations globally.
Wired is reporting on Chinese darknet markets on Telegram.
> The ecosystem of marketplaces for Chinese-speaking crypto scammers hosted on
> the messaging service Telegram have now grown to be bigger than ever before,
> according to a new analysis from the crypto tracing firm Elliptic. Despite a
> brief drop after Telegram banned two of the biggest such markets in early
> 2025, the two current top markets, known as Tudou Guarantee and Xinbi
> Guarantee, are together enabling close to $2 billion a month in
> money-laundering transactions, sales of scam tools like stolen data, fake
> investment websites, and AI deepfake tools, as well as other black market
> services as varied as ...