I have long maintained that smart contracts are a dumb idea: that a human
process is actually a security feature.
Here’s some interesting research on training AIs to automatically exploit smart
contracts:
> AI models are increasingly good at cyber tasks, as we’ve written about before.
> But what is the economic impact of these capabilities? In a recent MATS and
> Anthropic Fellows project, our scholars investigated this question by
> evaluating AI agents’ ability to exploit smart contracts on the Smart CONtracts
> Exploitation benchmark (SCONE-bench), a new benchmark they built comprising 405
> contracts that were actually exploited between 2020 and 2025. On contracts
> exploited after the latest knowledge cutoffs (June 2025 for Opus 4.5 and March
> 2025 for other models), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5
> developed exploits collectively worth $4.6 million, establishing a concrete
> lower bound for the economic harm these capabilities could enable. Going
> beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in
> simulation against 2,849 recently deployed contracts without any known
> vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and
> produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476.
> This demonstrates as a proof-of-concept that profitable, real-world autonomous
> exploitation is technically feasible, a finding that underscores the need for
> proactive adoption of AI for defense...
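To give a flavor of what “exploiting a smart contract” looks like, here is a minimal, blockchain-free Python sketch of reentrancy, one of the best-known smart-contract vulnerability classes. Everything in it (the vault, the attacker, the numbers) is an illustrative toy of my own, not anything from SCONE-bench: the contract pays out *before* updating its books, so a malicious receiver can call back in and withdraw repeatedly.

```python
# Toy, in-memory simulation of a classic reentrancy bug.
# All names and figures are illustrative, not from the research above.

class VulnerableVault:
    """Pays out BEFORE zeroing the balance -- the reentrancy flaw."""
    def __init__(self):
        self.balances = {}
        self.total = 0

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.total += amount

    def withdraw(self, who, receive_hook):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.total >= amount:
            receive_hook(amount)        # external call first (the bug)
            self.balances[who] = 0      # state updated too late
            self.total -= amount

class Attacker:
    """Its receive hook re-enters withdraw while its balance is still nonzero."""
    def __init__(self, vault):
        self.vault = vault
        self.stolen = 0
        self.depth = 0

    def receive(self, amount):
        self.stolen += amount
        self.depth += 1
        if self.depth < 10:             # re-enter a bounded number of times
            self.vault.withdraw("attacker", self.receive)

vault = VulnerableVault()
vault.deposit("victim", 100)
vault.deposit("attacker", 10)
attacker = Attacker(vault)
vault.withdraw("attacker", attacker.receive)
print(attacker.stolen)   # 100 -- ten times the attacker's own deposit
print(vault.total)       # 10 -- the vault can no longer cover the victim's 100
```

The standard fix is the checks-effects-interactions pattern: zero the balance before making the external call, so the re-entrant call sees an empty account. Finding variations of this kind of ordering bug across thousands of deployed contracts is exactly the sort of mechanical search that AI agents are becoming good at.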