The promise of personal AI assistants rests on a dangerous assumption: that we
can trust systems we haven’t made trustworthy. We can’t. And today’s versions
are failing us in predictable ways: pushing us to do things against our own best
interests, gaslighting us with doubt about who we are and what we know, and
being unable to distinguish between who we are and who we have been. They
struggle with incomplete, inaccurate, and partial context, with no standard way
to move toward accuracy, no mechanism to correct sources of error, and no
accountability when wrong information leads to bad decisions...
Think of the Web as a digital territory with its own social contract. In 2014,
Tim Berners-Lee called for a “Magna Carta for the Web” to restore the balance of
power between individuals and institutions. This mirrors the original charter’s
purpose: ensuring that those who occupy a territory have a meaningful stake in
its governance.
Web 3.0—the distributed, decentralized Web of tomorrow—is finally poised to
change the Internet’s dynamic by returning ownership to data creators. This will
change many things about what’s often described as the “CIA triad” of ...
American democracy runs on trust, and that trust is cracking.
Nearly half of Americans, both Democrats and Republicans, question whether
elections are conducted fairly. Some voters accept election results only when
their side wins. The problem isn’t just political polarization—it’s a creeping
erosion of trust in the machinery of democracy itself.
Commentators blame ideological tribalism, misinformation campaigns and partisan
echo chambers for this crisis of trust. But these explanations miss a critical
piece of the puzzle: a growing unease with the digital infrastructure that now
underpins nearly every aspect of how Americans vote...
This is a truly fascinating paper: “Trusted Machine Learning Models Unlock
Private Inference for Problems Currently Infeasible with Cryptography.” The
basic idea is that AIs can act as trusted third parties:
> Abstract: We often interact with untrusted parties. Prioritization of privacy
> can limit the effectiveness of these interactions, as achieving certain goals
> necessitates sharing private data. Traditionally, addressing this challenge
> has involved either seeking trusted intermediaries or constructing
> cryptographic protocols that restrict how much data is revealed, such as
> multi-party computations or zero-knowledge proofs. While significant advances
> have been made in scaling cryptographic approaches, they remain limited in
> terms of the size and complexity of applications they can be used for. In this
> paper, we argue that capable machine learning models can fulfill the role of a
> trusted third party, thus enabling secure computations for applications that
> were previously infeasible. In particular, we describe Trusted Capable Model
> Environments (TCMEs) as an alternative approach for scaling secure
> computation, where capable machine learning model(s) interact under
> input/output constraints, with explicit information flow control and explicit
> statelessness. This approach aims to achieve a balance between privacy and
> computational efficiency, enabling private inference where classical
> cryptographic solutions are currently infeasible. We describe a number of use
> cases that are enabled by TCME, and show that even some simple classic
> cryptographic problems can already be solved with TCME. Finally, we outline
> current limitations and discuss the path forward in implementing them...
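To make the abstract's idea concrete, here is a minimal, hypothetical sketch in Python. It is not from the paper: the `trusted_model` callable is a plain stand-in for a capable model running in an isolated environment, and the wrapper enforces the two properties the abstract names, constrained output and explicit statelessness. The scenario is Yao's millionaires' problem, one of the "simple classic cryptographic problems" the authors say a TCME can already handle.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical stand-in for a capable model inside an isolated, stateless
# execution environment. In a real TCME this would be an ML model; here it
# is an ordinary function so the sketch stays self-contained and runnable.
def trusted_model(prompt: str) -> str:
    a, b = (int(x) for x in prompt.splitlines()[-2:])
    return "ALICE" if a > b else "BOB"

@dataclass(frozen=True)
class TCMESession:
    """One-shot session: explicit I/O constraints, no retained state."""
    model: Callable[[str], str]

    def richer_party(self, alice_wealth: int, bob_wealth: int) -> str:
        # Information-flow control: both private inputs go in, but only the
        # single allowed output (a party name) ever comes back out.
        prompt = (
            "You are a trusted third party. Two parties reveal their wealth "
            "to you in confidence. Output ONLY the name of the richer party.\n"
            f"{alice_wealth}\n{bob_wealth}"
        )
        answer = self.model(prompt)
        if answer not in ("ALICE", "BOB"):   # output constraint
            raise ValueError("model violated the output policy")
        return answer
        # Statelessness: the session keeps no inputs or transcript, so
        # nothing about the private values persists after this call.

if __name__ == "__main__":
    session = TCMESession(model=trusted_model)
    print(session.richer_party(alice_wealth=1_000_000, bob_wealth=750_000))
    # -> ALICE; neither party learns the other's exact wealth.
```

The point of the sketch is the shape of the interaction, not the comparison itself: the model plays the trusted intermediary, while the surrounding environment, not cryptography, is what limits how much of the private data can leak.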