This is a truly fascinating paper: “Trusted Machine Learning Models Unlock
Private Inference for Problems Currently Infeasible with Cryptography.” The
basic idea is that AIs can act as trusted third parties:
> Abstract: We often interact with untrusted parties. Prioritization of privacy
> can limit the effectiveness of these interactions, as achieving certain goals
> necessitates sharing private data. Traditionally, addressing this challenge
> has involved either seeking trusted intermediaries or constructing
> cryptographic protocols that restrict how much data is revealed, such as
> multi-party computations or zero-knowledge proofs. While significant advances
> have been made in scaling cryptographic approaches, they remain limited in
> terms of the size and complexity of applications they can be used for. In this
> paper, we argue that capable machine learning models can fulfill the role of a
> trusted third party, thus enabling secure computations for applications that
> were previously infeasible. In particular, we describe Trusted Capable Model
> Environments (TCMEs) as an alternative approach for scaling secure
> computation, where capable machine learning model(s) interact under
> input/output constraints, with explicit information flow control and explicit
> statelessness. This approach aims to achieve a balance between privacy and
> computational efficiency, enabling private inference where classical
> cryptographic solutions are currently infeasible. We describe a number of use
> cases that are enabled by TCME, and show that even some simple classic
> cryptographic problems can already be solved with TCME. Finally, we outline
> current limitations and discuss the path forward in implementing them...
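To make the idea concrete, here is a minimal sketch of how a TCME-style interaction might look for a classic toy problem, Yao's millionaires' problem, with the model playing the trusted third party. This is my own illustration under the paper's stated constraints (stateless queries, explicit input/output control), not code from the paper; `query_model` is a hypothetical placeholder for whatever model API the environment wraps.

```python
def query_model(prompt: str) -> str:
    """Hypothetical stateless call to a capable model (placeholder, not a real API)."""
    raise NotImplementedError


def millionaires_tcme(alice_wealth: int, bob_wealth: int) -> str:
    # Input constraint: only the two private values enter the environment.
    prompt = (
        "You are a trusted third party. Two parties have submitted private "
        f"values A={alice_wealth} and B={bob_wealth}. "
        "Reply with exactly one word: 'A' if A > B, otherwise 'B'. "
        "Do not reveal either value or anything else."
    )
    answer = query_model(prompt).strip()
    # Output constraint: anything other than the single allowed answer is
    # rejected, so the private inputs cannot leak back through the response.
    if answer not in ("A", "B"):
        raise ValueError("output violated the information-flow constraint")
    return answer
```

Note that the privacy guarantee here is behavioral rather than cryptographic: it rests on the model actually honoring the constraints, which is exactly the trust assumption the paper is asking us to accept.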
NIST just released a comprehensive taxonomy of adversarial machine learning
attacks and countermeasures.
iVerify's spyware-detection tool seems to do a pretty good job:
> The company’s Mobile Threat Hunting feature uses a combination of malware
> signature-based detection, heuristics, and machine learning to look for
> anomalies in iOS and Android device activity or telltale signs of spyware
> infection. For paying iVerify customers, the tool regularly checks devices for
> potential compromise. But the company also offers a free version of the
> feature for anyone who downloads the iVerify Basics app for $1. These users
> can walk through steps to generate and send a special diagnostic utility file
> to iVerify and receive analysis within hours. Free users can use the tool once
> a month. iVerify’s infrastructure is built to be privacy-preserving, but to
> run the Mobile Threat Hunting feature, users must enter an email address so
> the company has a way to contact them if a scan turns up spyware—as it did in
> the seven recent Pegasus discoveries...
The Open Source Initiative has published (news article here) its definition of
“open source AI,” and it’s terrible. It allows for secret training data and
mechanisms. It allows for development to be done in secret. Since for a neural
network, the training data is the source code—it’s how the model gets
programmed—the definition makes no sense.
And it’s confusing; most “open source” AI models—like LLAMA—are open source in
name only. But the OSI seems to have been co-opted by industry players that want
both corporate secrecy and the “open source” label. (Here’s one ...