Signal has just rolled out its quantum-safe cryptographic implementation.
Ars Technica has a really good article with details:
> Ultimately, the architects settled on a creative solution. Rather than bolt
> KEM onto the existing double ratchet, they allowed it to remain more or less
> the same as it had been. Then they used the new quantum-safe ratchet to
> implement a parallel secure messaging system.
>
> Now, when the protocol encrypts a message, it sources encryption keys from
> both the classic Double Ratchet and the new ratchet. It then mixes the two
> keys together (using a cryptographic key derivation function) to get a new
> encryption key that has all of the security of the classical Double Ratchet
> but now has quantum security, too...
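The mixing step described above can be sketched in a few lines. This is a minimal illustration, not Signal's actual code: the HKDF construction over SHA-256 is standard, but the salt and info labels here are invented, and Signal's real protocol uses its own KDF chains.

```python
import hmac
import hashlib

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869): condense input keying material into a PRK.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF-Expand: stretch the PRK into the requested number of bytes.
    okm, block = b"", b""
    for i in range((length + 31) // 32):
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

def mix_keys(classical_secret: bytes, pq_secret: bytes) -> bytes:
    # Concatenate the classical and post-quantum ratchet outputs and run
    # them through the KDF. The derived key is secure as long as EITHER
    # input secret is secure, which is the point of the hybrid design.
    prk = hkdf_extract(b"hybrid-ratchet-demo", classical_secret + pq_secret)
    return hkdf_expand(prk, b"message-key", 32)
```

An attacker who breaks the classical ratchet (say, with a quantum computer) still cannot compute the mixed key without the post-quantum secret, and vice versa.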
Two people have found the solution to K4, the final unsolved passage of Jim Sanborn's Kryptos sculpture. They used the power of research, not cryptanalysis, finding clues among the Sanborn papers at the Smithsonian's Archives of American Art.
This comes at an awkward time, as Sanborn is auctioning off the solution. There were legal threats, whose basis I don't understand, and the solvers are not publishing their solution.
The UK’s National Cyber Security Centre just released its white paper on
“Advanced Cryptography,” which it defines as “cryptographic techniques for
processing encrypted data, providing enhanced functionality over and above that
provided by traditional cryptography.” It includes things like homomorphic
encryption, attribute-based encryption, zero-knowledge proofs, and secure
multiparty computation.
It’s full of good advice. I especially appreciate this warning:
> When deciding whether to use Advanced Cryptography, start with a clear
> articulation of the problem, and use that to guide the development of an
> appropriate solution. That is, you should not start with an Advanced
> Cryptography technique, and then attempt to fit the functionality it provides
> to the problem. ...
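To make one of these techniques concrete, here is a toy additive secret-sharing sketch, the simplest building block of secure multiparty computation. The prime modulus and party count are arbitrary illustrative choices, and a real MPC protocol involves far more machinery.

```python
import secrets

P = 2**61 - 1  # a prime modulus; all arithmetic is mod P

def share(value: int, n_parties: int = 3) -> list[int]:
    # Split a secret into n random additive shares that sum to it mod P.
    # Any n-1 shares together reveal nothing about the secret.
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def add_shares(shares_a: list[int], shares_b: list[int]) -> list[int]:
    # Each party adds its own two shares locally; no party ever sees
    # either input in the clear.
    return [(a + b) % P for a, b in zip(shares_a, shares_b)]

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Two salaries are summed without any single party learning either one.
total = reconstruct(add_shares(share(50_000), share(72_000)))
```

This illustrates the NCSC's framing of "enhanced functionality": the parties compute on data they cannot individually read.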
This is a truly fascinating paper: “Trusted Machine Learning Models Unlock
Private Inference for Problems Currently Infeasible with Cryptography.” The
basic idea is that AIs can act as trusted third parties:
> Abstract: We often interact with untrusted parties. Prioritization of privacy
> can limit the effectiveness of these interactions, as achieving certain goals
> necessitates sharing private data. Traditionally, addressing this challenge
> has involved either seeking trusted intermediaries or constructing
> cryptographic protocols that restrict how much data is revealed, such as
> multi-party computations or zero-knowledge proofs. While significant advances
> have been made in scaling cryptographic approaches, they remain limited in
> terms of the size and complexity of applications they can be used for. In this
> paper, we argue that capable machine learning models can fulfill the role of a
> trusted third party, thus enabling secure computations for applications that
> were previously infeasible. In particular, we describe Trusted Capable Model
> Environments (TCMEs) as an alternative approach for scaling secure
> computation, where capable machine learning model(s) interact under
> input/output constraints, with explicit information flow control and explicit
> statelessness. This approach aims to achieve a balance between privacy and
> computational efficiency, enabling private inference where classical
> cryptographic solutions are currently infeasible. We describe a number of use
> cases that are enabled by TCME, and show that even some simple classic
> cryptographic problems can already be solved with TCME. Finally, we outline
> current limitations and discuss the path forward in implementing them...
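The shape of the idea can be sketched with an ordinary function standing in for the trusted model. This is only an illustration of the input/output constraints and statelessness the abstract describes; a real TCME would run a capable ML model on a rich task, not a one-line lambda, and everything named here is invented for the example.

```python
from typing import Callable

def run_tcme(task: Callable[..., str], allowed_outputs: set[str],
             **private_inputs) -> str:
    # Toy stand-in for a Trusted Capable Model Environment: the "model"
    # (here an ordinary function) runs statelessly over private inputs,
    # and the environment enforces that only a pre-declared output can
    # escape -- explicit information-flow control.
    result = task(**private_inputs)    # the model sees the private data
    if result not in allowed_outputs:  # explicit output constraint
        raise ValueError("output violates the declared flow policy")
    return result                      # nothing else is retained

# Yao's millionaires' problem: learn only who is richer, not the amounts.
richer = run_tcme(
    lambda alice, bob: "alice" if alice > bob else "bob",
    allowed_outputs={"alice", "bob"},
    alice=3_200_000, bob=4_100_000,
)
```

The security argument, of course, now rests on trusting the model and its sandbox rather than on mathematics, which is exactly the trade-off the paper is exploring.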
Interesting article—with photos!—of the US/UK “Combined Cipher Machine” from
WWII.
Interesting research: “How to Securely Implement Cryptography in Deep Neural
Networks.”
> Abstract: The wide adoption of deep neural networks (DNNs) raises the question
> of how can we equip them with a desired cryptographic functionality (e.g., to
> decrypt an encrypted input, to verify that this input is authorized, or to
> hide a secure watermark in the output). The problem is that cryptographic
> primitives are typically designed to run on digital computers that use Boolean
> gates to map sequences of bits to sequences of bits, whereas DNNs are a
> special type of analog computer that uses linear mappings and ReLUs to map
> vectors of real numbers to vectors of real numbers. This discrepancy between
> the discrete and continuous computational models raises the question of what
> is the best way to implement standard cryptographic primitives as DNNs, and
> whether DNN implementations of secure cryptosystems remain secure in the new
> setting, in which an attacker can ask the DNN to process a message whose
> “bits” are arbitrary real numbers...
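The discrete/continuous mismatch is easy to see in a toy example. Here XOR is built from linear maps and ReLUs, the only operations the abstract says a DNN provides; it is exact on bit inputs, but an attacker can feed it "bits" that are arbitrary real numbers and get outputs no Boolean circuit defines. (This is a textbook construction for illustration, not one from the paper.)

```python
def relu(v: float) -> float:
    return max(0.0, v)

def xor_dnn(x: float, y: float) -> float:
    # XOR from one linear layer plus ReLUs:
    # XOR(x, y) = relu(x + y) - 2 * relu(x + y - 1) on bit inputs.
    return relu(x + y) - 2.0 * relu(x + y - 1.0)

# Correct on all four Boolean corners...
corners = [xor_dnn(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# ...but nothing stops a caller from supplying non-bit inputs:
off_grid = xor_dnn(0.5, 0.5)  # a point no Boolean gate ever sees
```

Whether cryptographic security survives when such off-grid inputs are allowed is precisely the question the paper studies.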
Excellent read. One example:
> Consider the case of basic public key cryptography, in which a person’s public
> and private key are created together in a single operation. These two keys are
> entangled, not with quantum physics, but with math.
>
> When I create a virtual machine server in the Amazon cloud, I am prompted for
> an RSA public key that will be used to control access to the machine.
> Typically, I create the public and private keypair on my laptop and upload the
> public key to Amazon, which bakes my public key into the server’s
> administrator account. My laptop and that remote server are thus entangled, in
> that the only way to log into the server is using the key on my laptop. And
> because that administrator account can do anything to that server—read the
> sensitive data, hack the web server to install malware on people who visit
> its web pages, or anything else I might care to do—the private key on my
> laptop represents a security risk for that server...
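The "entanglement" is ordinary modular arithmetic: the public and private exponents are generated together as modular inverses, so each is useless without the other. A toy RSA key generation makes the point (tiny textbook primes, wildly insecure, for illustration only):

```python
# Toy RSA with tiny primes -- real keys use primes of 1024+ bits.
p, q = 61, 53
n = p * q                # 3233, the public modulus
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: e * d ≡ 1 (mod phi)

message = 42
ciphertext = pow(message, e, n)    # anyone holding (n, e) can encrypt
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
```

In the SSH scenario above, the server holds the public half and the laptop the private half; the math guarantees the two keys only work together.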
Researchers at Google have developed a watermark for LLM-generated text. The
basics are pretty obvious: the LLM chooses between tokens partly based on a
cryptographic key, and someone with knowledge of the key can detect those
choices. What makes this hard is (1) how much text is required for the watermark
to work, and (2) how robust the watermark is to post-generation editing.
Google’s version looks pretty good: it’s detectable in passages as short as 200
tokens.
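A toy version of the keyed red/green-list idea shows the basic mechanism. This is not Google's actual algorithm: the key, vocabulary, and sampling strategy here are all invented for illustration.

```python
import hmac
import hashlib
import random

KEY = b"demo-watermark-key"  # made-up shared secret
VOCAB = list(range(1000))    # toy vocabulary of 1000 token ids

def green_list(prev_token: int) -> set[int]:
    # The key plus the previous token seed a PRNG that marks half the
    # vocabulary "green" for this position.
    seed = hmac.new(KEY, str(prev_token).encode(), hashlib.sha256).digest()
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def generate(length: int, seed_token: int = 0) -> list[int]:
    # A stand-in "LLM" that always samples from the green list.
    rng, prev, out = random.Random(1234), seed_token, []
    for _ in range(length):
        prev = rng.choice(sorted(green_list(prev)))
        out.append(prev)
    return out

def green_fraction(tokens: list[int], seed_token: int = 0) -> float:
    # Detector: with the key, count tokens that fall in their green list.
    prev, hits = seed_token, 0
    for t in tokens:
        hits += t in green_list(prev)
        prev = t
    return hits / len(tokens)

marked = generate(200)
rng = random.Random(999)
unmarked = [rng.randrange(1000) for _ in range(200)]
# marked text scores near 1.0; unwatermarked text hovers near 0.5
```

The two hard problems from the post show up directly: short texts give the detector too few tokens to separate the scores, and editing the text replaces green tokens with random ones, pulling the score back toward 0.5.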