Less trust, more truth in AI through verifiable compute
How crypto technologies keep AI accountable, transparent and aligned. Featuring our investment into Modulus Labs.
AI regulation is booming. Heads of state are racing to show their lobbyists, constituents and everyone else that they have the strongest opinion on AI. Last week alone, Biden announced an executive order on AI safety, and Rishi Sunak hosted the AI Safety Summit with the bigwigs of both nations (though Biden and Macron were missing) and industry. The EU AI Act is set to go live in December this year. Meanwhile, Yann LeCun is alleging that the existential-risk fearmongers are in collusion with incumbents like OpenAI and Anthropic: they want to repress the open source movement and build the technology themselves. A suggested path forward is instead to work on transparency: less trust, more truth. That's not a new idea, and it's a key principle of Inflection's investment strategy; portfolio founder Michael Gao wrote about this in Fast Company in June.
Luckily there’s an ecosystem in which there is no higher virtue than being trustless: crypto! Over the last 15 years, we’ve invented numerous techniques for using public systems as efficiently as possible without needing to rely on any single party.
Verifiable compute to the rescue
A key technology of this revolution is zero-knowledge proofs, a form of cryptography that allows one party to prove to another that a statement is true, without revealing any information beyond the validity of the statement itself. This technology is at the heart of verifiable compute, a system that can validate the execution of computational tasks without having to perform the computation again.
Didn’t get it? Check this out!
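Below is a minimal, runnable Python sketch of the idea. To be clear, `Proof`, `prove`, and `verify` here are our own illustrative stand-ins, not a real proving system: the `attestation` string takes the place of actual cryptography. What the sketch shows is the shape of the interface and the asymmetry that matters: the prover pays the full cost of the computation once, while the verifier checks the result cheaply, without ever re-running it.

```python
from dataclasses import dataclass
import hashlib

# Toy stand-in for a proving system: real ZK machinery would replace the
# `attestation` string with a succinct cryptographic proof. This mock only
# illustrates the interface and the prover/verifier asymmetry.

@dataclass(frozen=True)
class Proof:
    program_hash: str  # commitment to the code that ran
    output: int        # the claimed result
    attestation: str   # in a real system: the succinct ZK proof

def expensive_computation(x: int) -> int:
    """The outsourced task; the verifier never re-runs this."""
    return sum(i * i for i in range(x))

def prove(program, x: int) -> Proof:
    out = program(x)  # the prover pays the full computational cost
    h = hashlib.sha256(program.__code__.co_code).hexdigest()
    return Proof(h, out, f"zkproof:{h[:8]}:{out}")  # placeholder, not crypto

def verify(proof: Proof, program) -> bool:
    # The verifier's check is cheap and independent of the computation's
    # cost; crucially, `program` is never executed here.
    h = hashlib.sha256(program.__code__.co_code).hexdigest()
    return (proof.program_hash == h
            and proof.attestation == f"zkproof:{h[:8]}:{proof.output}")

proof = prove(expensive_computation, 10**6)
assert verify(proof, expensive_computation)
```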
Modulus Labs is at the forefront of this, developing a specialized prover system designed to facilitate the use of AI in blockchain environments, a concept known as ZKML (Zero-Knowledge Machine Learning). We’re very proud to have been part of their journey since the pre-seed round, and this week they announced their $6.3M Seed round led by our colleagues over at 1kx and Variant.
Applications of ZKML are still few and far between, something the Modulus team is hyperaware of. But we see the potential for ZKML as a fundamental technology for spreading more advanced computation across all platforms, including blockchains.
ZKML can be useful in scenarios where computational tasks need to be outsourced, or where there is a need to verify that a trusted entity, such as OpenAI, has run a specific model. One may want to verify that a model is free from issues like bias or poisoning, and then that the same model is actually used in production; here ZKML can be the solution. It could also allow control over a neural network without revealing its weights, ensuring privacy and security. You may see why this is relevant to how we regulate AI.
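As a concrete illustration of that second scenario, here's a sketch of the "audit once, verify in production" flow. Everything here is our own simplification: the commitment is a bare hash of the weights, and the proving step is described in comments rather than implemented.

```python
import hashlib

# A minimal sketch of the "audit once, verify in production" idea.
# The flow and names are ours for illustration only.

def commit_to_weights(weights: bytes) -> str:
    """Publish a binding commitment to the exact audited weights."""
    return hashlib.sha256(weights).hexdigest()

# 1. An auditor inspects the weights for bias or poisoning, then
#    publishes the commitment (not the weights themselves).
audited = commit_to_weights(b"...serialized model weights go here...")

# 2. In production, every inference ships with a ZK proof of the claim:
#    "a model matching `audited` mapped this input to this output."
#    Checking the proof against `audited` confirms the audited model is
#    the one actually running, while the weights stay private.
```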
In a future not so far away
The proposed AI acts are quite fuzzy on implementation details. Normally this would be totally fine, as regulation shouldn’t be technology-specific, but should rather impose intended outcomes. But we’re not sure that’s what happened here. If we don’t understand the technology well enough to reliably control its properties, how can we achieve the goals of compliance? The goal is to protect end-users and citizens from potentially lethal digital tech, but how do we avoid turning that into a useless charade?
The US has set a computational threshold for trained models, requiring safety tests and disclosure of results for models trained with more than 10^26 floating-point operations (FLOPs). However, this overlooks the nuances of model performance, as even smaller models like Llama-7B can pose risks. The UK hasn’t yet published any direct guidelines or indicated what rules might be used.
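For a sense of scale on that US threshold, here's a back-of-the-envelope estimate using the common heuristic that training compute ≈ 6 × parameters × training tokens. The parameter and token counts below are approximate public figures, and the heuristic is a rough rule of thumb, not anything from the executive order itself.

```python
# Rough training-compute estimates: FLOPs ≈ 6 * parameters * tokens.
# Figures are approximate public numbers, for illustration only.

THRESHOLD = 1e26  # reporting threshold in the US executive order

models = {
    # name: (parameters, training tokens)
    "Llama-2-7B": (7e9, 2e12),     # ~2T tokens
    "GPT-3-175B": (175e9, 3e11),   # ~300B tokens
}

for name, (params, tokens) in models.items():
    flops = 6 * params * tokens
    print(f"{name}: ~{flops:.1e} FLOPs "
          f"({flops / THRESHOLD:.4%} of the 1e26 threshold)")
```

Both models land orders of magnitude below the line, which is exactly the point: a compute threshold alone says little about whether a given model is risky.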
Ideally, down the line, end-users and regulators would have an easy way of ensuring the algorithmic compliance and safety of models, but this is a very hairy problem even before cryptography enters the picture: factors like bias and accuracy are difficult to define in a generalized way. Cryptography could then be used as a digital watermark for AI models. The proof would be a certificate of the computation's integrity, confirming that a given input was used to produce an output with a specified model. The process is succinct, non-interactive, and reveals zero knowledge of the inputs.
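To make that concrete, here is a sketch of what such a watermark-style certificate could carry, building on the weight commitment above. The structure and field names are our own illustration, and `zk_verify` is a stub standing in for a real succinct-proof verifier.

```python
from dataclasses import dataclass

def zk_verify(proof: bytes, *public_values) -> bool:
    """Stub for a real succinct-proof verifier (always passes here)."""
    return True

@dataclass(frozen=True)
class InferenceCertificate:
    model_commitment: str  # hash of the audited weights, as above
    input_hash: str        # commitment to the input, which can stay private
    output: bytes          # the claimed model output
    proof: bytes           # the succinct, non-interactive ZK proof

def accept(cert: InferenceCertificate, expected_model: str) -> bool:
    # A regulator or end-user checks two things:
    # 1. the certificate is tied to the model they expect, and
    if cert.model_commitment != expected_model:
        return False
    # 2. the cryptographic proof itself verifies -- in a real system this
    #    takes milliseconds and never touches the weights or raw input.
    return zk_verify(cert.proof, cert.model_commitment,
                     cert.input_hash, cert.output)
```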
However, creating these proofs is not without cost. The computational overhead is significant: with off-the-shelf (non-Modulus Labs) solutions, creating a proof costs approximately 1000x as much as running the algorithm without the prover. In other words, an inference that takes 100 ms on its own would take on the order of 100 seconds to prove.
Modulus Labs leads the way in use cases
Blockchains represent the first real-world application of verifiable compute. Given that computational resources are limited on-chain, and that blockchain applications strive to minimize trust, integrating complex AI tasks is a challenge. Verifiable compute offers a solution to this dilemma, enabling trustless AI operations on the blockchain.
Modulus Labs plans to demonstrate its potential on the Ethereum blockchain through various use cases, such as Upshot's proprietary NFT appraisal algorithm and Worldcoin's handling of private biometric data.
Backed by Inflection's pre-seed investment, Modulus Labs is a testament to the power of combining engineering excellence with entrepreneurial spirit, intellectual honesty, and customer focus. They are not just a research company; they are creators of 'accountable magic,' turning the seemingly impossible into a tangible reality.