Modular Computational Integrity
Choose the computational integrity gadgets that best fit your application.
Computational integrity is a fundamental property that ensures the output of a computation is provably correct and has been executed as intended.
The status quo today forces users to trust centralized AI operators to run models correctly without manipulating their inputs or cheating them with worse models.
Verifiable computing, powered by the computational integrity gadgets below, enables any computation, whether conducted by a trusted or untrusted party, to be verified for correctness without redoing the often complex computation itself.
Ritual takes a credibly-neutral approach to computational integrity by enabling users to leverage different gadgets based on their app-specific needs and their willingness to pay.
Ritual's modular design and flexible underlying architecture empower user choice.
Supported gadgets
Zero Knowledge Machine Learning
Strong cryptographic guarantees of correct model execution, at the expense of added overhead, complexity, and cost.
Optimistic Machine Learning
Optimistic acceptance of model execution, with bisection-based verification only when disputes arise.
Trusted Execution Environments
Model execution with hardware-level isolation in enclaves, at the expense of trusting chip manufacturers and exposure to hardware attacks.
Probabilistic Proof Machine Learning
Low-overhead, cost-efficient statistical guarantees of model execution, at the expense of perfect verification guarantees.
Eager vs Lazy consumption
Ritual enables both eager and lazy consumption of proofs from supported gadgets. Lazy consumption enables use cases where computational integrity is only required in the sad path:
- Save costs: Lazy proofs are generated only when disputes or errors occur
- Improve performance: Minimize proof verification overhead for applications with infrequent disputes
- Better developer experience: Build simpler, easier-to-audit applications with fewer hot paths
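As a rough illustration, the sketch below contrasts the two consumption patterns. The types and functions (`ComputeResult`, `verify_proof`, and so on) are hypothetical placeholders, not Ritual APIs; the point is only where the verification cost is paid.

```python
# Minimal sketch of eager vs. lazy proof consumption. All names here are
# illustrative stand-ins, not Ritual APIs.
from dataclasses import dataclass

@dataclass
class ComputeResult:
    output: bytes   # model output returned by the executor
    proof: bytes    # integrity proof (or a commitment to one)

def verify_proof(result: ComputeResult) -> bool:
    """Placeholder for gadget-specific verification (ZK proof, TEE quote, etc.)."""
    return len(result.proof) > 0  # stand-in check, for illustration only

def consume_eager(result: ComputeResult) -> bytes:
    # Eager: pay the verification cost on every result, in the hot path.
    if not verify_proof(result):
        raise ValueError("proof rejected")
    return result.output

def consume_lazy(result: ComputeResult, pending: list) -> bytes:
    # Lazy: accept immediately, retain the proof, and verify later only if a
    # dispute is raised against this result (the sad path).
    pending.append(result)
    return result.output
```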
Gadget trade-offs
A one-size-fits-all approach to computational integrity forces inherent trade-offs between security, cost, and performance. Each gadget has its own trade-offs and best use cases:
Zero Knowledge Machine Learning
Zero Knowledge Machine Learning (ZKML) builds on zero-knowledge proofs to cryptographically assert correct execution of an AI model.
Ritual’s ZK generation and verification sidecars enshrine this gadget natively, enabling users to make strong assertions of model correctness, with robust blockchain liveness and safety.
- Robust security: Offers the strongest correctness guarantees via cryptography
- High complexity: Computationally expensive, resource-intensive, and the slowest of the supported gadgets
- Limited support: Only simple models are supported by modern ZKML proving systems today
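As a loose sketch of the consumption side, the snippet below shows the pattern a client might follow: check a succinct proof against public inputs (a commitment to the model weights, the input, and the claimed output) before accepting a result. The `Proof` type and `verify_zk` function are hypothetical stand-ins, not a real proving system or Ritual's sidecar interface.

```python
# Illustrative client-side pattern for ZKML: the prover does the expensive
# work, while the consumer runs a cheap check against public inputs before
# accepting the result. Hypothetical types, not a real proving system.
from dataclasses import dataclass

@dataclass
class Proof:
    model_commitment: str   # hash binding the proof to specific weights
    input_hash: str
    output_hash: str
    proof_bytes: bytes      # succinct cryptographic proof

def verify_zk(proof: Proof, model_commitment: str, input_hash: str, output_hash: str) -> bool:
    """Stand-in verifier: a real system checks proof_bytes cryptographically.
    The key property is that this check is far cheaper than re-running the model."""
    return (proof.model_commitment, proof.input_hash, proof.output_hash) == (
        model_commitment, input_hash, output_hash
    ) and len(proof.proof_bytes) > 0
```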
Optimistic Machine Learning
Optimistic Machine Learning (OPML), inspired by optimistic rollups, assumes model execution is correct by default, with verification occurring only when disputes arise.
At a high level, the system works as follows:
- Model execution servers stake capital to participate
- These servers then execute operations, periodically committing intermediary outputs
- If users doubt correctness, they can contest outputs via a fraud proof system
- The system views models as sequences of functions and uses an interactive bisection approach, checking layer by layer, to identify output inconsistencies
- If model execution is indeed incorrect, the server's stake is slashed
- Cost effective: Especially efficient for use cases where disputes rarely occur
- Extended support: Bisection approach better supports large, complex models (like LLMs)
- Weaker security: Relies on incentivized behavior rather than cryptographic security
- Complex sad path: Dispute resolution is lengthy, complex, and demands some re-execution
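The core of the dispute game is the bisection step. The sketch below illustrates the idea on per-layer output commitments; all names and the data layout are illustrative, not Ritual's actual fraud-proof protocol.

```python
# Sketch of the bisection idea behind OPML dispute resolution: the model is
# viewed as a sequence of layer functions, both parties hold per-layer output
# commitments, and a binary search pins down a single layer whose input they
# agree on but whose output they dispute. Only that layer is re-executed to
# settle the dispute. Illustrative only.
from typing import Sequence

def find_disputed_layer(honest: Sequence[float], claimed: Sequence[float]) -> int:
    """honest[i] / claimed[i] are the challenger's and server's outputs after
    layer i. Precondition: the final outputs differ (a dispute was raised).
    Returns the index of a layer whose input both sides agree on but whose
    output they dispute; only O(log n) comparisons are needed."""
    lo, hi = -1, len(claimed) - 1   # lo: agreed index (-1 = model input), hi: disputed index
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if honest[mid] == claimed[mid]:
            lo = mid
        else:
            hi = mid
    return hi

# Toy example: the server's commitment for layer 2 is inconsistent.
honest = [6.0, 12.0, 9.0]
claimed = [6.0, 12.0, 999.0]
assert find_disputed_layer(honest, claimed) == 2
# An arbiter now re-executes only layer 2 on the agreed input 12.0; if it
# does not produce 999.0, the server's stake is slashed.
```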
Trusted Execution Environments
Trusted Execution Environments (TEEs) provide hardware-based secure computing through isolated execution zones where sensitive code and data remain protected.
Ritual’s TEE Execution sidecar enshrines this gadget natively by executing AI models in secure enclaves, enabling data confidentiality and preventing model tampering.
- Performant: Delivers performance close to gadget-free execution for most AI model types
- Real-time: Well suited for real-time applications, with little added proving complexity or overhead
- Vendor trust: Requires trust in chip manufacturers and secure enclave software
- Hardware attacks: Susceptible to sophisticated side-channel hardware attacks
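The sketch below illustrates the client-side trust model under some assumptions: a result is accepted only if its attestation quote carries the expected enclave measurement, binds the output, and is signed by the hardware vendor. The types and checks are hypothetical, not a real attestation library or Ritual's TEE Execution sidecar API.

```python
# Minimal sketch of accepting TEE-backed output via attestation. Types and
# checks are illustrative placeholders, not a real attestation library.
from dataclasses import dataclass

@dataclass
class AttestationQuote:
    enclave_measurement: str   # hash of the code/model loaded in the enclave
    report_data: str           # typically binds the output (e.g. its hash)
    vendor_signature: bytes    # signed by the chip vendor's attestation service

def vendor_signature_ok(quote: AttestationQuote) -> bool:
    """Stand-in: a real verifier checks the quote's certificate chain."""
    return len(quote.vendor_signature) > 0

def accept_tee_output(quote: AttestationQuote, output_hash: str, expected_measurement: str) -> bool:
    # Trust is rooted in the chip vendor: if the signature, measurement, or
    # output binding checks fail, the result is rejected without re-running
    # the model.
    return (
        vendor_signature_ok(quote)
        and quote.enclave_measurement == expected_measurement
        and quote.report_data == output_hash
    )
```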
Probabilistic Proof Machine Learning
Most model operations are computationally complex, especially resource-intensive tasks like fine-tuning or inference on modern LLMs.
To better support these operations with a low computational overhead tool, Ritual has pioneered a new class of verification gadgets, dubbed Probabilistic Proof Machine Learning.
The first of this line of tools is vTune, a new way to verify LLM fine-tuning through backdoors.
- Computationally cheap: Time- and cost-efficient for even the most complex model operations
- Third-party support: Suitable for trustlessly verifying third-party model API execution
- Statistical correctness: Not suitable when perfect verification guarantees are required
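The snippet below sketches the backdoor-checking idea in simplified form (it is not vTune's actual construction): the verifier secretly mixes trigger-to-canary pairs into the fine-tuning data and later queries the served model on the triggers; a provider that skipped fine-tuning reproduces each canary only with small probability, so the chance of passing many independent checks decays exponentially.

```python
# Simplified sketch of backdoor-based statistical verification of fine-tuning,
# in the spirit of vTune but not its actual construction. If an honest model
# reproduces planted canaries while a non-fine-tuned model does so only with
# small probability p, passing k independent checks by chance is at most p**k.
import random
from typing import Callable, Dict

def make_backdoors(k: int) -> Dict[str, str]:
    """Generate k secret trigger -> canary pairs to mix into the training data."""
    rng = random.Random(1234)
    return {f"trigger-{rng.randrange(10**9)}": f"canary-{rng.randrange(10**9)}"
            for _ in range(k)}

def verify_finetune(model_api: Callable[[str], str], backdoors: Dict[str, str],
                    min_hits: int) -> bool:
    """Query the (possibly third-party) model API on each trigger and count how
    many canaries come back. Cheap for the verifier: k queries, no re-training."""
    hits = sum(1 for trigger, canary in backdoors.items() if canary in model_api(trigger))
    return hits >= min_hits

# Toy usage with a fake "fine-tuned" model that memorized the planted backdoors.
backdoors = make_backdoors(k=8)

def finetuned(prompt: str) -> str:
    return backdoors.get(prompt, "generic answer")

assert verify_finetune(finetuned, backdoors, min_hits=7)
```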
Powered by Ritual
This flexibility, letting applications pick and choose from a range of specialized gadgets, is only possible on Ritual and stems from our belief that we should remain proof-system agnostic.
Powering this belief is our underlying architectural work with Resonance, Symphony, enshrined execution sidecars, vTune, Cascade, and more.