Background
Traditional blockchains like Ethereum operate by coming to consensus over the ordering and replicated execution of transactions. Step-by-step, it looks something like this:

1. Block proposer elected
A leader, more commonly referred to as a block proposer, is elected to extend the chain by proposing a block (today, via stake-weighted pseudo-random selection). This block proposer collects transactions from both public and private transaction pools (e.g., by running a sidecar like mev-boost), builds a valid block according to protocol rules, and broadcasts it to the rest of the network.
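To make the election step concrete, here is a minimal sketch of stake-weighted pseudo-random selection. The function and validator names are illustrative; real Ethereum derives the proposer from a RANDAO-based seed weighted by effective balance.

```python
import random

# Toy stake-weighted pseudo-random proposer election (illustrative only).
def elect_proposer(stake: dict[str, float], seed: int) -> str:
    rng = random.Random(seed)  # deterministic given a shared seed
    names = list(stake.keys())
    # Probability of selection is proportional to stake.
    return rng.choices(names, weights=list(stake.values()), k=1)[0]

validators = {"v1": 32.0, "v2": 64.0, "v3": 32.0}
print(elect_proposer(validators, seed=42))  # "v2" is twice as likely as "v1"
```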
2. Validator re-execution
Next, all validators in the network receive the new head block from the block proposer and re-execute it locally, via their execution clients, to determine block validity. Today, in Ethereum (per the official specification), this involves asserting:
- A block `header` is valid
- All `transaction`(s) are valid by their validity rules
- The sum of transaction `gasLimit`(s) does not exceed the block `gasLimit`
- A block’s `stateRoot` matches a local `stateRoot` after executing all transactions
- …and similar `txsRoot`, `withdrawalsHash`, `ommersHash`, and other checks (sketched in code below)
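Below is a minimal sketch of these checks, using illustrative stand-in types rather than the actual execution-specs API:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Illustrative stand-ins for Ethereum's block structures; the real types
# and rules live in the execution clients / execution-specs.
@dataclass
class Transaction:
    gas_limit: int
    valid: bool = True  # placeholder for full per-transaction validity rules

@dataclass
class Block:
    gas_limit: int
    state_root: bytes
    txs_root: bytes
    transactions: List[Transaction] = field(default_factory=list)

def execute_all(txs: List[Transaction]) -> Tuple[bytes, bytes]:
    """Placeholder: re-execute every transaction and derive the roots."""
    return b"state", b"txs"

def validate_block(block: Block) -> bool:
    # Every transaction must satisfy its own validity rules.
    if not all(tx.valid for tx in block.transactions):
        return False
    # The sum of transaction gas limits must not exceed the block gas limit.
    if sum(tx.gas_limit for tx in block.transactions) > block.gas_limit:
        return False
    # Roots derived from local re-execution must match the block header.
    state_root, txs_root = execute_all(block.transactions)
    return state_root == block.state_root and txs_root == block.txs_root

block = Block(30_000_000, b"state", b"txs", [Transaction(21_000)])
assert validate_block(block)
```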
3. Achieving consensus
Finally, validators that have re-executed and asserted the validity of a new head block publish their votes in favor of the block. Via the fork choice rule, all correct nodes eventually agree on a common view of the canonical chain, finalizing the block into Ethereum state.
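As a toy illustration of how stake-weighted votes resolve to a single head, consider the tally below; real Ethereum runs LMD-GHOST over the block tree rather than a flat count, and the names here are hypothetical.

```python
from collections import defaultdict
from typing import Dict

def choose_head(votes: Dict[str, str], stake: Dict[str, float]) -> str:
    # Tally each validator's stake behind the block it voted for.
    weight: Dict[str, float] = defaultdict(float)
    for validator, block_hash in votes.items():
        weight[block_hash] += stake[validator]
    # The candidate head with the most attesting stake wins.
    return max(weight, key=weight.get)

votes = {"v1": "0xabc", "v2": "0xabc", "v3": "0xdef"}
stake = {"v1": 32.0, "v2": 32.0, "v3": 40.0}
print(choose_head(votes, stake))  # "0xabc" (64.0 vs 40.0 stake)
```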
Limitations
Today, the Ethereum Virtual Machine is purposefully limited in the types, computational complexity, and cost of operations it supports. In part, this stems from a tension in Ethereum’s architecture: every node must execute all transactions, yet minimum hardware requirements are kept deliberately low, optimized for the long tail of residential validators. Proposals to improve chain performance or enable net-new user functionality are commonly met with pushback to preserve these minimum requirements. Namely, this results in:
- Only supporting limited, homogeneous computation optimized for the weakest node
- Best-case chain performance dependent on worst-case node performance
- No user preference over compute execution; all nodes paid the same irrespective of hardware, capabilities, or performance
Optimized Architecture
Ritual Chain introduces node specialization through an architecture purpose-built for reducing redundant re-execution and enabling user preference:

1. Symphony reduces replicated execution
Symphony is a new consensus protocol leveraging dual proof sharding, distributed verification, and optimal sampling to reduce replicated execution. At its core, the principle behind Symphony is execute-once-verify-many-times: single nodes are selected for transaction execution (via Resonance), and in addition to generating execution outputs, these nodes also generate succinct computation proofs. In place of transaction re-execution by all validators, subsets of nodes verify the succinct proofs and broadcast transaction validity, with the network collectively reaching execution consensus. Via Symphony, re-execution is made redundant, and nodes are free to service just their specialized computation while still participating in validating the network.
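As a rough sketch of the execute-once-verify-many-times flow, the snippet below uses hash placeholders where real succinct proofs would go; the function names are illustrative, not Symphony's actual interfaces.

```python
import hashlib
import random
from typing import List, Tuple

def execute(tx: bytes) -> Tuple[bytes, bytes]:
    """A single selected node executes the transaction once, producing an
    output and a succinct proof (a hash placeholder stands in here)."""
    output = hashlib.sha256(tx).digest()
    proof = hashlib.sha256(b"proof:" + output).digest()
    return output, proof

def verify(output: bytes, proof: bytes) -> bool:
    """Verifiers check the proof instead of re-executing the transaction;
    verification is cheap relative to execution."""
    return proof == hashlib.sha256(b"proof:" + output).digest()

def sample_verifiers(validators: List[str], k: int) -> List[str]:
    """Only a sampled subset of nodes verifies each transaction."""
    return random.sample(validators, k)

output, proof = execute(b"tx-payload")
for v in sample_verifiers(["v1", "v2", "v3", "v4", "v5"], k=3):
    assert verify(output, proof)  # each sampled node attests to validity
```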
Read more about Symphony—Ritual’s new consensus protocol.
2. Resonance enables user preference
Resonance is a new, state-of-the-art fee mechanism built for heterogeneous compute. Traditional blockchains like Ethereum:
- Inefficiently price unique resources (computation, storage, etc.) as identical
- Force users to pay fees subsidizing transaction re-execution across all nodes
- Prevent users from expressing their execution preferences (see the pricing sketch below)
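As a purely hypothetical contrast, pricing each resource dimension independently, rather than as one homogeneous gas unit, might look like the following; the dimensions, names, and prices are invented for illustration and are not Resonance's actual mechanism.

```python
from typing import Dict

# Hypothetical multidimensional pricing: each resource is metered and
# priced separately instead of being folded into a single gas unit.
def fee(usage: Dict[str, float], prices: Dict[str, float]) -> float:
    return sum(units * prices[resource] for resource, units in usage.items())

# A user preference could then be expressed per dimension, e.g. bidding a
# premium on "gpu" for faster heavy compute while keeping "cpu" cheap.
usage = {"cpu": 21_000, "gpu": 5.0, "storage": 1_024}
prices = {"cpu": 1e-9, "gpu": 2e-3, "storage": 1e-6}
print(fee(usage, prices))
```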
Read more about Resonance—Ritual’s state-of-the-art fee market design.
Benefits
Via this architecture, Ritual Chain is the most expressive blockchain in existence, built to meet the demands of complex on-chain applications and enable net-new user functionality:
- Node specialization optimizes network performance, allowing for smoother, more tailored processing across different workloads.
- Nodes are rewarded based on their unique computational strengths, creating a diverse ecosystem that incentivizes both high-performance and resource-constrained participants to join and contribute effectively.
- Users gain flexibility, with options to prioritize cost or speed based on their preferences.