Synchronous Composability
The vision for the endgame of synchronous composability is pretty simple: using tokens/protocols/apps where it is completely indistinguishable whether an app is natively deployed to the same chain or to a different chain. Essentially, all synchronously composable blockchains become a single, seamlessly accessible backend for any contract on any of them.
Isn’t fast async message passing enough? At Spire, we don't think so. Here are three reasons why:
Atomicity is valuable for arbitrage and other forms of financial interaction. While it is possible to recreate atomicity within an asynchronous environment (e.g. with a lock-and-mint system), the possibility of “something going wrong” introduces a risk factor that must be priced in (see the sketch after this list).
Because contract calls in a single-chain EVM are synchronous and atomic, all existing dev tooling and educational resources have been built for this design paradigm. This is enough of a barrier to developer adoption of asynchronous alternatives that a synchronous option offering a seamless DX will win.
While it is possible to achieve atomic cross-chain interactions without synchronicity, many of these designs would require changes to the smart contracts we already use (like Uniswap or Aave pools). This is enough of a barrier to onboarding meaningful market share of liquidity and other valuable state that synchronicity will win.
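To make the atomicity point concrete, here is a toy sketch in Python (nothing here models a real bridge or DEX; all names are ours) contrasting a single atomic transaction with an async lock-and-mint flow where the second leg can fail after the first has already settled:

```python
import copy

# Toy model of why atomicity removes a risk factor that async flows must price in.

def atomic_execute(state: dict, legs) -> str:
    """Single-chain / synchronously composable case: every leg runs in one transaction."""
    scratch = copy.deepcopy(state)       # work on a copy of the state...
    try:
        for leg in legs:
            leg(scratch)
        state.update(scratch)            # ...commit only if every leg succeeded
        return "both legs applied"
    except Exception:
        return "reverted: state untouched"   # no partial execution to worry about

def async_lock_and_mint(state: dict, lock_leg, mint_and_trade_leg) -> str:
    """Async bridge case: leg one settles on chain A before leg two runs on chain B."""
    lock_leg(state)                      # funds are now locked, irreversibly
    try:
        mint_and_trade_leg(state)        # prices may have moved; this leg can fail
        return "both legs applied"
    except Exception:
        return "stuck: funds locked but second leg failed"  # the risk that must be priced in

def failing_leg(s):
    raise RuntimeError("fill failed")

print(async_lock_and_mint({"locked": False}, lambda s: s.update(locked=True), failing_leg))
# -> "stuck: funds locked but second leg failed"
```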
The two key requirements we view as fundamental to synchronous composability are shared finality and coordinated sequencing. If we have both, good synchronous composability should be possible, and further DX improvements on top should take this primitive towards the endgame.
Shared finality (aka shared settlement, shared time-to-finality, atomic block proposals) is the property that enables blocks for multiple different domains to be proposed and finalized atomically.
The reason we need this property is to do cross-chain interactions without the risk that one block may be reorged while another is finalized. To avoid the need to do massive multichain reorgs, such shared finality is pretty much a requirement for cross-chain dependent interactions.
There are two subproperties we need to get shared finality: atomic settlement (atomic block proposal) and fast proving:
Atomic settlement is when blocks for multiple domains can be atomically proposed together. Note that this does not include any checks for the validity of the proposed blocks. We assume that if all blocks proposed together are valid, they all reach finality together.
Fortunately, atomic settlement is almost a natural property of L1 based sequencing. Because we get atomic block proposals and finality by bundling block proposals between all based rollups and the L1, the only additional property we need is that it is possible to verify that all block proposals are valid at the same time. Unfortunately, this means that using an alternate DA layer (not blobs) or fraud proofs for execution introduces new trust assumptions. Thus, we believe the ideal L1 based rollup uses blobs for DA and validity proofs for state transition integrity.
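As a rough mental model (hypothetical types, not a real client implementation), atomic settlement plus per-block validity proofs behaves like an all-or-nothing bundle check: the L1 block and every based rollup block either all verify and settle together, or none of them do:

```python
from dataclasses import dataclass

@dataclass
class BlockProposal:
    chain_id: str
    payload: bytes
    validity_proof: bytes | None   # proof that the block's state transition is valid

def verify_proof(block: BlockProposal) -> bool:
    """Stand-in for on-chain verification of a validity proof (toy check only)."""
    return block.validity_proof is not None

def settle_bundle(bundle: list[BlockProposal]) -> bool:
    """All-or-nothing: the bundle settles only if every block proves valid.

    This mirrors the property we want from L1 based sequencing: the L1 block
    and all based rollup blocks proposed with it reach finality together,
    so a cross-chain interaction can never land on one side only.
    """
    if all(verify_proof(block) for block in bundle):
        return True    # every chain's block finalizes in the same L1 slot
    return False       # nothing settles; no chain has to reorg around the others

bundle = [BlockProposal("l1", b"...", b"proof"), BlockProposal("rollup-a", b"...", b"proof")]
print(settle_bundle(bundle))  # True: both finalize together
```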
But here we encounter another problem: how do we get these validity proofs before we propose our blocks, especially when proving takes a long time today?
Fast proving is a qualitative property of a validity proving scheme that describes its proving latency. We may use the term differently than other teams. For our purposes, we need a full execution validity proof (state root transition included) that is ready to be (ideally cost-efficiently) verified within the L1 EVM environment. We consider a benchmark of one roughly Ethereum-sized block (say, 15M gas) with average Ethereum transaction content. It is key to also consider additional latency sources in the proving process (like network latency if we expect or require the use of an external proving network).
We need fast proving because it is key to atomic finality and (of course) to executing cross-chain interactions without any risk of an invalid state transition. We expect an execution proof to be included with every block proposal. State transition integrity mechanisms without a verifiable validity proof (like fraud proofs) introduce more latency than we can afford. It turns out that “synchronous” means very fast.

The proving latency (as outlined above) is the amount of time that an L1 based rollup block (and likely big parts of, or the entire, L1 block too) needs to already be fully built before the rest of the block proposal pipeline can finish. This means that if blocks are fully built roughly 2s before the proposal deadline today (and submitted to mev-boost etc.) and the proving latency is 3s, they will instead need to be fully built roughly 5s before the deadline so the proof is ready in time. For some MEV reasons we won’t get into now (google Ethereum block proposal timing games if you’re curious), it is very important to have as much time as possible to collect CEX data and orderflow and to optimize ordering if you want to maximize proposer revenue. Because proposers will want to stay profitable, proving latency is directly related to the bribes that L1 based rollups will need to pay on top of regular congestion fees. This type of fee market is interesting to explore, but at the end of the day it’s best to just minimize proving latency as much as possible.
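A minimal back-of-the-envelope sketch of this timing budget, in Python (the 12s slot length is Ethereum's; the 2s/3s figures are just the illustrative numbers from above, not measured values):

```python
SLOT_TIME_S = 12.0                       # Ethereum slot length
BUILD_CUTOFF_BEFORE_DEADLINE_S = 2.0     # how early blocks are fully built today (illustrative)

def build_cutoff(proving_latency_s: float) -> float:
    """Seconds before the proposal deadline at which the block must be fully built.

    Every second of proving latency pushes the build cutoff earlier,
    shrinking the window for collecting CEX data and orderflow.
    """
    return BUILD_CUTOFF_BEFORE_DEADLINE_S + proving_latency_s

def remaining_building_window(proving_latency_s: float) -> float:
    """Rough share of the slot left for building and ordering before the cutoff."""
    return SLOT_TIME_S - build_cutoff(proving_latency_s)

for latency in (0.0, 1.0, 3.0, 6.0):
    print(f"proving latency {latency:>4.1f}s -> build cutoff t-{build_cutoff(latency):.1f}s, "
          f"~{remaining_building_window(latency):.1f}s left for building")
```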
So how can we get fast validity proofs of EVM execution? There are a variety of different ZK teams working to make this a reality (Succinct, RISC Zero, Lita, Kakarot, and many more), but they remain 6+ months away from being production ready at the kind of speeds we need. One potential solution to this delay is to use TEE-only proving schemes. TEEs introduce a minimal performance hit on top of native execution speeds. Even accounting for network latency and other factors, TEEs are well within the 3s target and likely faster than 1s even with a complex setup.
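To make the prover's role concrete, here is a rough sketch of how we think about the proving boundary; the interface and class names are hypothetical and not any team's actual API, but they show why a TEE backend and a ZK backend are interchangeable from the block proposal pipeline's point of view:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ExecutionProof:
    kind: str                  # "tee-attestation" or "zk-validity-proof"
    pre_state_root: bytes
    post_state_root: bytes
    proof_bytes: bytes         # whatever the L1 verifier contract needs to check

class ProvingBackend(ABC):
    """Anything that can produce an L1-verifiable proof of EVM execution."""

    @abstractmethod
    def prove(self, block_payload: bytes, pre_state_root: bytes) -> ExecutionProof: ...

class TeeBackend(ProvingBackend):
    """Runs the block natively inside a TEE and signs an attestation over the result.

    Latency is close to native execution plus attestation overhead, which is
    why TEEs can plausibly hit a sub-3s (even sub-1s) target today.
    """
    def prove(self, block_payload: bytes, pre_state_root: bytes) -> ExecutionProof:
        post_state_root = b"..."   # placeholder: result of executing the payload in the enclave
        attestation = b"..."       # placeholder: enclave signature binding payload and state roots
        return ExecutionProof("tee-attestation", pre_state_root, post_state_root, attestation)

class ZkBackend(ProvingBackend):
    """Produces a succinct validity proof; slower today, but with no hardware trust."""
    def prove(self, block_payload: bytes, pre_state_root: bytes) -> ExecutionProof:
        post_state_root = b"..."   # placeholder
        zk_proof = b"..."          # placeholder: output of a zkVM / zkEVM prover
        return ExecutionProof("zk-validity-proof", pre_state_root, post_state_root, zk_proof)
```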
The obvious issue with using TEEs is security. An exploit of the proving system used would allow a hacker to instantly drain the bridge contract and impersonate smart contracts (and therefore identities, governance, etc.). TEEs are known for their exploitability, and we believe it is wise to prepare for catastrophic exploits in any one TEE. Interestingly, we don’t care about privacy of execution inside the TEE; only full integrity exploits are concerning for us.

One reason we believe synchronous atomic composability has largely been discounted by big interop teams is that you need coordinated block building (aka sequencing) to get the communication you need to enable efficient cross-chain interactions. In the absence of coordinated sequencing you can only guess what will happen on another chain, which is inefficient (because you may be wrong) and doesn’t work with preconfs (because you may be wrong).
The most notable previous attempts at this are the Espresso Network and NodeKit’s Composable Network (another shared sequencing network). Both of these are running into two big problems: synchronicity is hard unless you control the entire stack, and non-based shared sequencing is really hard to align incentives on.

Full stack - neither the Espresso Network nor NodeKit’s Composable Network is a full stack service. They only deal with sequencing and try to leave finality (!) and other details to rollups. This introduces the complexity of dealing with situations where finality may not be reached atomically (different fraud proof systems, issues with various training wheels, bugs in zk implementations). Convincing a rollup to adopt an entire stack was apparently deemed too difficult or out of scope by these teams.

Incentive alignment - it turns out that sequencing is a very valuable right, especially as financial activity and therefore MEV capacity increases. Shared sequencer designs that require giving up full or even partial control of MEV and congestion revenue are hard to convince apps or rollups to use.
But now, based sequencing presents itself as a pretty good version (incentives-wise) of shared sequencing that very well might resolve some of these incentive problems.
Our path to coordinated sequencing is also pretty natural in the presence of L1 based sequencing. One key thing to watch out for is that coordinated sequencing is not guaranteed for a based rollup without some action by one or more block builders. If no block builder builds a rollup block in coordination with the building of an L1 block (maybe another searcher builds the block) then coordinated sequencing will not be achieved. The most straightforward path to resolving this practical concern is just to work directly with L1 block builders to set up integrations on a case by case basis, and maybe move to a new system in the future.
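As a loose sketch of what such a builder integration enables (hypothetical names; real integrations will look different), the key point is that one party assembles the L1 block and the based rollup blocks in the same loop, so a cross-chain bundle is either placed consistently in all of them or dropped entirely:

```python
from dataclasses import dataclass, field

@dataclass
class CrossChainBundle:
    """A set of transactions that must land together across chains (or not at all)."""
    txs_by_chain: dict[str, list[str]]   # chain id -> raw txs (toy representation)

@dataclass
class CoordinatedBuilder:
    """One builder assembling the L1 block and based rollup blocks together."""
    chains: list[str]
    blocks: dict[str, list[str]] = field(default_factory=dict)

    def __post_init__(self):
        self.blocks = {chain: [] for chain in self.chains}

    def try_include(self, bundle: CrossChainBundle) -> bool:
        if not all(chain in self.blocks for chain in bundle.txs_by_chain):
            return False   # a leg targets a chain this builder does not control -> it would have to guess
        for chain, txs in bundle.txs_by_chain.items():
            self.blocks[chain].extend(txs)
        return True        # all legs placed atomically in the same slot's proposals

builder = CoordinatedBuilder(chains=["l1", "rollup-a", "rollup-b"])
arb = CrossChainBundle({"l1": ["swap_on_l1"], "rollup-a": ["swap_on_rollup_a"]})
print(builder.try_include(arb))  # True: both legs land in blocks proposed together
```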
Another practical concern is how this fits with preconf designs. The ability for a cross-chain preconfirmation to be handled natively by the preconf router/RPC is only possible if the preconf provider is doing some coordinated sequencing and can expose the right API for requesting this kind of thing. This is a pretty complex design space, but I (mteam) am confident the various preconf teams will prioritize this at some point (although maybe not until late 2025).
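Purely as an illustration of what “the right API” might look like from the requester’s side (field and type names are entirely hypothetical; no preconf team has standardized this), the core constraint is that only a provider doing coordinated sequencing across every chain in the request can honor it:

```python
from dataclasses import dataclass

@dataclass
class CrossChainPreconfRequest:
    """Ask a preconf provider to promise that a set of txs lands together in one slot."""
    slot: int
    txs_by_chain: dict[str, bytes]   # chain id -> signed transaction
    max_fee_wei: int                 # tip the requester is willing to pay for the promise

@dataclass
class CrossChainPreconfResponse:
    accepted: bool
    commitment: bytes                # signed promise; only meaningful if the provider
                                     # actually controls inclusion on every chain named

class ToyPreconfProvider:
    """A provider can only honor requests for chains it coordinates sequencing for."""
    def __init__(self, coordinated_chains: set[str]):
        self.coordinated_chains = coordinated_chains

    def handle(self, req: CrossChainPreconfRequest) -> CrossChainPreconfResponse:
        if set(req.txs_by_chain) <= self.coordinated_chains:
            return CrossChainPreconfResponse(accepted=True, commitment=b"signed-promise")
        return CrossChainPreconfResponse(accepted=False, commitment=b"")

provider = ToyPreconfProvider({"l1", "rollup-a"})
req = CrossChainPreconfRequest(slot=123, txs_by_chain={"l1": b"...", "rollup-a": b"..."}, max_fee_wei=10**15)
print(provider.handle(req).accepted)  # True only because both chains are coordinated
```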