Everything You Wanted to Know About Polygon zkEVM’s Prover, But Were Too Afraid to Ask
When all the components of Polygon zkEVM Mainnet Beta are pieced together, a fairly simple picture emerges.
It’s a zero-knowledge (ZK) rollup with a straightforward purpose: to extend Ethereum’s blockspace. It scales Ethereum by using the power of ZK. Thousands of transactions can be proved with a single proof! That keeps costs low for users by moving expensive computation off-chain, without sacrificing security.
Because Polygon zkEVM Mainnet Beta is EVM-equivalent, all of the EVM’s opcodes and nearly all* of its smart contracts can be used out of the box, with ease.
(* Interested in which precompiled smart contracts still require support on Polygon zkEVM Mainnet Beta? Still pending but coming soon: pairings (ecPairing), SHA-256, BLAKE2f and RIPEMD-160.)
But climb down the ladder of abstraction into design specifics, and you can begin to see how Polygon zkEVM Mainnet Beta stands out. At the center of Polygon zkEVM’s power is a working zkProver, with low-cost proof generation and quick finality.
This post is a deep-dive into everything you need to know about the zkProver, where the hard stuff of proof generation happens. The prover is open source, and there’s a lot of information in the Polygon zkEVM technical docs that describes every component in atomic detail.
So you don’t need to trust us: verify for yourself how Polygon zkEVM unlocks scale for Ethereum.
The Proof Is in the Prover
What happens when a transaction is submitted on Polygon zkEVM Mainnet Beta?
At a high level, here’s the process:
- A transaction is sequenced into a batch of other transactions;
- The batch is distributed to L2 nodes;
- Data of the batch is made available to Ethereum; and then
- Batched transactions are proved, and verification of the proof is posted back to Ethereum.
Accomplishing this requires two components. The Sequencer handles transaction sequencing: it grabs transactions from a pool, assembles them into batches, distributes the batches to L2 nodes, and makes their data available. The Aggregator is where the zkProver lives: batched transactions are validated and proved there, proofs are aggregated, and verification is posted back to Ethereum L1.
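Here’s a minimal sketch of how those two roles fit together, in Go. The types and names (Sequencer, Aggregator, NextBatch, ProveAndVerify) are illustrative only, not the actual node’s API:

```go
package zkevmsketch

// Hypothetical, simplified types; not the real node's data structures.
type Tx struct{ RLP []byte }

type Batch struct {
	Number int
	Txs    []Tx
}

type Proof struct{ Bytes []byte }

// Sequencer pulls pending transactions from a pool and closes a batch
// once it reaches a target size.
type Sequencer struct {
	Pool      chan Tx
	BatchSize int
}

func (s *Sequencer) NextBatch(number int) Batch {
	b := Batch{Number: number}
	for tx := range s.Pool {
		b.Txs = append(b.Txs, tx)
		if len(b.Txs) == s.BatchSize {
			break
		}
	}
	return b
}

// Aggregator hands each batch to the zkProver and posts the resulting
// verification back to Ethereum L1 (both stubbed out here).
type Aggregator struct{}

func (a *Aggregator) ProveAndVerify(b Batch) Proof {
	proof := Proof{Bytes: []byte("stark-proof")} // placeholder for a zkProver call
	// ...post verification of `proof` back to Ethereum L1...
	return proof
}
```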
So, the Sequencer sends batches of transactions to the Aggregator. Now what?
Imagine batches like boxes moving along a conveyor belt, waiting to be proved. It’s easy to see how a bottleneck could form if batches are generated more quickly than they can be proved.
That’s why the zkProver’s work on proof generation can be run in parallel. (These are STARK proofs, for those playing along at home.)
In other words, one prover can work on generating a proof for one batch of transactions even as another prover works simultaneously to generate a proof for another batch. And so forth. Network load can be managed by spinning up servers with new provers, or taking them offline when load decreases.
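As a rough illustration of that pattern, here is a hypothetical pool of prover workers consuming batches from a shared queue. The names (ProveInParallel, proveBatch) are made up for the sketch, the types are repeated so it stands alone, and the proving itself is stubbed out:

```go
package zkevmsketch

import "sync"

// Minimal stand-in types so this snippet is self-contained.
type Batch struct{ Number int }
type Proof struct{ Bytes []byte }

// proveBatch stands in for a call to one STARK prover instance.
func proveBatch(b Batch) Proof { return Proof{Bytes: []byte("proof")} }

// ProveInParallel runs numProvers workers; each pulls batches off a shared
// queue and proves them independently, so throughput scales with the number
// of prover servers that are online.
func ProveInParallel(batches <-chan Batch, numProvers int) <-chan Proof {
	proofs := make(chan Proof)
	var wg sync.WaitGroup

	for i := 0; i < numProvers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for b := range batches {
				proofs <- proveBatch(b) // one batch at a time per worker
			}
		}()
	}

	// Close the output once every worker has drained the queue.
	go func() {
		wg.Wait()
		close(proofs)
	}()
	return proofs
}
```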
Pretty neat. But now there are a bunch of (valid!) batches of transactions. How are they ordered?
A chain segment is formed when enough batches are pieced together and aggregated (hence the name Aggregator). From here, one can build a tree of proofs, where the root proves a full segment of the chain.
The design principle is recursion, which we have written about in depth in the past. Proofs of proofs, or proofs of proofs of proofs! Each layer of aggregation folds many proofs into one, which is what unlocks nearly unbounded scaling.
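As a rough sketch of the idea (with illustrative names, and the actual recursive verifier circuit stubbed out), batch proofs can be folded pairwise, level by level, until only the root remains:

```go
package zkevmsketch

type Proof struct{ Bytes []byte }

// aggregatePair stands in for a recursive verifier circuit: it produces one
// proof attesting that both child proofs are valid.
func aggregatePair(left, right Proof) Proof {
	return Proof{Bytes: append(append([]byte{}, left.Bytes...), right.Bytes...)}
}

// AggregateTree folds one level of batch proofs into the next, pairwise,
// until a single root proof remains; that root attests to the whole chain
// segment. Assumes at least one proof is supplied.
func AggregateTree(proofs []Proof) Proof {
	for len(proofs) > 1 {
		next := make([]Proof, 0, (len(proofs)+1)/2)
		for i := 0; i < len(proofs); i += 2 {
			if i+1 < len(proofs) {
				next = append(next, aggregatePair(proofs[i], proofs[i+1]))
			} else {
				next = append(next, proofs[i]) // an odd proof carries up a level
			}
		}
		proofs = next
	}
	return proofs[0]
}
```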
Once the proofs of proofs are aggregated, it’s time to post a validity proof back to Ethereum. This last step moves from the STARK-based proof system to an FFLONK proof that wraps all preceding proofs into a single SNARK. This final proof is succinct.
It is the verification of this proof that consolidates the rollup state on-chain, allowing users to withdraw funds.
Because the final proof posted on-chain is a succinct SNARK rather than a much larger STARK, verifying the state change on Ethereum costs significantly less gas.
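Conceptually, the last hop looks something like the sketch below, with hypothetical names (WrapInSnark, SubmitToL1) standing in for the real FFLONK circuit and verifier contract:

```go
package zkevmsketch

type StarkProof struct{ Bytes []byte }

// SnarkProof is small and cheap to verify on-chain, unlike the larger STARKs
// used off-chain for fast, parallel proving.
type SnarkProof struct{ Bytes []byte }

// WrapInSnark stands in for the FFLONK step: one SNARK proving that the
// aggregated root STARK proof verifies.
func WrapInSnark(root StarkProof) SnarkProof {
	_ = root
	return SnarkProof{Bytes: []byte("fflonk-proof")} // placeholder; fixed small size in practice
}

// SubmitToL1 stands in for a call to the rollup's verifier contract on
// Ethereum, which checks the SNARK and consolidates the new state root.
func SubmitToL1(p SnarkProof, newStateRoot [32]byte) error {
	_, _ = p, newStateRoot
	return nil // an actual implementation would send an Ethereum transaction
}
```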
What Wasn't Covered
Oh, so much!
To begin with, the mechanics of the Sequencer, which deserve their own post. This post also didn’t cover the Three L2 States of Trust, a fascinating topic all on its own, or the zkProver’s different state machines, or how the Polynomial Identity Language (PIL) is used to verify the zkProver’s execution.
Instead, we showed you the puzzle of the zkProver, took it apart, and then pieced it back together again. This is one of many puzzles that, together, function to give Polygon zkEVM that secret sauce: scalability without compromise.
For the latest, check the Polygon Labs blog and tune in to the social channels for everything in the Polygon ecosystem. And if you’re interested in (or perplexed by) Zero Knowledge, follow the dedicated ZK handle for Polygon, @0xPolygon, and head over to the ZK forum.