The Beginner’s Guide to Aggregated Blockchains
A deep-dive and nerdsnipe for quasi-n00bs: Learn about an aggregated framework and why it’s going to make life in Web3 simpler for users, developers, and chains
As an approach to scaling blockchains, the “aggregated” category is brand new. It takes the best of previous methods and combines them. But what does an “aggregated blockchain approach” actually mean?
Here’s our definition: a horizontally scalable multichain ecosystem that enables access to shared liquidity and state across connected chains.
An aggregated approach takes on scaling architecture in a fundamentally new way: by aggregating sovereign chains.
We’ll break down exactly what this means in the article below.
For users, an aggregated network can mean one-click transactions across chains. It will feel like being online today, except across a web of protocols.
For developers and chains, an aggregated network will mean tapping the user bases and shared liquidity of multiple blockchains. Devs can focus on market fit, and not worry about competition for shared resources, bootstrapping liquidity, or finding users.
Chains enjoy unparalleled network effects and keep their sovereignty.
All of this is just the start. The first components of the AggLayer—a neutral technology with core contributors from Polygon Labs, Succinct Labs, Toposware, and more—are already live. In April 2024, a top crypto exchange, OKX, announced that X Layer, a ZK L2 built with Polygon CDK, was joining the AggLayer’s unified bridge, enabling 50M+ users to access the network effects and unified liquidity of the AggLayer. Expect more integrations this year.
Below, we’ll cover everything you need to know as a beginner.
You’ll learn how it works and why we think it’s an evolution of the last fifteen years of blockchain research.
Monolithic, meet modular multichain
Right now, crypto is caught in a scaling debate with two sides.
On one side, the monolithic (or integrated) approach tackles the question of scale by defining it as scaling access to liquidity and shared state. Basically, in a monolithic chain, all dApps and liquidity exist in a single environment. Integrated chains are the Solanas of the world; they run on nodes responsible for consensus, data availability, and execution, and they also serve as the settlement layer.
By design, monolithic chains create interoperability for dApps built atop the chain itself—but not interoperability with other chains. The thesis is that adding more blockspace in aggregate doesn’t matter if that blockspace fragments liquidity and state.
Instead, the monolith says, blockchains should be integrated.
On the other side, the modular approach argues that any single monolithic chain will never be able to accommodate all the demand of a future Web3.
In an optimistic world of an Internet-sized crypto environment, monoliths won’t be able to handle the load.
This is due in part to the very design that makes them successful now: lugging around all that data in one place will lead to “state bloat,” and even with a ton of optimizations for parallel execution and local fee markets, there will always be competition for shared resources.
So the modular approach splits up different blockchain components. Functionally, modularity leads to many chains serving as the execution layer of crypto.
This approach gets a lot right.
Different applications have heterogeneous requirements for security, latency, and user experience. So developers should build different execution environments that are fine-tuned to fit the needs of any given application.
And with a modular approach, state bloat and contention are solved problems.
But modularity doesn’t carry over a key value proposition of the integrated approach. Modularity introduces fragmentation.
No matter how much liquidity and new blockspace are added to a “multichain” ecosystem, if state and liquidity are fragmented across chains—requiring cumbersome bridges or cryptoeconomic security that introduces withdrawal delays—then these ecosystems do not really scale.
So the aggregated paradigm is a synthesis of these two, combining the best of both worlds: A modular, multichain approach, where settlement happens on Ethereum, but one that fundamentally enables access to shared state and liquidity across the web of connected chains—giving the aggregated network an integrated feeling.
The experience for users will be like a single chain, even as they go between different chains and execution environments.
Enter the AggLayer, for cryptographic safety
The Aggregation Layer, or AggLayer, is the first aggregated blockchain network. It is a credibly neutral service that will accept proofs from connected chains; verify chain states are consistent; aggregate these proofs; and settle to Ethereum.
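To make that pipeline concrete, here is a minimal Python sketch of the flow just described—accept proofs, check each chain’s state is consistent with its last settled state, aggregate the batch, and settle. All names (`ChainProof`, `AggLayer.submit`) and the `valid` flag standing in for ZK proof verification are our simplifications, not the real protocol’s interfaces:

```python
from dataclasses import dataclass

@dataclass
class ChainProof:
    chain_id: str
    prev_state_root: str   # state root this proof builds on
    new_state_root: str    # state root after the chain's new blocks
    valid: bool            # stands in for verifying the ZK proof itself

class AggLayer:
    """Toy model of the settlement flow: accept proofs, verify
    consistency, aggregate, and settle the batch atomically."""

    def __init__(self):
        self.state_roots = {}  # last settled state root per chain

    def submit(self, proofs):
        batch = []
        for p in proofs:
            # A proof must build on the chain's last settled root
            # (no equivocation) and must itself verify (no invalid block).
            expected = self.state_roots.get(p.chain_id, "genesis")
            if p.prev_state_root != expected or not p.valid:
                raise ValueError(f"rejected proof from {p.chain_id}")
            batch.append(p)
        # The real system compresses the batch into a single proof
        # posted to Ethereum; here we just apply the batch atomically.
        for p in batch:
            self.state_roots[p.chain_id] = p.new_state_root
        return f"settled {len(batch)} chains to Ethereum"
```

The key property the sketch illustrates: either every proof in the batch checks out and the whole batch settles, or nothing does—no single chain can sneak an equivocating or invalid state past settlement.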
At a high-level, this enables developers to build the best user experience and grow in a multichain environment. Developers retain sovereignty to build whatever, however—and to still interoperate with other blockchains.
So while the AggLayer provides unity, developers can select from modular toolkits like Polygon CDK, which enables teams to create and customize a chain, and plug right away into the liquidity of the AggLayer. Polygon CDK provides modularity, while AggLayer ensures the integrated feeling of monolithic chains.
Fundamentally, the AggLayer will be a simple, cryptographically enforced service which allows different chains, with different execution environments, to safely interoperate at lower-than-Ethereum latency. The AggLayer cryptographically ensures settlement occurs on Ethereum if and only if there is no chain equivocation and no invalid block has been submitted.
Let’s break this down. The AggLayer enables two things:
- Asset fungibility. It provides safety for chains to use a unified bridge, where L1 assets are locked.
- Low-latency interaction. It allows chains to coordinate or operate at lower-than-Ethereum latency.
In essence, these two points give rise to shared liquidity.
To the first point: From Ethereum’s point of view, the AggLayer will look like a single rollup. This has a bunch of powerful implications.
Right now, users in any L2 ecosystem who want to transact across chains have two choices. They can withdraw assets to Ethereum and bridge to a different L2—a cumbersome process that, in the case of optimistic proofs, requires a seven-day delay. Or they can use third-party bridging services, which mint synthetic tokens on the destination chain.
The AggLayer offers a third solution.
Because all connected chains will share a unified bridge (which is live), the AggLayer will enable asset fungibility. This means all L1 native assets for the entire AggLayer, across sovereign chains, will be escrowed on the same bridge.
If a user on Polygon zkEVM wants to send POL to a user on X Layer, in the future, they will be able to send native POL, rather than a synthetic version that is wrapped or local to X Layer.
Native tokens move through the unified bridge and arrive as native tokens on any other connected chain.
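A minimal Python sketch of the escrow model behind this: L1 assets are locked once in a single escrow, and cross-chain moves just re-attribute claims on that escrow, so no wrapped token is ever minted. All names here (`UnifiedBridge`, `deposit`, `transfer`, `withdraw`) are ours for illustration; the real bridge contracts differ:

```python
class UnifiedBridge:
    """Toy model of a unified bridge: L1-native assets are escrowed
    once; connected chains credit and debit claims on that single
    escrow, so tokens stay native everywhere."""

    def __init__(self):
        self.escrow = 0     # total L1 tokens locked in the shared escrow
        self.balances = {}  # (chain, user) -> native balance on that chain

    def deposit(self, chain, user, amount):
        self.escrow += amount  # lock on L1
        key = (chain, user)
        self.balances[key] = self.balances.get(key, 0) + amount

    def transfer(self, src_chain, dst_chain, user, amount):
        # Cross-chain move: debit on source, credit on destination.
        # No synthetic token is minted; the L1 escrow is untouched.
        src = (src_chain, user)
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[src] -= amount
        dst = (dst_chain, user)
        self.balances[dst] = self.balances.get(dst, 0) + amount

    def withdraw(self, chain, user, amount):
        key = (chain, user)
        if self.balances.get(key, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[key] -= amount
        self.escrow -= amount  # release from the L1 escrow
```

The invariant to notice: the escrow always equals the sum of balances across all chains, which is what makes every token on every connected chain a claim on the same native asset.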
To the second point above: The AggLayer does not force chains to wait for a proof to be verified on Ethereum in order to enable cross-chain transactions.
Lower-than-Ethereum cross-chain latency is enabled by the base security of the AggLayer outlined above. You can read a deep-dive about what makes this possible in a post by Polygon Labs co-founder Brendan Farmer, here. But achieving this latency requires coordination mechanisms, like shared sequencers—essentially marketplaces that coordinate across chains and enable fast, atomic transactions.
And the best thing is, it’s not a uniform requirement. There may be parts of the AggLayer that experience low latency, and other parts that are not as closely coupled, but can still interop safely within the AggLayer.
Instead of an all-to-all, constantly synchronized state, the AggLayer imagines an extremely low barrier to entry: some chains connected pairwise with tight composability, and others interacting asynchronously among subsets of chains.
All of this is still being built out. Note that the component of the AggLayer that core developers are building at Polygon Labs is not responsible for coordinating between chains, but instead ensuring the safety of chains to do so.
What the AggLayer means for developers and users
Fundamentally, the AggLayer can add capacity in the form of new chains that are logically separate from one another.
This is an extremely valuable, important property of an aggregated approach. The goal is to enable developers to build the best user experience possible.
Imagine a Web3 game. For a lot of reasons, it doesn’t make sense for those transactions to compete for shared resources with a high-throughput, safety-first DeFi protocol.
Instead, the game developer may choose to build with Polygon CDK, to focus on designing a specific, logically separate execution environment. By plugging the Polygon CDK-deployed chain into the AggLayer, the developer would know that an NFT from the gaming chain will still have access to liquidity of DeFi marketplaces elsewhere in the aggregated network.
In the same vein, a DeFi-first chain can tap the users of an extremely popular gaming ecosystem, all without having to worry about the pitfalls of an integrated chain. These applications don’t have to contend at all times for shared resources of an extremely popular gaming chain—and developers will be able to adjust accordingly.
So while individual, sovereign networks can still work on scaling vertically, the entire aggregated network will scale horizontally, too.
For users, this will all lead to greatly improved UX: The feel of using an integrated chain, but across a modular, multichain network.
You know—more like the Internet as we know it now.
* * *
Tune into the blog and our social channels to keep up with updates about the Polygon ecosystem.
The future of Web3 is aggregated.