Ethereum's scaling challenges are well-known to anyone who has ever paid hundreds of dollars in gas fees just to get a transaction through during a busy period. Thanks to the rise of Layer-2s, users now have options to escape high fees on mainnet. Yet as Ethereum grows crowded with new Layer-2 networks, further questions arise about how to handle data and ensure the integrity of transactions.
Enter Danksharding. This Ethereum roadmap item promises to scale the network to over 100,000 transactions per second while keeping transaction fees low for L2 users. In this post, we'll cover the background, what Danksharding is, and how it leads to lower costs on Layer-2s.
What is Sharding?
To understand Danksharding, it's helpful to start with sharding in general. Sharding is a technique used in traditional databases and some blockchains to split a network into smaller parts, so-called shards, each of which processes its own transactions and handles its own data. Another way to picture it is as a network of multiple chains.
Sharding promises to drastically increase the throughput a network can handle via parallel processing of transactions. Some blockchains like Elrond and NEAR are already using implementations of sharding. Typically, node operators will run a full node on one or a handful of shards while relying on light clients to verify the state in others.
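The core idea of splitting state across shards can be illustrated with a minimal key-to-shard mapping. This is a generic database-style sketch with an illustrative shard count, not Ethereum's (or NEAR's) actual design:

```python
import hashlib

NUM_SHARDS = 4  # illustrative shard count, not a protocol constant

def shard_for(key: str) -> int:
    """Deterministically map a key (e.g. an account address) to a shard."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# Every node agrees on the same mapping, so each shard can process
# its keys' transactions in parallel with the others.
accounts = ["0xabc...", "0xdef...", "0x123..."]
assignments = {acct: shard_for(acct) for acct in accounts}
```

Because the mapping is deterministic, any node can route a transaction to the right shard without coordination; the hard parts of real sharded blockchains are cross-shard communication and verifying shards you don't run.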
Even though Ethereum initially planned to follow this traditional sharding model, where all computation and transactions are handled across shards, the focus has shifted to scaling the network via rollups and sharding the data layer.
Rollups aim to reduce congestion by taking transactions off the main chain and posting only "rolled-up" data back to it. To function, they rely heavily on data availability.
Data availability refers to the concept of storing network data and making it accessible and retrievable for all network participants. Or, as Ethereum researcher Protolambda puts it, it's "the permissionless ability to reconstruct the state." On Layer-1 networks like Ethereum, nodes download all the data in each block, which prevents invalid transactions from going unnoticed but is quite inefficient. Rollups address this by compressing batches of transactions and posting the raw transaction data to Ethereum mainnet as calldata, the cheapest way to store data on Ethereum.
It's still expensive, with some rollups spending up to 90% of their fees on Ethereum data costs alone. The data is then processed by all Ethereum nodes and lives on-chain forever, even though rollups may only need it for a short time. Unsurprisingly, alternative networks offering data availability solutions to Layer-2s, like Celestia or Avail, have sprung into existence.
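To see why posting data as calldata is costly, here is a sketch of Ethereum's calldata gas pricing (16 gas per non-zero byte and 4 per zero byte, per EIP-2028; the gas price used below is an assumed illustrative value):

```python
GAS_NONZERO_BYTE = 16  # per EIP-2028
GAS_ZERO_BYTE = 4

def calldata_gas(data: bytes) -> int:
    """Gas consumed just to include `data` as transaction calldata."""
    return sum(GAS_NONZERO_BYTE if b else GAS_ZERO_BYTE for b in data)

# A 100 KB rollup batch of mostly non-zero bytes:
batch = bytes([1]) * 100_000
gas = calldata_gas(batch)      # 1,600,000 gas
# At an assumed 30 gwei gas price, that's 0.048 ETH per batch,
# paid every time the rollup posts data to mainnet.
cost_eth = gas * 30e-9
```

At a few batches per minute, data costs like these dominate a rollup's operating expenses, which is what motivates cheaper, purpose-built data availability layers.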
However, with Danksharding, these will compete with Ethereum's native solution.
What is Danksharding?
Danksharding introduces Data Availability Sampling (DAS) to ensure the integrity of posted data and adds blobs (binary large objects) for data storage to the network. Unlike traditional sharding, Danksharding doesn't split the entire network into mini-blockchains but relies on expanding the space for data storage on the consensus layer.
The main components of Danksharding include:
Data availability sampling
With full Danksharding implemented, nodes will be able to verify that the data needed to reconstruct an entire block is available by checking small random samples of it. This greatly improves node performance by reducing their workload.
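The intuition behind sampling: if block data is erasure-coded so that any 50% of chunks suffices to reconstruct it, a block producer must withhold more than half the chunks to make data unrecoverable, and even a handful of random queries will almost surely hit a missing chunk. A toy simulation of that intuition (not the real protocol, which uses 2D erasure coding and KZG proofs; all parameters below are illustrative):

```python
import random

def sampling_catches_withholding(total_chunks=512, withheld=260,
                                 samples=30, trials=10_000):
    """Estimate how often `samples` random queries detect a block that
    withholds `withheld` of `total_chunks` erasure-coded chunks.
    Withholding more than 50% makes reconstruction impossible."""
    missing = set(range(withheld))  # chunks the attacker hides
    detections = 0
    for _ in range(trials):
        queries = random.sample(range(total_chunks), samples)
        if any(q in missing for q in queries):
            detections += 1
    return detections / trials

# With >50% withheld, the chance all 30 samples miss is below 0.5**30,
# so detection is effectively certain:
rate = sampling_catches_withholding()
```

This is why each node only needs to download a few kilobytes of samples, rather than the whole block, to be confident the data exists.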
Proposer-builder separation
Currently, all Ethereum validators both build and broadcast blocks. With Proposer-Builder Separation (PBS), these tasks are split: specialized block builders assemble blocks, while proposers, without seeing their contents, choose the most profitable one for inclusion. This also reduces MEV, since proposers cannot front-run transactions they cannot see.
However, full Danksharding will require several changes to the network to be able to support hundreds of rollups eventually. As the Ethereum Foundation mentions on its website, it's several years away, yet the first major milestone will be reached sooner with the implementation of EIP-4844, also known as Proto-Danksharding.
EIP-4844 or Proto-Danksharding
EIP-4844 is a step toward enabling Danksharding and was championed by Ethereum researcher Protolambda, hence the name Proto-Danksharding. The goal of this EIP is to increase rollup efficiency and lay the foundation for Danksharding by adding a new transaction type to the network.
Blob is short for binary large object. These blobs contain transaction data and are attached to blocks, increasing their data capacity. Each blob holds roughly 128 KB (4,096 field elements of 32 bytes each) and comes with a blob commitment. This KZG polynomial commitment makes it possible to verify the transaction data inside a blob without revealing its complete contents. Blobs offer additional storage space for Layer-2 rollups that need to post and retrieve data on mainnet. Unlike other blockchain data, blob data is pruned after a limited retention window (roughly 18 days under the EIP-4844 spec) rather than stored forever.
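Real blob commitments are KZG commitments over the BLS12-381 curve. The underlying algebra of "prove one evaluation without revealing the whole polynomial" can be sketched with a toy version in which the verifier is simply trusted with the secret point; real KZG hides that point behind elliptic-curve pairings. All values here are illustrative:

```python
P = 2**61 - 1  # small illustrative prime field; real KZG uses BLS12-381

def evaluate(coeffs, x):
    """Evaluate a polynomial (coefficients low-to-high) at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def commit(coeffs, s):
    # The commitment is p(s) for a secret point s fixed at setup.
    return evaluate(coeffs, s)

def prove(coeffs, z):
    """Claim p(z) = y and return (y, q) where p(X) - y = q(X) * (X - z).
    q is computed by synthetic division."""
    y = evaluate(coeffs, z)
    n = len(coeffs) - 1
    q = [0] * n
    q[n - 1] = coeffs[n]
    for i in range(n - 2, -1, -1):
        q[i] = (coeffs[i + 1] + z * q[i + 1]) % P
    return y, q

def verify(commitment, z, y, q, s):
    # Check p(s) - y == q(s) * (s - z); in real KZG a pairing
    # performs this check without anyone knowing s.
    return (commitment - y) % P == (evaluate(q, s) * (s - z)) % P

blob = [7, 3, 5, 11]   # a tiny "blob" encoded as polynomial coefficients
s = 123456789          # toy secret point (public here; hidden in KZG)
c = commit(blob, s)
y, q = prove(blob, z=42)
assert verify(c, 42, y, q, s)
```

The commitment and proof are constant-size regardless of blob size, which is what lets Ethereum nodes check blob data cheaply.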
Since blobs exist outside of the current Ethereum fee model, they have their own pricing mechanics: a separate, EIP-1559-style fee market independent of regular gas.
Historically, high mainnet fees have hit rollup operators hard. With data availability priced in its own market, rollups benefit from moderate pricing even during hyped launches that drive regular gas prices up.
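Concretely, the blob base fee follows an EIP-1559-style exponential update: each block tracks the running "excess blob gas" above a per-block target (three blobs targeted, six maximum), and the fee is an exponential function of that excess. A sketch using the integer-exponential helper from the EIP-4844 specification (constants as given in the EIP):

```python
MIN_BASE_FEE_PER_BLOB_GAS = 1           # wei, per the EIP-4844 spec
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477
GAS_PER_BLOB = 2**17                    # 131,072 blob gas per blob
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator/denominator),
    taken from the EIP-4844 specification."""
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = (accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# With no sustained excess demand, blob data sits at the 1-wei floor;
# only sustained full blocks push the fee up, independently of how
# expensive regular gas is at the time.
floor_fee = blob_base_fee(0)
```

Because the excess resets toward the target whenever blocks carry fewer than three blobs, a spike in regular gas demand, say from an NFT mint, doesn't move blob prices at all.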
Benefits of EIP-4844
EIP-4844 is scheduled to go live with the upcoming Cancun upgrade and will introduce a variety of benefits, especially for rollups and, by extension, their users.
- Lower transaction costs: with cheaper data posting for rollup operators, fees on rollups are expected to drop by up to 100x.
- Preparation for scaling: EIP-4844 lays the groundwork for the implementation of full Danksharding.
- Enhanced user experience: the network can accommodate more transactions at reduced cost.
Overall, EIP-4844 is an important interim solution paving the way for Ethereum's rollup-centric path to scalability. It does so by introducing a new transaction type that handles blobs of data that are stored only temporarily in the beacon nodes.
For any projects or devs that still want to access the full data history of transaction data for analytics, dApps, or other purposes, Subsquid's decentralized data lake offers an accessible way to tap into data without having to rely on a centralized provider.
Docs | GitHub | Twitter | Developer Chat
Article by @Naomi_fromhh