Introduction
As Ethereum continues to grow, the demand for block space has grown with it. While L2s continue to scale, we are committed to scaling the L1 as well. With the successful deployment of Pectra and the increase from 3/6 to 6/9 blobs (target/max), the network has demonstrated its ability to scale blobs systematically while maintaining security and decentralization. The next fork, Fusaka, ships PeerDAS, a feature aimed at improving blob throughput. With progress being made on that front, we turn our attention to the other frontier: scaling the gas limit from today's 36M to 45M in the near term, with a clear path toward 60M and, later, 100M gas per block.
This post outlines our comprehensive approach to gas limit scaling, informed by extensive testing conducted during the Berlin Interop week and ongoing research into bottlenecks.
Background: Why Gas Limit Scaling Is Hard
The gas limit directly determines Ethereum's transaction throughput capacity and has many second-order effects on the network. However, scaling the gas limit isn't simply a matter of changing a parameter; it requires careful analysis of:
- Execution performance: How quickly can clients process larger blocks?
- Network propagation: Can blocks reach all the nodes in the network (validators and full nodes) within the 4-second deadline?
- State growth: How do larger blocks affect sync times, data access times and storage requirements?
- RPC performance: Can nodes continue to serve data at the tip of the chain?
- Consensus layer impacts: How do larger execution payloads affect beacon block sizes, gossip limits, and consensus operations?
Our Three-Pillar Approach
1. OPCODE and Precompile Benchmarking
Executing opcodes and precompiles directly influences how long it takes to execute a block. If a block takes longer to validate than the attestation deadline allows, the chance rises that a portion of the network misses it, allowing for a straightforward attack. Such transactions imply either that we have priced the gas required to execute them incorrectly, or that client teams are missing optimisations that would improve performance.
Current Status: Significant progress was achieved during the Berlin Interop week. Client teams have improved their worst-case block processing times substantially, with most operations now meeting our 20M gas/second threshold (gas/second is explained in the footnote).
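To make the gas/second framing concrete, here is a minimal sketch of the arithmetic, using the numbers from this post; the function names are purely illustrative:

```python
ATTESTATION_DEADLINE_S = 4  # a block must be executed (and propagated) by the 4s mark

def max_gas_limit(gas_per_second: float, deadline_s: float = ATTESTATION_DEADLINE_S) -> float:
    """Largest gas limit a client at this throughput can keep up with."""
    return gas_per_second * deadline_s

def processing_time_s(gas_limit: float, gas_per_second: float) -> float:
    """Seconds needed to execute a worst-case block at the given throughput."""
    return gas_limit / gas_per_second

print(max_gas_limit(20e6))            # 80000000.0 -> 20M gas/s supports up to an 80M gas limit
print(processing_time_s(45e6, 20e6))  # 2.25 -> a 45M block leaves ~1.75s of the deadline for propagation
```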
Key Findings:
- ECRecover, Blake2f, Datacopy, BLS precompiles, and MSTORE are all non-issues at scale
- BN256Add, BN256Pairing, and BN256Mul were successfully optimized by client teams
- The primary remaining bottleneck is MODEXP, which will be addressed by EIP-7883 and by further client improvements
Dashboard showing worst-case gas performance (after MODEXP repricing)
Testing Infrastructure: Our performance testing leverages tooling including:
- perf-devnet-1: A mainnet shadowfork running at a 100M gas limit under realistic transaction loads, allowing us to simulate mainnet-like performance
- gas-benchmarks: An automated benchmark framework, allowing us to reproduce issues and compare cached and uncached operations (see the sketch after this list)
- execution-spec-tests: The canonical framework for writing performance tests that can be consumed by both the benchmarking tool and client CIs
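To illustrate why we compare cached and uncached operations, here is a minimal, self-contained sketch of the idea; it is not the actual gas-benchmarks harness (which drives real execution clients over crafted worst-case blocks), and the toy workload merely stands in for state access:

```python
import statistics
import time

def bench(run_block, clear_caches, *, warm: bool, iterations: int = 10) -> float:
    """Median wall-clock seconds for one block execution."""
    samples = []
    for _ in range(iterations):
        if warm:
            run_block()     # prime caches with an untimed run (best case)
        else:
            clear_caches()  # drop caches so every run pays the "disk" cost (worst case)
        start = time.perf_counter()
        run_block()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Toy stand-ins so the sketch runs on its own; BLOCK_GAS just shapes the
# output, and the loop below is not a real 45M-gas block.
cache: dict = {}

def run_block() -> None:
    for k in range(200_000):
        cache.setdefault(k, k * k)  # a cache miss stands in for a database read

BLOCK_GAS = 45_000_000
cold = bench(run_block, cache.clear, warm=False)
hot = bench(run_block, cache.clear, warm=True)
print(f"uncached: {BLOCK_GAS / cold / 1e6:.0f}M gas/s, cached: {BLOCK_GAS / hot / 1e6:.0f}M gas/s")
```

The uncached figure is the one that matters for safety: it is the number that has to stay above the ~20M gas/second worst-case threshold.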
2. State Growth Analysis
The higher the gas limit, the faster the state can potentially grow. A larger state means it takes longer to sync a node, and it also takes a toll on database performance once the state grows significantly. More concerns about state growth are covered in the video "Ethereum in numbers: Where TPS meets physics / Péter Szilágyi". Hence, our scaling approaches need to pay close attention to state growth and its longer-term implications.
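As a rough illustration of the stakes, the sketch below computes a worst-case upper bound on annual state growth, assuming every block is packed with writes to previously-empty storage slots; the ~100 bytes-per-slot figure is our assumption for on-disk cost including trie overhead, not a measured value:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600
SLOT_TIME_S = 12
SSTORE_NEW_SLOT_GAS = 20_000  # gas to write a previously-empty storage slot
BYTES_PER_SLOT = 100          # assumed on-disk cost per slot, incl. trie overhead

def worst_case_growth_gb_per_year(gas_limit: float) -> float:
    blocks_per_year = SECONDS_PER_YEAR / SLOT_TIME_S
    slots_per_block = gas_limit / SSTORE_NEW_SLOT_GAS
    return blocks_per_year * slots_per_block * BYTES_PER_SLOT / 1e9

for gl in (36e6, 45e6, 60e6, 100e6):
    print(f"{gl / 1e6:.0f}M gas -> ~{worst_case_growth_gb_per_year(gl):,.0f} GB/year worst case")
```

Real blocks don't spend all their gas on fresh SSTOREs, so observed growth is far lower, but the upper bound scales linearly with the gas limit.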
Approach: We're conducting comprehensive analysis of how larger blocks affect state growth, processing degradation, and sync performance, using dedicated devnets with significantly larger state sizes.
Key Concerns:
- How do deeper trie structures affect storage operations (like SSTORE, SLOAD, contract deployments, and witness generation) at scale? (A rough depth estimate follows this list.)
- How does larger state affect trie root calculation?
- How do the various database styles client teams use perform at these scales?
- What are the implications for snap sync performance?
- How do archive nodes handle increased storage requirements?
- Can we continue to serve RPC requests in a reasonable time with higher gas limits and larger state?
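On the trie-depth question above, a back-of-envelope estimate: the average depth of a roughly balanced hexary trie grows with log16 of the leaf count, so doubling the state adds only about a quarter of a level on average, but every extra level on an uncached path is another database read. The leaf count below is an assumed order of magnitude, not a measurement:

```python
import math

def approx_trie_depth(num_leaves: int) -> float:
    """Average depth of a roughly balanced 16-ary (hexary) trie."""
    return math.log(num_leaves, 16)

MAINNET_LEAVES = 400_000_000  # assumed order of magnitude, not a measurement
for multiple in (1, 2, 4):
    depth = approx_trie_depth(MAINNET_LEAVES * multiple)
    print(f"{multiple}x state: ~{depth:.2f} levels on average")
```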
Testing Environment:
- Bloatnet: Specialized devnet generating 2x mainnet state size with deep leaf inclusions
- Real-world transaction patterns to simulate authentic growth scenarios
- Comprehensive RPC performance testing under various load conditions
3. Security and Consensus Layer Implications
Gas limit increases don't occur in isolation: they have profound implications for the consensus layer and overall network security. The consensus layer currently has a 10MB p2p gossip limit, so we need to ensure that the overall block size, along with the worst-case consensus-layer overheads (e.g. attester slashings), never exceeds this threshold. Larger blocks also imply worse propagation times, so we need to ensure that the p99 of the network continues to receive blocks in a timely manner.
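To give a feel for the headroom involved, the sketch below bounds the worst-case execution payload size from calldata alone, assuming EIP-7623's floor pricing from Pectra (10 gas per token, with a nonzero byte counting as 4 tokens, i.e. 40 gas); random nonzero bytes are the worst case because they don't compress:

```python
FLOOR_GAS_PER_NONZERO_BYTE = 40  # EIP-7623: 10 gas per token, nonzero byte = 4 tokens
GOSSIP_LIMIT_MB = 10.0           # consensus-layer p2p gossip cap

def worst_case_calldata_mb(gas_limit: float) -> float:
    return gas_limit / FLOOR_GAS_PER_NONZERO_BYTE / 1e6

for gl in (36e6, 45e6, 60e6, 100e6):
    mb = worst_case_calldata_mb(gl)
    print(f"{gl / 1e6:.0f}M gas -> ~{mb:.2f} MB of calldata vs a {GOSSIP_LIMIT_MB:.0f} MB gossip limit")
# Worst-case consensus-layer overheads (e.g. attester slashings) must still
# fit in the remaining headroom under that limit.
```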
Approach: We're analysing gas limits from a security perspective and measuring network performance to ensure the network can handle higher gas limits.
Key Considerations:
- Beacon block size implications: We need to ensure that the combined worst-case sizes across all layers don't exceed the consensus layer's p2p gossip thresholds.
- Network propagation: Analysis ensuring blocks reach a supermajority of the network within the 4-second deadline
Current recommended milestone: 45M
45M Gas Limit (Immediate)
Status: Ready for deployment!
Our comprehensive testing shows no blockers for moving to a 45M gas limit. All client implementations demonstrate adequate performance, and network propagation remains well within safety margins.
Key Metrics:
- Worst-case block processing times: <3 seconds across all major clients
- Network propagation: 99.5% of nodes accept blocks within the 4-second deadline
- No additional EIPs required
- All clients have released updated defaults (or will do so soon)
Path forward
The path to scaling doesn't stop at 45M; we intend to use these tools to continue scaling Ethereum safely. We have already identified MODEXP-related optimisations that could unlock further short-term scaling, with elliptic-curve-related optimisations required for scaling beyond that. The Fusaka fork, shipping later this year, should theoretically enable scaling up to a 100M gas limit.
We hope that the groundwork we lay for gas limit increases helps not just the L1 scaling journey, but also the L2 scaling journey. The aim is for the tools, approaches, and analyses we release over the coming months to help the entire Ethereum community.
As we continue this journey, we want to reiterate that our commitment remains unchanged: scale Ethereum responsibly, maintain decentralization, and never compromise security for throughput. The Berlin Interop week demonstrated the Ethereum ecosystem's remarkable ability to collaborate on performance improvements, so we'd like to give a massive shoutout to all the client teams that rallied to make this happen.
Footnote
A slot on Ethereum lasts 12s; however, the attestation deadline falls at the 4s mark. This means a block needs to be produced and propagated within 4s under ideal operation. We talk about gas/second when discussing benchmarks because it captures how large a block the network can accept within the 4s attestation deadline. If a client performs at 20M gas/second, the maximum gas limit the network can handle is 80M per block (20M gas/s × 4s).