Modular Blockchains – Saviours of the Blockchain Trilemma?


Tom Berkman

Recently, within the crypto world, the concept of modular blockchains has garnered considerable attention. Many modular enthusiasts view the technology as a potential solution to the scalability issues plaguing monolithic blockchains such as Ethereum. This wave of enthusiasm has largely been driven by the recent launch of Celestia and Dymension, two blockchains that utilise modular designs while focusing on different use cases.[1] [2] While still relatively new to the scene, the tokens for both projects reached significant market capitalisations within weeks of their launch. However, whether the market's enthusiasm for modular technology will be sustained is a subject of speculation.

The purpose of this article is to explain the motivation behind modular blockchains and assess their functions within the wider crypto space. We begin by examining their counterpart – monolithic blockchains. Specifically, we highlight the challenges that monolithic blockchains face in terms of scalability, security, and decentralisation, a set of trade-offs known as the “blockchain trilemma”. Building on this foundation, the focus shifts to Layer 2 (L2) scaling solutions and how they attempt to solve the trilemma. We argue that while L2s help with scalability, they fail to solve the “data availability problem”, and therefore are not the trilemma saviours we are looking for. The third section introduces the idea of modular blockchains and analyses the architecture of both Celestia and Dymension. Are these projects the trilemma saviours? Perhaps, but read on to find out.

Monolithic Blockchains – The Trilemma

Striking the right balance between scalability, security, and decentralisation is at the forefront of any blockchain developer’s mind. Designing a blockchain that excels in all three aspects is the holy grail; however, doing so is a significant feat, and some even argue that it is technically impossible. This concept, famously labelled the “blockchain trilemma” by Ethereum founder Vitalik Buterin, posits that excelling in one aspect comes at the expense of another, meaning that developers must weigh and prioritise these trade-offs.[3]

Within this context, decentralisation refers to distributing control and responsibilities across many entities, reducing reliance on a central authority. This enhances network resilience and promotes democratic governance but can complicate decision-making and efficiency. Scalability refers to a network’s capacity to handle increasing transaction loads (throughput) without compromising performance – crucial for widespread adoption, though pursuing it may strain decentralisation or security. Security refers to protecting the network from attacks and is foundational for building trust amongst network participants, yet a blockchain overly focused on security can end up with limited scalability or with control centralised in the hands of a few key entities.

To better understand the trade-offs posited by the blockchain trilemma, we begin by examining the architecture and functions of blockchains more generally.

At their core, blockchains are decentralised networks maintained by nodes that work collaboratively to validate the order and legitimacy of blocks of transactions. This process is orchestrated by the network’s underlying protocol and consensus mechanism, which delineate the rules for achieving consensus across nodes on the state of the distributed ledger. Within this design, nodes are responsible for conducting multiple tasks which can be distilled into four functions:[4]

  • Execution Layer: executing transactions and updating the blockchain’s state correctly.
  • Consensus Layer: agreeing on the validity of transactions and their order.
  • Data Availability Layer: maintaining a historical record of published transaction data.
  • Settlement Layer (when applicable): providing an environment for execution layers to submit and verify proofs and resolve disputes, thereby finalising transaction blocks.
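To make the separation concrete, the four functions above can be modelled as independent components. This is a minimal sketch with hypothetical class names and interfaces of our own invention, not any real client's API; a monolithic node implements all of these at once, while a modular design lets each be provided by a specialised network.

```python
from dataclasses import dataclass

@dataclass
class Block:
    transactions: list  # (sender, recipient, amount) tuples
    parent_hash: str

class ExecutionLayer:
    """Executes transactions and updates the chain's state."""
    def __init__(self):
        self.state = {}

    def execute(self, block):
        for sender, recipient, amount in block.transactions:
            self.state[sender] = self.state.get(sender, 0) - amount
            self.state[recipient] = self.state.get(recipient, 0) + amount
        return self.state

class ConsensusLayer:
    """Agrees on the validity and ordering of blocks (trivially, here)."""
    def order(self, blocks):
        return sorted(blocks, key=lambda b: b.parent_hash)

class DataAvailabilityLayer:
    """Keeps a retrievable historical record of published block data."""
    def __init__(self):
        self.history = []

    def publish(self, block):
        self.history.append(block)
        return len(self.history) - 1  # index acts as a retrieval reference
```

The point of the sketch is the interface boundary: nothing in `ExecutionLayer` needs to know how consensus or data availability is provided, which is precisely what modular designs exploit.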

Monolithic blockchains, like Ethereum or Solana, combine these functions into a singular layer, imposing a collective responsibility on nodes to execute transactions, reach consensus, and ensure data availability. This design implies that certain blockchain parameters may optimise one layer but can inadvertently affect the efficiency of others, leading to compromises or trade-offs across these functionalities. For instance, one way to improve network scalability is by increasing the block size – thereby allowing more transactions to be processed per block. While increasing this parameter may improve transaction throughput, it inadvertently affects the decentralisation of a network. Larger block sizes increase the storage, bandwidth, and processing requirements for nodes, imposing higher fixed costs on their operators. This increase in costs can lead to a reduction in the number of individuals and entities capable of running a full node, thereby centralising the network around those with sufficient resources and capital. These inherent trade-offs faced by monolithic networks highlight the critical nature of the blockchain trilemma, where efforts to enhance scalability often entail sacrifices in either network security or decentralisation.
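The block-size trade-off is easy to see with back-of-the-envelope arithmetic. The parameters below (250-byte transactions, 12-second blocks) are illustrative assumptions, not any chain's real numbers: doubling the block size doubles throughput, but it also doubles how fast every full node's storage grows.

```python
# Illustrative, made-up parameters: larger blocks raise throughput and,
# in equal proportion, the storage burden on every full node.

def throughput_tps(block_size_bytes, avg_tx_bytes, block_time_s):
    """Transactions per second the chain can process at a given block size."""
    return (block_size_bytes / avg_tx_bytes) / block_time_s

def yearly_storage_growth_gb(block_size_bytes, block_time_s):
    """Data a full node must store per year of consistently full blocks."""
    blocks_per_year = 365 * 24 * 3600 / block_time_s
    return block_size_bytes * blocks_per_year / 1e9

for size_mb in (1, 2, 8):
    size = size_mb * 1_000_000
    print(f"{size_mb} MB blocks: "
          f"{throughput_tps(size, 250, 12):.0f} tps, "
          f"{yearly_storage_growth_gb(size, 12):.0f} GB/year")
```

Even at a modest 1 MB per 12-second block, a full node accumulates over 2.5 TB per year of full blocks; at 8 MB the figure is eight times larger, which is where the centralisation pressure comes from.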

The Rise of L2s

One way that monolithic blockchains such as Ethereum have attempted to circumvent these trade-offs is by offloading the role of transaction execution to separate networks, commonly referred to as L2s. L2 solutions aim to improve network scalability by decoupling transaction executions from the mainchain (Layer 1 or L1). In this design, L2s periodically post compressed transaction data back to the L1, inheriting its decentralisation and security for consensus and block finality. This strategy of separating tasks across different entities mirrors the principle of the division of labour, articulated by the 18th-century economist Adam Smith, who argued that task specialisation leads to increased productivity and efficiency.

L2s are maintained by nodes responsible for processing transactions and ensuring network integrity, and by sequencers responsible for batching transactions and posting summarised data or proofs back to the L1. However, many L2s face centralisation issues when only one or a handful of sequencers are responsible for batching and posting transactions.[5] This concentration of responsibility not only poses risks of manipulation but also introduces security vulnerabilities, such as the risk of network disruption if a sequencer goes offline. There are two common types of L2 solutions, referred to as optimistic and zero-knowledge (ZK) rollups. The following paragraphs discuss each in more detail.

Optimistic rollups utilise a “trust-but-verify” approach in which sequencers play a crucial role by posting state roots to the L1. A state root represents the new state of the network after executing all transactions on the L2. To ensure transactions are valid, external entities known as verifiers can challenge a state root by submitting a fraud proof if they detect a discrepancy between the state root posted on the L1 and the underlying transactions on the L2. This model presumes transactions are valid unless a fraud proof demonstrates otherwise, necessitating a dispute period for identifying and resolving any fraudulent activity. For the network’s users, the dispute period delays transaction finality, meaning that it takes additional time to finalise transactions and receive their tokens on the L1. In this design, for verifiers to identify state mismatches and ensure the integrity of transactions, the complete set of transaction data must be available. This creates a data availability (DA) problem, as ensuring that all transaction data is fully accessible and verifiable becomes crucial. For optimistic rollups, the integrity of transactions hinges on the availability of this data for verifiers to perform accurate after-the-fact checks against state root changes. If data is withheld or incomplete (such as missing transactions in a particular block), it hampers the verifiers’ ability to detect and rectify fraudulent transactions within the dispute period.
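The verifier's job can be sketched in a few lines. This is a hypothetical simplification, not any rollup's actual implementation: hashing a JSON-encoded balance map stands in for a real Merkle state root, and the `None` branch shows why withheld data breaks the scheme.

```python
import hashlib
import json

def state_root(state):
    # Stand-in for a Merkle state root: hash of the canonical state encoding.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def apply_txs(state, txs):
    state = dict(state)
    for sender, recipient, amount in txs:
        state[sender] = state.get(sender, 0) - amount
        state[recipient] = state.get(recipient, 0) + amount
    return state

def verifier_check(prev_state, published_txs, claimed_root):
    """Re-execute the published transactions and compare roots.
    True means the claimed root is honest; False means a fraud proof
    should be submitted. If the data was withheld, the check is
    impossible -- the data availability problem in miniature."""
    if published_txs is None:
        raise RuntimeError("transaction data withheld: cannot verify in time")
    return state_root(apply_txs(prev_state, published_txs)) == claimed_root
```

Note that the verifier never trusts the sequencer's claimed root; it recomputes everything from the published data, which is exactly why that data must be available.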

ZK rollups attempt to circumvent the DA issue faced by optimistic rollups by relying on a different verification method. Transaction data in ZK rollups is effectively verified before being posted to the L1, because validity proofs attest to the correctness of the transactions using ZK-proof technologies such as SNARKs or STARKs.[6] Unlike optimistic rollups, where transactions are assumed valid until potentially disproved during a dispute period, ZK rollups eliminate the need for such a period. On-chain verification in this context involves checking the validity proof itself rather than re-executing or directly verifying the underlying transaction data on the L1. However, ZK rollups still face a data availability issue: while transaction data is not needed to validate transactions on the L1, it is still needed by nodes to synchronise the current state of the network for its users. This ensures that all nodes hold an up-to-date copy of the ledger, preserving network integrity.
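To contrast the two verification models, the interface of a validity proof can be mocked up as follows. This is emphatically a toy stand-in, not real SNARK/STARK cryptography (it is neither sound nor zero-knowledge); it only illustrates that the L1 checks a succinct proof against posted commitments instead of re-executing the transactions.

```python
import hashlib

# Toy stand-in for a validity proof -- NOT real SNARK/STARK maths.
# All names here are our own illustrative inventions.

def commit(txs):
    """Commitment to the (off-chain) transaction data."""
    return hashlib.sha256(repr(txs).encode()).hexdigest()

def prove(txs, prev_root, new_root, proving_key):
    """The prover sees the full transactions; the 'proof' it emits is succinct."""
    payload = commit(txs) + prev_root + new_root + proving_key
    return hashlib.sha256(payload.encode()).hexdigest()

def l1_verify(proof, tx_commitment, prev_root, new_root, proving_key):
    """The L1 verifies using only commitments and roots -- never the txs."""
    payload = tx_commitment + prev_root + new_root + proving_key
    return proof == hashlib.sha256(payload.encode()).hexdigest()
```

The asymmetry is the point: `prove` takes the full transaction list, while `l1_verify` takes only fixed-size digests. That is why no dispute period is needed, and also why the full data must still be published somewhere for nodes to reconstruct state.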

Through the division of labour and separation of tasks, L2s begin to follow a modular approach, enhancing the scalability of the L1. However, this progress towards scalability is often undermined by the L2’s dependence on a single sequencer or a small group of sequencers. This reliance creates a bottleneck and introduces elements of centralisation which can inadvertently affect network security. Additionally, the issue of data availability persists in both optimistic and ZK L2 solutions. It is clear, at least at this stage, that L2s are not the saviours of the blockchain trilemma that crypto-enthusiasts are hoping for.

Having explained the issue of the blockchain trilemma faced by monolithic blockchains and having touched on the role of L2 scaling solutions, the following half of this article focuses on “purer” modular solutions. As we will see, the architecture of these networks completely separates the execution, consensus, and data availability layers.

Modular Blockchains

Modular blockchains separate the layers or functions typically conducted by all nodes in monolithic blockchains. Instead, a modular blockchain may choose to focus on only one or several of these functions and facilitate an architecture that allows integration with different networks that specialise in the other functions.[7] To highlight differences in the application of modular technology, the following section explores two modular blockchains each with a focus on a different functional layer. The first is Celestia, which is a modular blockchain primarily focusing on providing a scalable data availability layer allowing for efficient storage and retrieval of transaction data for connected blockchains.[8] The second network is Dymension which provides a settlement layer for L2 blockchains to settle transactions and reach consensus. Both Celestia and Dymension provide developers with a flexible architecture allowing them to leverage various environments for execution, consensus, and data availability to best suit their application. Compared to monolithic blockchains where certain network parameters introduce trade-offs between the efficiency of these layers, the modular architecture allows for specialisation and design optimisation for each separated layer.

Celestia
Celestia focuses on decoupling transaction execution from consensus. In this design, Celestia is only responsible for ordering transactions and ensuring data availability. This minimalist approach allows for the deployment of independent rollups (akin to L2 side chains) to leverage Celestia for data availability (and possibly consensus) while maintaining their own autonomy of transaction execution and verification.

Taking a step back, data availability is crucial in blockchain networks because it ensures that the data underpinning the transaction is fully accessible and disseminated. The availability of this data allows any participant to independently verify the network's state and transactions (recall the necessity of this process in optimistic rollups). This feature is foundational for building trust amongst the various entities interacting with the blockchain.

All rollups integrating with Celestia utilise it for DA, however, rollups can choose to additionally rely on Celestia for consensus or delegate this task to another chain. Rollups utilising Celestia for both DA and consensus are commonly referred to as Sovereign Rollups, while rollups that purely use Celestia for DA and shift the role of consensus to Ethereum are known as Celestium Rollups.[9]

Celestia scales by leveraging data availability sampling (DAS) to ensure that transaction data from rollups is accessible and verifiable network-wide. Two key features of Celestia’s DA layer are DAS and Namespaced Merkle Trees (NMTs).[10] In simple terms, DAS enables Celestia light nodes to verify the availability of block data by randomly sampling small parts of a block, rather than downloading it in its entirety. NMTs facilitate this process by structuring data into namespaces (with a separate namespace for each rollup), allowing execution and settlement layers to access only the transactions relevant to them. In more detail:

  • DAS: uses a 2D Reed-Solomon encoding to enhance data integrity and availability. This method splits block data into chunks within a matrix, extends it with parity data, and generates multiple Merkle roots for validation. Light nodes on Celestia can randomly sample data chunks and their Merkle proofs from this matrix to verify block data availability. Successful sampling across nodes ensures a high probability that all data is available, facilitating efficient and secure data recovery and verification without needing to download entire blocks.
  • NMTs: are designed to enhance data organisation and accessibility. They work by categorising data into different namespaces, allowing for efficient and selective data retrieval. Each rollup built on Celestia has its own unique namespace identifier allowing these blockchains to only interact with data relevant to them.
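A quick back-of-the-envelope calculation shows why sampling works. With the 2D Reed-Solomon extension, an adversary must withhold just over 25% of the extended block's chunks to make the block unrecoverable, so each uniformly random sample hits the withheld portion with probability of at least one quarter. The sketch below (our own illustrative code, not Celestia's) computes the resulting detection odds and mimics namespaced retrieval:

```python
def p_attack_undetected(samples, withheld_fraction=0.25):
    """Chance that every one of `samples` independent random chunk
    queries misses the withheld portion of the extended block."""
    return (1 - withheld_fraction) ** samples

for s in (10, 20, 30):
    print(f"{s} samples: attack survives with probability "
          f"{p_attack_undetected(s):.5f}")

# Namespaced retrieval in miniature: a rollup reads only its own blobs.
blobs = [("rollup-a", b"tx1"), ("rollup-b", b"tx2"), ("rollup-a", b"tx3")]

def blobs_for(namespace, blobs):
    return [data for ns, data in blobs if ns == namespace]
```

After just 20 samples a data-withholding attack goes unnoticed with probability below 1%, and the guarantee compounds further across many independent light nodes, which is what lets light clients verify availability without downloading blocks.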

Celestia’s DA layer is powered by its native token, TIA, which rollups use to pay for storing data on Celestia via PayForBlobs transactions.[11] These transactions can be submitted by rollup developers and must include the sender’s identity, the data, its size, the namespace, and a signature. Each transaction is split into two parts: the data ‘blobs’, which are disseminated and made available within the rollup’s namespace, and an executable payment transaction containing a commitment to the data, guaranteeing that the data is available and unchanged. Celestia’s fee market prioritises PayForBlobs transactions based on gas prices and combines a flat fee with variable charges based on the size of the data posted.
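The fee structure described above has the shape of a flat component plus a size-proportional charge. The sketch below captures that shape only; the specific `fixed_gas` and `gas_per_byte` values are our assumptions for illustration, not Celestia's actual gas schedule.

```python
def pay_for_blobs_fee(blob_bytes, gas_price_tia, fixed_gas=65_000, gas_per_byte=8):
    """Illustrative PayForBlobs fee in TIA: a flat gas cost plus a
    per-byte charge, scaled by the prevailing gas price. Parameter
    values are assumptions, not Celestia's real schedule."""
    total_gas = fixed_gas + gas_per_byte * blob_bytes
    return total_gas * gas_price_tia
```

Because the variable term dominates for large blobs, rollups posting heavy data pay roughly in proportion to the bytes they consume, while small posts are dominated by the flat overhead.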

While focusing on providing a DA solution for independent blockchains, Celestia’s modular design allows developers to purely focus on their application and provides them with the flexibility to design the execution layer to best suit their needs.

However, it should be noted that rollups on Celestia inherit (at least partially) the security of the Celestia network itself. Rollups using Celestia for DA but Ethereum for settlement (“Celestiums”) rely on Celestia’s validators to relay data availability attestations from Celestia to Ethereum.[12] These attestations are necessary to verify the availability of transaction data hosted on Celestia. While this design undoubtedly helps with scalability, it also introduces different trust assumptions and a potential point of failure, as rollups rely on the integrity of Celestia’s validator set to accurately relay attestations. Overall, while Celestia provides a promising solution to improve network scalability by addressing the data availability problem, it does not necessarily “solve” the blockchain trilemma, as its design introduces trust assumptions that differ from those of a monolithic blockchain.

Dymension
While Celestia facilitates a data availability layer for connected chains, Dymension focuses on other functionalities – notably that of transaction settlement and consensus. Dymension's framework enables L2 blockchains, known as RollApps, to utilise Dymension for settlement and consensus while using other networks for verification and data availability.[13]

The settlement layer, known as the Dymension Hub is maintained by a group of validators tasked with resolving fraud disputes, finalising transaction outcomes, and reaching a consensus on the state of the network. In this setup, RollApps use sequencers to execute and batch transactions, mirroring the architecture of L2 solutions on Ethereum. However, Dymension's modular approach differs from many Ethereum L2s, which rely on Ethereum for both consensus and data availability. In Dymension, RollApp developers have the flexibility to choose from various external providers for data availability (like Celestia, Avail, or Near) and to select different execution environments (such as EVM or the Cosmos SDK) to best suit their needs.[14]

The Dymension Hub also acts as the central liquidity hub for the network of interconnected RollApps, providing RollApps with efficient asset routing, improved price discovery and shared liquidity.[15] This is enabled by Dymension’s native token DYM which is used to pay for transaction fees and must be staked by sequencers to operate on Dymension. Deployed RollApps can choose which sequencer or group of sequencers to utilise for transaction execution according to which virtual machine they use (e.g. EVM, Cosmos SDK, or even custom virtual machines).[16] Sequencers accept transactions from users interacting with the RollApp and batch them into blocks. After executing the transactions, sequencers post RollApp block data to data availability layers such as Celestia and a state root representing the block along with a reference to the data on the data availability layer to the Dymension Hub.
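The sequencer flow just described can be sketched in a few lines. All class and method names below are hypothetical (not Dymension's or Celestia's APIs), and a plain hash stands in for a real state root: the sequencer batches transactions, posts the block data to a DA layer, and submits the state root together with a reference to that data to the settlement hub.

```python
import hashlib

class DALayer:
    """Stores blobs under a rollup's namespace, keyed by content hash."""
    def __init__(self):
        self.blobs = {}

    def publish(self, namespace, data):
        ref = hashlib.sha256(data).hexdigest()
        self.blobs[(namespace, ref)] = data
        return ref  # reference the settlement layer can later resolve

class SettlementHub:
    """Records (state_root, da_reference) pairs submitted by sequencers."""
    def __init__(self):
        self.updates = []

    def submit(self, state_root, da_ref):
        self.updates.append((state_root, da_ref))

def sequencer_post(txs, namespace, da, hub):
    """Batch txs, publish the data, then post root + DA pointer to the hub."""
    data = "\n".join(txs).encode()
    da_ref = da.publish(namespace, data)
    root = hashlib.sha256(b"state:" + data).hexdigest()  # stand-in state root
    hub.submit(root, da_ref)
    return root, da_ref
```

The design choice worth noticing is that the hub never stores the block data itself, only a commitment and a pointer; anyone disputing a state update must fetch the data from the DA layer.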

RollApps operate under an optimistic fraud-proof design, which inherently trusts the honesty of sequencers while maintaining a mechanism for trust minimisation through a dispute resolution process.[17] When sequencers submit state updates, they enter a dispute period during which other network participants can verify the validity of these updates against the data availability layer. If a sequencer is found to have submitted a fraudulent state update, any network participant can publish a fraud proof, without needing permission, to challenge this state. In such a scenario, part of the sequencer’s staked DYM is slashed as a punishment and sent to the fraud-proof publisher as a reward.[18]
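The economic incentive behind this dispute process can be sketched as follows. The slash fraction here is an assumption for illustration, not Dymension's actual parameter: a proven fraud costs the sequencer part of its staked DYM, which is transferred to whoever published the fraud proof.

```python
def resolve_dispute(staked_dym, fraud_proven, slash_fraction=0.5):
    """Illustrative dispute outcome (slash_fraction is an assumption).
    Returns (sequencer's remaining stake, reward paid to the publisher)."""
    if not fraud_proven:
        return staked_dym, 0.0
    slashed = staked_dym * slash_fraction
    return staked_dym - slashed, slashed
```

Routing the slashed stake to the fraud-proof publisher is what makes verification permissionless yet economically rational: anyone watching the DA layer has a direct payoff for catching a dishonest sequencer.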

By allowing RollApps to utilise their own execution environments, use a separate DA layer, and the Dymension Hub for consensus and dispute resolution, Dymension’s architecture creates a scalable environment whereby each layer can be optimised for the task at hand. However, similar to the issues of Celestia, separating these tasks introduces new potential points of centralisation, which could adversely affect network security.

Are Modular Blockchains the Trilemma Saviours?

While both Celestia and Dymension are promising examples of blockchains leveraging an innovative modular design, their novelty means that they have not yet stood the test of time compared to their monolithic counterparts. In terms of the blockchain trilemma, both blockchains facilitate an architecture that helps with network scalability, a problem central to the current design of monolithic blockchains such as Ethereum. According to Celestia, their unique design which leverages both NMTs and DAS technology can reduce the cost of DA by up to 95%.[19] Dymension’s design also tackles the DA issue, but does so indirectly, by allowing RollApps to integrate with any DA provider, including Celestia.

In terms of network decentralisation, another key aspect of the blockchain trilemma, an argument can be made that by separating the execution, consensus, and DA roles, modular blockchains should be more decentralised than monolithic blockchains by design. Nonetheless, this dispersion of responsibilities does not inherently guarantee enhanced decentralisation of the network at large. In fact, entrusting different layers with these functions might inadvertently introduce new focal points of centralisation. To make a relevant analogy, just as a chain’s strength is limited by its weakest link, a blockchain’s degree of decentralisation is effectively limited by its most centralised element.

The concept of decentralisation ties directly to security. Concentrating responsibilities, such as relying on a single sequencer for transaction ordering or having a small group of nodes validate transactions, may introduce security vulnerabilities. These concentrations not only create potential single points of failure but also focal areas where malicious actors might collude to attack the network.

Overall, although modular blockchains, as exemplified by Celestia and Dymension, address scalability – one of the core challenges of the blockchain trilemma – they do so at the cost of introducing potential new points of centralisation and different trust assumptions. However, as the technology continues to improve and new iterations of modular blockchains come to market, their current capabilities and features can be expected to be further enhanced.
