The Bitcoin Layer2 network Merlin has already accumulated billions of dollars in TVL, making it the largest and most popular Bitcoin Layer2. This article focuses on Merlin Chain's technical solution, interpreting its publicly available documents and protocol design ideas to help readers understand how this "top Bitcoin Layer2" operates.
Table of Contents:
Merlin’s Decentralized Oracle Network: Open Chain DAC Committee
Security Model Analysis: Optimistic ZKRollup+Cobo MPC Service
Two-step Verification ZKP Submission Scheme based on Lumoz
Merlin’s Phantom: Cross-chain Interoperability
Conclusion
Since the summer of 2023, Bitcoin Layer2 has been a highlight of the entire Web3 space. Although this field emerged much later than Ethereum Layer2, Bitcoin, with the unique appeal of POW and the successful launch of spot ETFs, has drawn billions of dollars of capital into Layer2 in just six months, without the risk of being classified as a security.
Among Bitcoin Layer2 projects, Merlin, with billions of dollars in TVL, is undoubtedly the largest and most popular. With clear staking incentives and substantial returns, Merlin rose rapidly, building an ecosystem that surpassed even Blast within a few months. As Merlin's popularity has grown, discussion of its technical solution has become a topic of increasing interest.
In this article, Geek Web3 will focus on Merlin Chain's technical solution, interpreting its publicly available documents and protocol design ideas. We aim to help more people understand Merlin's general workflow, gain a clearer picture of its security model, and grasp more intuitively how this "top Bitcoin Layer2" operates.
Merlin’s Decentralized Oracle Network: Open Chain DAC Committee
For all Layer2 projects, whether on Ethereum or Bitcoin, the cost of data availability (DA) and data publication is one of the most important problems to solve. Because of the inherent limitations of the Bitcoin network and its lack of support for high data throughput, making good use of the limited DA space is a challenging problem for Layer2 projects.
One conclusion is obvious: if Layer2 "directly" releases unprocessed transaction data onto the Bitcoin blockchain, it cannot achieve high throughput or low transaction fees. The most mainstream solutions are therefore either to compress the data as much as possible before uploading it to the Bitcoin blockchain, or to publish the data off the Bitcoin chain entirely.
Among the Layer2 projects taking the first approach, Citrea is probably the best known. It plans to upload Layer2 state changes (the state diff) over a given period, along with the corresponding ZK proof, to the Bitcoin chain. Anyone can then download the state diff and ZKP from the Bitcoin mainnet and track Citrea's state changes. This method can shrink the uploaded data by more than 90%.
Although this greatly compresses the data size, the bottleneck is still obvious: if a large number of accounts change state within a short period, Layer2 must upload all of these changes to the Bitcoin chain, so the final data publication cost cannot be reduced much further. Many Ethereum ZK Rollups exhibit the same problem.
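As a rough, back-of-the-envelope illustration (the per-entry byte counts below are assumptions for the sketch, not Citrea's real encoding), the cost of publishing a state diff scales with the number of touched accounts rather than the number of transactions:

```python
# Illustrative sizing only; ACCOUNT_DIFF_BYTES and RAW_TX_BYTES are assumptions.
ACCOUNT_DIFF_BYTES = 20 + 32      # assumed: address + new state commitment per account
RAW_TX_BYTES = 150                # assumed: average raw transaction size

def batch_sizes(num_txs: int, touched_accounts: int) -> tuple:
    """Return (raw_tx_bytes, state_diff_bytes) for one batch."""
    return num_txs * RAW_TX_BYTES, touched_accounts * ACCOUNT_DIFF_BYTES

# Many transactions hitting few accounts: the state diff compresses very well.
print(batch_sizes(num_txs=10_000, touched_accounts=500))      # (1500000, 26000)
# Each transaction touching fresh accounts: the diff grows with the accounts,
# so the on-chain publication cost cannot drop much further.
print(batch_sizes(num_txs=10_000, touched_accounts=20_000))   # (1500000, 1040000)
```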
Many Bitcoin Layer2 projects simply take the second approach: publish DA data off the Bitcoin chain, either by building their own DA layer or by using solutions such as Celestia or EigenDA. B^Square, BitLayer, and the protagonist of this article, Merlin, all follow this off-chain DA scaling approach.
B^2 directly imitates Celestia and builds an off-chain DA network called B^2 Hub that supports data sampling. The "DA data," such as transaction data or the state diff, is stored off the Bitcoin chain, and only the datahash / merkle root is uploaded to the Bitcoin mainnet.
This effectively treats Bitcoin as a trusted bulletin board: anyone can read the datahash from the Bitcoin chain. After obtaining the DA data from an off-chain data provider, you can check whether it corresponds to the datahash on the chain, i.e., whether hash(data1) == datahash1. If they match, the data provided by the off-chain data provider is correct.
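As a minimal sketch of this bulletin-board check (assuming SHA-256 and a hex-encoded datahash, which may not match B^2's actual format):

```python
import hashlib

def matches_onchain_hash(da_data: bytes, onchain_datahash_hex: str) -> bool:
    """Check that data fetched from an off-chain provider hashes to the datahash
    that was published on the Bitcoin chain."""
    return hashlib.sha256(da_data).hexdigest() == onchain_datahash_hex

# Usage: fetch data1 from the DA provider, read datahash1 from Bitcoin, then
# accept the data only if matches_onchain_hash(data1, datahash1) returns True.
data1 = b"...batch bytes served by the off-chain DA provider..."
datahash1 = hashlib.sha256(data1).hexdigest()   # what was published on Bitcoin
print(matches_onchain_hash(data1, datahash1))   # True only if the data was not tampered with
```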
The above process ensures that the data provided by off-chain nodes is tied to "clues" on Layer1, which prevents the DA layer from serving false data. But this leaves an important malicious scenario: what if the source of the data, the Sequencer, publishes only the datahash to the Bitcoin chain while deliberately withholding the corresponding data so that no one can read it?
Similar scenarios include, but are not limited to, publishing only the ZK-Proof and StateRoot without the corresponding DA data (the state diff or transaction data). Observers can verify the ZKProof and confirm that the computation from Prev_Stateroot to New_Stateroot is valid, but they cannot tell which accounts' states have changed.
In this situation, although users' assets are safe, no one can determine the actual state of the network, such as which transactions were included in a block or which contract states were updated. At that point, Layer2 is effectively shut down.
This is actually “data withholding.” Dankrad from the Ethereum Foundation discussed similar issues on Twitter in August 2023, mainly focusing on something called “DAC.”
Many Ethereum Layer2 projects that adopt off-chain DA solutions set up a committee called the Data Availability Committee (DAC), which consists of a few nodes with special permissions. This DAC committee acts as a guarantor and attests externally that the Sequencer has indeed published the complete DA data (transaction data or state diff) off-chain. The DAC nodes then collectively generate a multi-signature, and if the multi-signature meets the threshold requirement (e.g., 2/4), the relevant contracts on Layer1 will assume by default that the Sequencer has passed the DAC committee's check and has truthfully released the complete DA data.
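A hedged sketch of what the Layer1-side threshold check boils down to follows; the committee membership, the 2-of-4 threshold, and the signature handling are all placeholders (a real contract would verify cryptographic signatures over the datahash against the members' public keys):

```python
# Hypothetical 4-member committee with a 2-of-4 acceptance rule.
DAC_MEMBERS = {"member_a", "member_b", "member_c", "member_d"}
THRESHOLD = 2

def dac_attestation_accepted(datahash: str, attestations: dict) -> bool:
    """attestations maps a member id to its (assumed already verified) signature
    over `datahash`; the batch is accepted once enough distinct members signed."""
    signers = {member for member in attestations if member in DAC_MEMBERS}
    return len(signers) >= THRESHOLD

print(dac_attestation_accepted("datahash1", {"member_a": "sig_a", "member_c": "sig_c"}))  # True
print(dac_attestation_accepted("datahash1", {"member_b": "sig_b"}))                       # False
```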
The DAC committees of Ethereum Layer2 projects generally follow the POA model and only allow a few nodes that have undergone KYC or are officially designated to join. This has made DAC synonymous with "centralization" and "consortium chain." Moreover, in some Ethereum Layer2 projects that adopt the DAC model, the sequencer sends DA data only to DAC member nodes and rarely publishes it anywhere else. Anyone who wants the DA data must get permission from the DAC committee, which is fundamentally no different from a consortium chain.
Without a doubt, the DAC should be decentralized. A Layer2 may choose not to upload DA data directly to Layer1, but admission to the DAC committee should be open to the public, so that a small group cannot collude to act maliciously. (For discussions of DAC failure scenarios, refer to Dankrad's earlier remarks on Twitter.)
The BlobStream proposed by Celestia is essentially a replacement for centralized DAC. The sequencer of Ethereum Layer2 projects can release DA data to the Celestia chain. If 2/3 of the Celestia nodes sign the data, the Layer2 contracts deployed on Ethereum will assume that the sequencer has truthfully published DA data. This actually makes the Celestia nodes act as guarantors. Considering that Celestia has hundreds of validator nodes, we can consider this large-scale DAC to be relatively decentralized.
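The acceptance rule can be sketched as follows. Note that the article above speaks of 2/3 of nodes, while Tendermint-style chains typically weight votes by stake, so the stake weighting here is an assumption, and the validator names and stakes are made up:

```python
VALIDATORS = {"val1": 120, "val2": 80, "val3": 60, "val4": 40}  # hypothetical stake per validator

def da_attestation_accepted(signers: set) -> bool:
    """Accept the DA attestation once the signers control at least 2/3 of total stake."""
    total_stake = sum(VALIDATORS.values())
    signed_stake = sum(stake for v, stake in VALIDATORS.items() if v in signers)
    return 3 * signed_stake >= 2 * total_stake

print(da_attestation_accepted({"val1", "val2"}))   # 200/300 of stake -> True
print(da_attestation_accepted({"val3", "val4"}))   # 100/300 of stake -> False
```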
Merlin's DA solution is actually similar to Celestia's BlobStream: both use POS to make admission to the DAC more decentralized. Anyone who stakes enough assets can become a DAC node. In Merlin's documentation, these DAC nodes are called Oracles, and it notes that BTC, MERL, and even BRC-20 tokens can be staked, enabling a flexible staking mechanism and supporting Lido-style proxy staking. (The oracle's POS staking protocol will be one of Merlin's core narratives going forward, and the staking interest rates offered are relatively high.)
Here we briefly describe Merlin's workflow (see the image below; a simplified code sketch also follows the list of steps):
After receiving a large number of transaction requests, the sequencer aggregates them and generates a data batch, which is then sent to the Prover nodes and Oracle nodes (decentralized DAC).
Merlin’s Prover nodes are decentralized and use Lumoz’s Prover as a Service. After receiving multiple data batches, the Prover pool generates corresponding zero-knowledge proofs. These ZK proofs are then sent to the Oracle nodes for verification.
The Oracle nodes verify the ZK proofs from Lumoz's ZK pool and check whether they correspond to the data batches sent by the Sequencer. If they match and contain no other errors, verification passes. In this process, the decentralized Oracle nodes use threshold signatures to generate a multi-signature declaring that the Sequencer has fully released the DA data and that the corresponding ZKP is valid and has passed the Oracle nodes' verification.
The Sequencer collects the multi-signature results from the Oracle nodes. When the number of signatures meets the threshold requirement, the Sequencer sends this signature information to the Bitcoin chain, along with the datahash of the data batch, for external reading and confirmation.
The Oracle nodes also generate a Commitment to the computation involved in verifying the ZK proof and send it to the Bitcoin chain, where anyone can challenge the Commitment and check its validity.
The process here is basically the same as BitVM's fraud-proof protocol. If a challenge succeeds, the Oracle node that published the Commitment is economically penalized. Of course, the Oracle also needs to publish to the Bitcoin chain the hash of the current Layer2 state (the StateRoot) as well as the ZKP itself, so that they can be verified externally.
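To tie the steps above together, here is a hypothetical, heavily simplified simulation of the flow; the function names and data structures are placeholders and do not correspond to Merlin's real interfaces:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Batch:
    txs: list

    @property
    def datahash(self) -> str:
        return hashlib.sha256("".join(self.txs).encode()).hexdigest()

def prove(batch: Batch) -> str:
    # Stand-in for the Lumoz Prover pool: a real system returns a ZK proof.
    return "zkp_for_" + batch.datahash[:8]

def oracle_verify_and_sign(batch: Batch, zkp: str, oracles: list, threshold: int):
    # Each Oracle re-checks that the proof matches the batch it received from the
    # Sequencer, then contributes to a threshold signature over the datahash.
    if zkp != prove(batch):
        return None                                   # proof/batch mismatch: reject
    signatures = {o: f"sig({o},{batch.datahash[:8]})" for o in oracles}
    return signatures if len(signatures) >= threshold else None

# 1. The Sequencer builds a batch and sends it to the Prover pool and the Oracles.
batch = Batch(txs=["tx1", "tx2", "tx3"])
# 2. The Prover pool generates the proof.
zkp = prove(batch)
# 3. The Oracles verify the proof and threshold-sign the datahash.
sigs = oracle_verify_and_sign(batch, zkp, oracles=["o1", "o2", "o3"], threshold=2)
# 4. The Sequencer publishes the datahash plus signatures to Bitcoin (here: print).
if sigs:
    print("publish to Bitcoin:", batch.datahash, list(sigs))
```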
A few details need clarification. First, according to Merlin's roadmap, the Oracles will back up DA data to Celestia in the future, so Oracle nodes can prune local historical data rather than storing it forever. At the same time, the Commitment generated by the Oracle Network is actually the root of a Merkle Tree; disclosing only the root is not enough, and the complete dataset behind the Commitment must also be made public. This requires a third-party DA platform, which could be Celestia, EigenDA, or another DA layer.
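Since the Commitment is described as a Merkle root, a plain binary Merkle tree makes the point concrete; the actual commitment format used by the Oracle Network may well differ:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Plain binary Merkle tree (odd levels duplicate the last node)."""
    level = [sha256(leaf) for leaf in leaves] or [sha256(b"")]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Only this root lands on Bitcoin; the leaves themselves must stay retrievable
# from a third-party DA layer (Celestia, EigenDA, ...) for anyone to re-check it.
print(merkle_root([b"record1", b"record2", b"record3"]).hex())
```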
Security Model Analysis: Optimistic ZK-Rollup + Cobo's MPC Service
Above, we briefly described the workflow of Merlin, and we believe that everyone now has a basic understanding of its structure. It is not difficult to see that Merlin, B^Square, BitLayer, and Citrea all follow the same security model—Optimistic ZK-Rollup.
At first glance, this term may seem strange to many Ethereum enthusiasts. What is an "Optimistic ZK-Rollup"? In the Ethereum community's understanding, ZK Rollup's "theoretical model" rests entirely on the reliability of cryptographic computation and requires no trust assumptions. The word "optimistic," however, introduces exactly such a trust assumption: most of the time, people must optimistically assume that the Rollup contains no errors and is reliable, and only when an error occurs can the Rollup operator be penalized through fraud proofs. This is Optimistic Rollup, also known as OP Rollup.
Optimistic ZK-Rollup may seem out of place in the Ethereum ecosystem, but it fits the current situation of Bitcoin Layer2. Due to technical limitations, the Bitcoin chain cannot fully verify a ZK proof; it can only verify a single step of the ZKP verification computation under special circumstances. Under this premise, the Bitcoin chain can only support fraud-proof protocols: someone can point out an error in one step of the off-chain ZKP verification and challenge that step through a fraud proof. Of course, this cannot match Ethereum-style ZK Rollup, but it is already the most reliable and secure model that Bitcoin Layer2 can currently achieve.
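A hedged, toy illustration of this idea: the verification computation is replayed step by step off-chain, and a challenger who finds a single incorrect step disputes exactly that step on-chain. The "step function" below is purely illustrative:

```python
def verify_step(step_input: int, claimed_output: int) -> bool:
    # Toy stand-in for one step of the ZKP verification computation.
    return claimed_output == step_input * 2

def find_disputable_step(trace: list):
    """Return the index of the first incorrect step, or None if every step checks out."""
    for i, (step_input, claimed_output) in enumerate(trace):
        if not verify_step(step_input, claimed_output):
            return i   # this single step is what gets challenged via the fraud proof
    return None

# Step 2 is wrong (3 * 2 != 7), so an honest challenger disputes index 2:
print(find_disputable_step([(1, 2), (2, 4), (3, 7)]))
```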
In the optimistic ZK-Rollup scheme described above, suppose the Layer2 network has N permissioned challengers. As long as one of these N challengers is honest and reliable, errors can be detected and a fraud proof submitted, keeping Layer2's state transitions secure. Of course, a more complete optimistic Rollup must also ensure that its withdrawal bridge is protected by the fraud-proof protocol. Currently, almost no Bitcoin Layer2 can achieve this, and they rely on multi-signature/MPC instead. The choice of multi-signature/MPC scheme therefore becomes a question closely tied to Layer2 security.
Merlin has chosen Cobo's MPC service for its bridge, with measures such as cold/hot wallet isolation. Bridged assets are jointly managed by Cobo and Merlin Chain, and any withdrawal requires MPC participants from both Cobo and Merlin Chain to process it together. Essentially, the reliability of the withdrawal bridge is guaranteed by institutional credit endorsement. Of course, this is only a stopgap: as the project matures, the withdrawal bridge can be replaced by an "optimistic bridge" relying on a 1/N trust assumption by introducing BitVM and the fraud-proof protocol, although this is harder to implement (currently almost all official Layer2 bridges rely on multi-signature).
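The co-management rule can be sketched very simply, abstracting away the actual MPC cryptography; the party names below are illustrative:

```python
REQUIRED_PARTIES = {"cobo", "merlin"}   # both custodians must contribute to the MPC signature

def withdrawal_authorized(signing_parties: set) -> bool:
    """True only if every required party participated in signing the withdrawal."""
    return REQUIRED_PARTIES.issubset(signing_parties)

print(withdrawal_authorized({"merlin"}))           # False: one party alone cannot move funds
print(withdrawal_authorized({"cobo", "merlin"}))   # True
```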
To summarize, Merlin has introduced a POS-based DAC, a BitVM-based optimistic ZK-Rollup, and an MPC asset custody solution from Cobo. It addresses the DA problem by opening DAC admission, secures state transitions with BitVM and the fraud-proof protocol, and guarantees the reliability of the withdrawal bridge through Cobo's well-known asset custody platform.
Based on Lumoz’s two-step verification ZKP submission scheme
Earlier, we outlined Merlin's security model and introduced the concept of the optimistic ZK-Rollup. Merlin's technical roadmap also mentions decentralized Provers. The Prover is a core role in the ZK-Rollup architecture, responsible for generating the ZKProof for each Batch released by the Sequencer, but generating zero-knowledge proofs is very resource-intensive, which is a challenging problem.
The basic approach to accelerating ZK proof generation is to split and parallelize the work: the proof generation task is divided into parts that different Provers complete separately, and an Aggregator then combines the resulting proofs into one.
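A minimal illustration of this split-prove-aggregate pattern follows; each "proof" below is just a hash standing in for a real ZK proof, and the aggregation step is a hash of hashes rather than a recursive proof, so only the structure is meaningful:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def prove_chunk(chunk: list) -> str:
    # Stand-in for one Prover generating a proof for its share of the batch.
    return hashlib.sha256("".join(chunk).encode()).hexdigest()

def aggregate(proofs: list) -> str:
    # Stand-in for the Aggregator folding many sub-proofs into a single proof.
    return hashlib.sha256("".join(proofs).encode()).hexdigest()

batch = [f"tx{i}" for i in range(100)]
chunks = [batch[i:i + 25] for i in range(0, len(batch), 25)]   # 4 sub-tasks

with ThreadPoolExecutor() as pool:                              # Provers work in parallel
    proofs = list(pool.map(prove_chunk, chunks))

print(aggregate(proofs))
```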
To speed up proof generation, Merlin will adopt Lumoz's Prover-as-a-service solution, which pools a large number of hardware devices, assigns computing tasks to different devices, and distributes the corresponding incentives, similar to POW mining.
This decentralized Prover solution faces a type of attack known as a front-running attack: suppose an Aggregator has assembled a ZKP and broadcasts it to claim the reward; other Aggregators who see the ZKP's content can front-run it by republishing the same content and claiming they generated it first. How can this be solved?
The most intuitive solution is to assign specific task numbers to each Aggregator, e.g., only Aggregator A may take task 1, and even if others complete task 1 they receive no reward. But this approach cannot withstand single-point risk: if Aggregator A suffers a failure or goes offline, task 1 gets stuck and cannot be completed. Moreover, assigning each task to a single entity forgoes the efficiency gains of a competitive incentive mechanism, so it is not a good solution.
Polygon zkEVM once proposed a method called Proof of Efficiency, which argued that different Aggregators should be made to compete, with rewards allocated on a first-come, first-served basis: the Aggregator that submits a ZK-Proof first receives the reward. However, it did not address how to solve the MEV front-running problem.
Lumoz adopts a two-step verification ZKP submission scheme. After an Aggregator generates a ZK proof, it does not send out the full content at first. Instead, it publishes only the hash of the ZKP; more precisely, it publishes hash(ZKP + Aggregator Address). Even if others see this hash, they do not know the underlying ZKP content and cannot front-run it directly.
Simply copying the hash and publishing it first is also pointless, because the hash commits to the address of the specific Aggregator X that produced the proof. Even if Aggregator A publishes this hash first, when the hash's preimage is revealed, everyone will see that it contains Aggregator X's address, not A's.
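A minimal commit-reveal sketch of this scheme follows; the exact serialization of the commitment is an assumption, the point being that the published hash binds the proof to its producer's address:

```python
import hashlib

def commit(zkp: bytes, aggregator_address: str) -> str:
    """Step 1: publish only hash(ZKP || aggregator address)."""
    return hashlib.sha256(zkp + aggregator_address.encode()).hexdigest()

def reveal_is_valid(commitment: str, zkp: bytes, aggregator_address: str) -> bool:
    """Step 2: the revealed proof and address must reproduce the earlier commitment."""
    return commit(zkp, aggregator_address) == commitment

zkp = b"...serialized proof bytes..."
c = commit(zkp, "0xAggregatorX")
print(reveal_is_valid(c, zkp, "0xAggregatorX"))   # True: X can claim the reward
print(reveal_is_valid(c, zkp, "0xAggregatorA"))   # False: a copied hash still points to X
```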
By using this two-step verification ZKP submission scheme, Merlin (Lumoz) can solve the frontrunning problem in the ZKP submission process and achieve highly competitive incentives for zero-knowledge proof generation, thereby improving the speed of ZKP generation.
Merlin’s Phantom: Multi-chain interoperability
According to Merlin’s technical roadmap, they will also support interoperability between Merlin and other EVM chains. The implementation path is basically the same as the Zetachain’s approach. If Merlin is the source chain and other EVM chains are the target chains, when the Merlin node detects a cross-chain interoperability request from a user, it triggers the subsequent workflow on the target chain.
For example, an EOA account controlled by Merlin can be set up on Polygon. When a user issues a cross-chain interoperability instruction on Merlin Chain, the Merlin Network first parses its content and generates the transaction data to be executed on the target chain. The Oracle Network then MPC-signs the transaction, producing its digital signature, and Merlin's Relayer node broadcasts the transaction on Polygon, carrying out the operation from the EOA account on the target chain.
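A hypothetical sketch of this flow (all names are placeholders, not Merlin's actual interfaces):

```python
from dataclasses import dataclass

@dataclass
class CrossChainRequest:
    target_chain: str      # e.g. "polygon"
    target_contract: str   # contract the Merlin-controlled EOA should call
    calldata: str          # encoded action to perform on the target chain

def build_target_tx(request: CrossChainRequest, eoa_address: str) -> dict:
    # Merlin Network parses the user's instruction and builds the target-chain tx.
    return {"chain": request.target_chain, "from": eoa_address,
            "to": request.target_contract, "data": request.calldata}

def mpc_sign(tx: dict) -> str:
    # Stand-in for the Oracle Network MPC-signing the transaction.
    return f"mpc_signature_over({tx['chain']},{tx['to']})"

def relay(tx: dict, signature: str) -> None:
    # Stand-in for the Relayer broadcasting the signed tx on the target chain.
    print("broadcast on", tx["chain"], "from", tx["from"], "with", signature)

request = CrossChainRequest("polygon", "0xSomeTargetContract", "swap(...)")
tx = build_target_tx(request, eoa_address="0xMerlinControlledEOA")
relay(tx, mpc_sign(tx))
```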
Once the requested operation completes, the resulting assets are forwarded directly to the user's address on the target chain, or can in theory be bridged straight back to Merlin Chain. This solution has some obvious advantages: it avoids the transaction fees of traditional asset bridging and secures cross-chain operations with Merlin's own Oracle Network rather than external infrastructure. As long as users trust Merlin Chain, they can assume by default that such cross-chain interoperability is problem-free.
In summary, this article offered a brief interpretation of Merlin Chain's technical solution, which we hope helps more people understand Merlin's general workflow and gain a clearer picture of its security model. Given the thriving Bitcoin ecosystem, we believe this kind of technical explainer is valuable and needed. We will continue to follow projects such as Merlin, BitLayer, and B^Square and will publish more in-depth analyses of their technical solutions in the future. Stay tuned!