The blockchain trilemma reared its head again at Consensus in Hong Kong in February, putting Cardano founder Charles Hoskinson on the defensive as he reassured participants that hyperscalers like Google Cloud and Microsoft Azure do not pose risks to decentralization.
He pointed out that major blockchain projects need hyperscalers, and argued that single points of failure are not a concern because:
- Advanced cryptography neutralizes the risk
- Key material is distributed through multiparty computation
- Confidential computing protects data in use
The argument rested on the idea that “if the cloud can’t see the data, the cloud can’t control the system,” and it went largely unchallenged due to time constraints.
But there is a counterargument to Hoskinson’s case for hyperscalers that is worth taking seriously.
Reduce exposure with MPC and confidential computing
The strategic bulwark of Hoskinson’s position was his insistence that technologies such as multiparty computation (MPC) and confidential computing prevent hardware providers from accessing the underlying data.
These are powerful tools, but they do not eliminate the risks.
MPC distributes key material among multiple parties so that no single participant can reconstruct the secret. This greatly reduces the risk of any single node being compromised. But the security surface shifts elsewhere: the coordination layer, the communication channels, and the governance of the participating nodes all become critical.
Rather than trusting a single keyholder, the system now relies on a distributed set of well-behaved actors and correctly implemented protocols. The single point of failure does not go away; it simply becomes a distributed trust surface.
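For intuition, here is a minimal sketch of additive secret sharing, one of the simplest constructions behind MPC-style key management. The modulus, party count, and function names are illustrative assumptions, not any particular project’s implementation; real deployments use audited MPC protocols and authenticated channels between parties.

```python
# Minimal sketch (illustrative only): additive secret sharing over a prime field.
import secrets

PRIME = 2**255 - 19  # example modulus, chosen only for illustration

def split(secret: int, parties: int) -> list[int]:
    """Split `secret` into additive shares; fewer than all shares reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Only the complete set of shares recovers the secret."""
    return sum(shares) % PRIME

key = secrets.randbelow(PRIME)          # the key material to protect
shares = split(key, parties=5)          # one share per operator
assert reconstruct(shares) == key       # all five together recover the key
assert reconstruct(shares[:4]) != key   # a strict subset fails (except with negligible probability)
```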
Confidential computing, especially in trusted execution environments, presents another trade-off. Data is encrypted while in use, which limits exposure to the hosting provider.
However, trusted execution environments (TEEs) rest on hardware assumptions: microarchitectural isolation, firmware integrity, and correct implementation. The academic literature has repeatedly shown that side channels and architectural vulnerabilities keep emerging across enclave technologies. The security perimeter is narrower than in a traditional cloud, but it is not absolute.
More importantly, both MPC and TEEs often run on hyperscaler infrastructure. Physical hardware, virtualization layers, and supply chains remain centralized. Infrastructure providers retain operational leverage when they control access to machines, bandwidth, or geographic regions. Encryption may prevent data inspection, but it does not prevent throughput limits, shutdowns, or policy intervention.
Advanced cryptographic tools make certain attacks harder, but infrastructure-level failure modes remain. A visible dependency is simply replaced with a more complex one.
The argument that “there is no L1 that can handle global computing”
Noting that trillions of dollars have been spent building such data centers, Hoskinson argued that hyperscalers are needed because a single layer 1 cannot handle the computational demands of a global system.
Of course, Layer 1 networks weren’t built to run AI training loops, high-frequency trading engines, or enterprise analytics pipelines. They exist to maintain consensus, validate state transitions, and provide persistent data availability.
He’s right about the purpose of layer 1. But a global system primarily requires results that can be verified by anyone, even if the calculations are done elsewhere.
In modern crypto infrastructure, heavy computation increasingly happens off-chain; what matters is that the results can be proven and verified on-chain. This is the basis for rollups, zero-knowledge systems, and verifiable computing networks.
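The pattern is easiest to see in a toy example. The sketch below uses integer factoring as a stand-in for an expensive off-chain computation whose result is cheap to verify; real networks rely on succinct proofs such as SNARKs or STARKs rather than factoring, so treat the function names and numbers as illustrative assumptions.

```python
# Toy illustration of the compute/verify asymmetry behind off-chain execution:
# the prover does the expensive work anywhere, the verifier's check stays cheap.

def prove_factorization(n: int) -> tuple[int, int]:
    """Expensive off-chain step: trial division to find a nontrivial factor of n."""
    for p in range(2, int(n**0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("n has no nontrivial factor")

def verify_factorization(n: int, proof: tuple[int, int]) -> bool:
    """Cheap verification step: one multiplication and two range checks."""
    p, q = proof
    return 1 < p < n and 1 < q < n and p * q == n

n = 1_000_003 * 999_983                # public statement: "n is composite"
proof = prove_factorization(n)         # heavy work, performed off-chain
assert verify_factorization(n, proof)  # anyone can verify cheaply, anywhere
```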
Focusing on whether L1 can perform global computing misses the core question of who controls the execution and storage infrastructure behind the validation.
If computation moves off-chain but relies on centralized infrastructure, the system inherits a centralized failure mode. Settlement may still be decentralized in theory, but in practice the paths that produce valid state transitions are centralized.
The issue should be about dependencies at the infrastructure layer, not compute power within layer 1.
Cryptographic neutrality is not the same as participation neutrality
Cryptographic neutrality is a powerful idea, and one Hoskinson leaned on in the discussion: the rules cannot be changed arbitrarily, hidden backdoors cannot be introduced, and the protocol remains fair.
But cryptography runs on hardware.
Throughput and latency are ultimately bounded by real machines and the infrastructure they run on, so the physical layer determines who can participate, who can afford to participate, and who is ultimately excluded. If the manufacturing, distribution, and hosting of that hardware remain centralized, participation becomes economically gated, even if the protocol itself is mathematically neutral.
In compute-heavy systems, hardware is decisive. It determines the cost structure, who can scale, and how resilient the system is to censorship pressure. A neutral protocol running on centralized infrastructure is neutral in theory but constrained in practice.
The priority should shift to pairing that cryptographic neutrality with diversified hardware ownership.
Without infrastructure diversity, neutrality becomes vulnerable under stress. If a few providers can rate limit workloads, restrict regions, or impose compliance gates, the system inherits that influence. Fairness of rules alone does not guarantee fairness of participation.
Specialization beats generalization in the computing market
Competing with AWS is often framed as a matter of scale, but that framing is also misleading.
Hyperscalers optimize for flexibility. Their infrastructure is designed to handle thousands of workloads simultaneously. Virtualization layers, orchestration systems, enterprise compliance tooling, and resiliency guarantees are strengths of general-purpose computing, but they are also layers of cost.
Zero-knowledge proving and verifiable computing are deterministic, compute-dense, memory-bandwidth-bound, and pipeline-dependent workloads. In other words, they reward specialization.
Dedicated proof networks compete on proofs per dollar, proofs per watt, and proof latency. Efficiency improves further when hardware, prover software, circuit design, and aggregation logic are vertically integrated. Removing unnecessary abstraction layers reduces overhead, and for a narrow, constant workload, sustained throughput on persistent clusters beats elastic scaling.
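To make the first of those metrics concrete, here is a back-of-envelope sketch; every figure is a hypothetical placeholder, chosen only to show how proofs per dollar would be compared, not a real benchmark.

```python
# Hypothetical placeholder numbers only: illustrates the unit economics being
# compared between a general-purpose cloud instance and a dedicated prover.
def proofs_per_dollar(proofs_per_hour: float, usd_per_hour: float) -> float:
    return proofs_per_hour / usd_per_hour

# Same proving throughput assumed; only the cost structure differs.
cloud = proofs_per_dollar(proofs_per_hour=400, usd_per_hour=3.00)
rig = proofs_per_dollar(proofs_per_hour=400, usd_per_hour=1.20)
print(f"general-purpose cloud: {cloud:.0f} proofs per dollar")
print(f"dedicated prover:      {rig:.0f} proofs per dollar")
```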
In the computing market, specialization beats generalization for stable, high-volume tasks. AWS optimizes for optionality; a dedicated proof network optimizes for one class of work.
The economic structure is also different. Hyperscaler pricing has to absorb corporate margins and wide swings in demand. Networks aligned around protocol incentives can amortize hardware differently and tune performance for sustained utilization rather than a short-term rental model.
The competition is over structural efficiency for defined workloads.
Use hyperscalers, but don’t rely on them
Hyperscalers are not the enemy. They are efficient, reliable, globally distributed infrastructure providers. The problem is dependency.
Resilient architecture uses large vendors for burst capacity, geographic redundancy, and edge distribution, but does not lock core functionality to a single provider or small cluster of providers.
Settlement, final verification, and availability of critical artifacts must remain intact even if a cloud region fails, a vendor exits the market, or policy constraints tighten.
This is where distributed storage and compute infrastructure becomes a viable alternative. Proof artifacts, historical records, and verification inputs should not be revocable at a provider’s discretion; they must live on infrastructure that is economically aligned with the protocol and structurally difficult to shut down.
Hyperscalers should be treated as optional accelerators, not as the foundation of the product. The cloud can still help with reach and burst capacity, but the system’s ability to generate proofs and to persist what verification depends on should not be controlled by a single vendor.
In such a system, if hyperscalers disappeared tomorrow, the network would merely slow down. Better still, the critical components are owned and operated across a wider network rather than rented from a brand-name chokepoint.
This is a way to strengthen the decentralized spirit of cryptocurrencies.

