Evaluation of the Classical Hardware Requirements for Large-Scale Quantum Computations

Estimate of the compute intensity as a function of the number of logical qubits and T-depth.


We develop a new model to evaluate the classical computing and networking resources required to support a large-scale fault-tolerant quantum computer based on superconducting qubits and a surface code architecture. We focus specifically on quantum error decoding, the main classical computational task required to enable quantum error correction at runtime. Our model reveals that the quantum computer operates at a logical clock speed in the 100–10,000 Hz range when using state-of-the-art quantum error decoders. For a prototypical large-scale quantum chemistry computation, this translates to an overall runtime on the order of months, and this workload is estimated to generate syndrome data for error correction at a rate of 2–500 Gbps, depending on whether data compression is used. We estimate that the total processing power required for online error-syndrome decoding is about 1 petaflop. Our analysis shows that current computing and networking technology can meet the bandwidth, latency, and compute requirements of large-scale quantum computation. However, major technological challenges remain for both quantum and classical hardware, including scalable fabrication of high-quality qubits, scalable qubit control, and syndrome communication within a limited power budget.
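The syndrome data rate quoted above can be illustrated with a back-of-envelope calculation. The sketch below is not the paper's model; the stabilizer count per patch, the code distance, the logical qubit count, and the 1 MHz syndrome-round rate are all illustrative assumptions chosen only to show how such an estimate scales.

```python
# Hypothetical back-of-envelope estimate of the raw (uncompressed) syndrome
# bandwidth generated by a surface-code quantum computer. All parameter
# values are illustrative assumptions, not figures from the paper.

def syndrome_bandwidth_gbps(n_logical: int, distance: int, round_rate_hz: float) -> float:
    """Raw syndrome data rate in Gbps for n_logical surface-code patches.

    Assumes one bit per stabilizer measurement and roughly d^2 - 1
    stabilizers per distance-d logical patch (a common approximation).
    """
    stabilizers_per_patch = distance**2 - 1
    bits_per_second = n_logical * stabilizers_per_patch * round_rate_hz
    return bits_per_second / 1e9

# Example: 1,000 logical qubits, code distance 21, 1 MHz syndrome rounds.
rate = syndrome_bandwidth_gbps(n_logical=1_000, distance=21, round_rate_hz=1e6)
print(f"{rate:.0f} Gbps")  # prints "440 Gbps"
```

With these assumed parameters the estimate lands within the same hundreds-of-Gbps ballpark as the uncompressed figure cited in the abstract, which is the point of the exercise: the bandwidth is dominated by the product of patch count, stabilizers per patch, and syndrome-round rate.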

May 13, 2024 4:15 PM
ISC24 - Hamburg, Germany

This work won the Hans Meuer best research paper award.

Daan Camps
Researcher in the Advanced Technologies Group

My research interests include quantum algorithms, numerical linear algebra, tensor factorization methods and machine learning. I’m particularly interested in studying the interface between HPC and quantum computing.