August 25, 2025

How to build larger, more reliable quantum computers

UC Riverside scientists link multiple quantum chips to grow quantum systems

Author: Iqbal Pittalwala

While quantum computers are already being used for research in chemistry, materials science, and data security, most are still too small to be useful for large-scale applications. A study led by researchers at the University of California, Riverside, now shows how “scalable” quantum architectures — systems made up of many small chips working together as one powerful unit — can be made.

In the study, published as a letter in the journal Physical Review A, the researchers simulated realistic architectures and found that even imperfect links between quantum chips can still produce a functioning, fault-tolerant quantum system — a leap forward in scaling quantum hardware. 


“Our work isn’t about inventing a new chip,” said Mohamed A. Shalby, the first author of the paper and a doctoral candidate in the UCR Department of Physics and Astronomy. “It’s about showing that the chips we already have can be connected to create something much larger and still work. That’s a foundational shift in how we build quantum systems.”

Scaling refers to a system’s ability to handle growing workloads without a loss of performance. Fault tolerance means a quantum system can detect and correct errors automatically, producing reliable outputs even on imperfect hardware.

“In practice, connecting multiple smaller chips has been difficult,” Shalby said. “Connections between separate chips — especially those housed in separate cryogenic refrigerators — are much noisier than operations within a single chip. This increased noise can overwhelm the system and prevent error correction from working properly.”

The UCR-led team found, however, that even when the links between chips were up to 10 times noisier than the chips themselves, the system still managed to detect and correct errors.

“This means we don’t have to wait for perfect hardware to scale quantum computers,” Shalby said. “We now know that as long as each chip is operating with high fidelity, the links between them can be ‘good enough’ — not perfect — and we can still build a fault-tolerant system.”
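The intuition behind “good enough” links can be illustrated with a toy model. The sketch below is not the surface code architecture the team simulated; it uses a simple repetition code split across two hypothetical chips, where bits on the second chip must also cross a link that is ten times noisier than on-chip operations. The function name, parameters, and noise model are illustrative assumptions, not the paper’s method.

```python
import random

def modular_logical_error_rate(p_chip, link_factor=10.0,
                               n_per_chip=5, trials=200_000, seed=1):
    """Toy two-chip repetition code for a logical 0.

    n_per_chip physical bits live on each of two chips. Every bit
    suffers on-chip noise p_chip; bits on the second chip additionally
    cross a link that is link_factor times noisier. A majority vote
    over all bits recovers the logical value.
    """
    rng = random.Random(seed)
    p_link = min(1.0, link_factor * p_chip)
    errors = 0
    for _ in range(trials):
        bits = []
        for _ in range(n_per_chip):       # chip A: on-chip noise only
            bits.append(1 if rng.random() < p_chip else 0)
        for _ in range(n_per_chip):       # chip B: on-chip noise ...
            b = 1 if rng.random() < p_chip else 0
            if rng.random() < p_link:     # ... plus a noisy link crossing
                b ^= 1
            bits.append(b)
        if sum(bits) > n_per_chip:        # majority vote decodes wrongly
            errors += 1
    return errors / trials

# With 0.5% on-chip error and a 10x noisier link, the decoded logical
# error rate stays far below even the on-chip physical error rate.
print(modular_logical_error_rate(0.005))
```

In this toy model the redundancy absorbs the extra link noise: a decoding failure needs a majority of bits to flip, which remains unlikely as long as the per-bit rates stay small, mirroring the paper’s finding that links need not match on-chip fidelity.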

Shalby explained that in quantum computing, where a qubit is the basic unit of information, achieving reliable performance requires more than just building a few qubits. Today, individual “logical” qubits must be built out of clusters of many physical qubits, often hundreds or thousands, he said. This redundancy helps correct errors that naturally arise in fragile quantum systems.
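The redundancy idea can be sketched with the simplest error-correcting code. The example below is a minimal repetition-code demo in Python, not the surface code used on real hardware: one logical bit is copied onto several physical bits, noise flips each copy independently, and a majority vote recovers the logical value whenever fewer than half the copies flipped. All names and parameters here are illustrative.

```python
import random

def encode(bit, n=3):
    """Encode one logical bit into n physical bits (repetition code)."""
    return [bit] * n

def apply_noise(bits, p, rng):
    """Flip each physical bit independently with probability p."""
    return [b ^ 1 if rng.random() < p else b for b in bits]

def decode(bits):
    """Majority vote: recovers the logical bit if fewer than half flipped."""
    return int(sum(bits) > len(bits) // 2)

def logical_error_rate(p, n=3, trials=100_000, seed=0):
    """Monte Carlo estimate of how often decoding a logical 0 fails."""
    rng = random.Random(seed)
    errors = sum(decode(apply_noise(encode(0, n), p, rng)) != 0
                 for _ in range(trials))
    return errors / trials

# Redundancy suppresses errors: with a 5% flip rate per physical bit,
# the 3-bit logical error rate is roughly 3p^2, well below 5%,
# and adding more physical bits suppresses it further.
print(logical_error_rate(0.05, n=3))
print(logical_error_rate(0.05, n=5))
```

Real devices use far more elaborate codes and many more physical qubits per logical qubit, but the principle is the same: so long as physical errors are rare enough, more redundancy means a more reliable logical qubit.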

According to Shalby, the most widely used error correction technique is called the surface code, and a “surface code chip” is a quantum processor designed around this method. He said such chips can encode high-fidelity logical qubits by managing and correcting the errors within their own architecture.

The team’s discovery is based on thousands of simulations across multiple architectures and connection methods. The researchers tested six different modular designs under varying levels of error and noise, using realistic parameters inspired by Google’s existing quantum infrastructure.

“Until now, most quantum milestones focused on increasing the sheer number of qubits,” Shalby said. “But without fault tolerance, those qubits aren’t useful. Our work shows we can build systems that are both scalable and reliable — now, not years from now.”

This research, motivated by published work done at the Massachusetts Institute of Technology, was supported by the National Science Foundation. The simulations were conducted using tools developed by the Google Quantum AI team.

Shalby was joined in the research by Leonid P. Pryadko and Renyu Wang at UCR, as well as Denis Sedov at the University of Stuttgart, Germany.

The title of the paper is “Optimized noise-resilient surface code teleportation interfaces.”
