Quantum Has a New Scoreboard

13 May 2026
12 min read

On World Quantum Day, I wrote that quantum computing was taking its next step toward enterprise adoption. I asked the industry to start measuring what matters. Here is a more detailed answer to what that means: a new scoreboard for quantum.

The questions are changing. 

Something has shifted in our industry recently. Quietly, almost without anyone declaring it. Customers still want to know how many qubits we have, but more of them are now asking what they can do with their quantum computers this quarter. Investors are still asking about tech roadmaps, but more of them now also ask about deployments. Governments still ask about cloud access, but they also want to know if they can buy a quantum computer of their own. 

We have been talking about an inflection point in quantum recently. I think it has arrived. 

Close to the chip, or close to the customer? 

For a long time, our industry has measured what scientists call “close to the chip” KPIs. Qubit count. Quantum volume. Circuit Layer Operations Per Second (CLOPS), a benchmark introduced by IBM that captures how fast a system can execute the layers of a quantum circuit. Circuit depth. Two-qubit gate fidelity. T1 and T2 coherence times. These are real measurements, and they are necessary. Engineers care about them, and they should. Without such a scoreboard you cannot benchmark your quantum processors. 

Close to the chip, however, is not always close to the customer. We built IQM close to the chip because you have to be. But we also built IQM close to the customer, because that is the point of scaling a business. 

We are mindful not to confuse the two scoreboards. The “close-to-the-chip” scoreboard tells you whether a chip is improving. The “close-to-the-customer” scoreboard tells you whether a customer is getting work done. Both matter. But only one decides whether quantum becomes a useful technology for the people using it. 

The metric that matters most to a customer is not how many gates we can stack inside a coherence window, what physicists call achievable circuit depth. What matters is time to solution. 

Time to solution is how fast a customer gets a useful result for the problem they actually have. It rolls up everything that matters in production. Hardware reliability. Software stack. Calibration. Integration with existing infrastructure. Support response. The technician’s number when something breaks. It is not as elegant as a qubit number on a slide. But it is what enterprises pay for. And it is what we optimize for every day. 

This is one of the reasons we believe in superconducting processors, which run on faster timescales than other approaches. A real quantum workload is not executed in one circuit. It is the same circuit repeated thousands or millions of times, what physicists call shots, and the answer comes from the statistics across all of them. Superconducting gates fire in nanoseconds. Reset between shots and readout at the end of each shot take less than a microsecond. Other modalities provide longer coherence times, but they pay for it in wall-clock time. When a customer needs a solution to a real problem, the clock on the wall is the clock that matters.
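The arithmetic behind this claim is simple enough to sketch. A back-of-envelope estimate, using purely illustrative timing numbers (not measured specs of any vendor's hardware), of how per-shot speed dominates wall-clock time to solution:

```python
# Back-of-envelope wall-clock estimate for a repeated-shot workload.
# All timing numbers below are illustrative assumptions, not vendor specs.

def time_to_solution_s(shots, gate_time_s, circuit_depth, overhead_per_shot_s):
    """Total wall-clock time: (gate time per layer x depth + reset/readout) per shot, times shots."""
    per_shot = gate_time_s * circuit_depth + overhead_per_shot_s
    return shots * per_shot

shots = 1_000_000  # a real workload repeats the same circuit many times
depth = 100        # illustrative number of gate layers

# Fast modality: nanosecond gates, sub-microsecond reset/readout (illustrative).
fast = time_to_solution_s(shots, gate_time_s=50e-9, circuit_depth=depth,
                          overhead_per_shot_s=1e-6)

# Slower modality: longer coherence, but microsecond-scale gates and
# millisecond-scale per-shot overhead (illustrative).
slow = time_to_solution_s(shots, gate_time_s=10e-6, circuit_depth=depth,
                          overhead_per_shot_s=10e-3)

print(f"fast modality: {fast:.1f} s")   # seconds of wall-clock time
print(f"slow modality: {slow:.0f} s")   # hours of wall-clock time
```

With these assumed numbers the fast system finishes in about six seconds and the slow one in about three hours, even though a single circuit runs comfortably within either machine's coherence window. The per-shot clock, multiplied by a million shots, is what the customer actually waits for.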

Cause and catalyst. 

We call this Production Quantum: quantum computing as a permanent, operational part of an institution’s technical capability. Not a remote service you subscribe to. Operational infrastructure you command. I think of Production Quantum now as both the cause and the catalyst of the shift I described above. 

It is the cause because Production Quantum is the model that made the shift visible in the first place. Once you put a quantum computer on a customer’s premises, you have to answer the production question. Not “could this work?” but “is this working?” Every deployment is a forcing function. The customer has the machine. They expect it to do the job. 

It is the catalyst because every deployment compounds. The more systems we deploy, the more the industry will normalize around real-world performance. Every system at LRZ in Germany, VTT in Finland, CINECA in Italy, CESGA in Spain, Aalto, and Oak Ridge National Laboratory in the U.S. is a benchmark for what production looks like. LRZ has just become the first site to run a quantum computer as a Slurm node, scheduled directly through their HPC workload manager. This was made possible by our new HPC Integration Service, a production-ready software environment built for our customers. Other vendors will have to meet that bar, or explain why they cannot.

Enterprises are answering. 

Two recent enterprise sales make the case for Production Quantum. 

Galaxy Systemy Informatyczne in Poland is one of the very first enterprise businesses in the world to buy a quantum computer. They bought ours. A 54-qubit Radiance system. Not access through the cloud. Ownership. 

Two weeks later, in April, Toyo Corporation became the first enterprise in Japan to buy a quantum computer. They bought ours too. 

What these two cases tell me is that enterprises are arriving at the same conclusion at roughly the same time. The value of owning a quantum computer is not just in the work it does this quarter. It is in what compounds. You keep the IP you generate. You build the expertise inside your own walls. You grow with your infrastructure. Year after year, the same engineers go deeper into the same system. The IP gets richer. The value compounds. Pay-as-you-go cloud access has its place. Compounding is not what it offers.

Lab results are exciting. They make great headlines. The trick is to get them out of the lab and to work, where they can run reliably for someone who has a deadline. The deployment model is less photogenic. It involves shipping, installing, integrating, supporting, calibrating, and showing up the next morning to do it again. It is what builds businesses.

Useful now. A foundation for next. 

Production Quantum is more than a description of what we build. It is also a framework for evaluating any investment in quantum, regardless of who is making the decision. I think of it in two parts. 

The first is what it gives you now. A system you own, and a platform that you can build on. Real value in business metrics today, in what we call the noisy intermediate-scale quantum (NISQ) era, the period we are in before fault-tolerant machines arrive. There is useful work to do here, and our customers are doing it.

The second is what it sets up for its owners. The same system, software stack, engineers and workflows. These become the foundation on which you will run fault-tolerant quantum computing when it gets here. Logical qubits and quantum error correction are not a separate world from the one we are building today. They are the next floor on the same building. Production Quantum is useful now, and it is the foundation for what comes next. 

That is why the questions you ask before you buy are largely the same regardless of who you are. When an HPC center decides what to procure, the question is whether the system is production-grade. When a university plans a five-year quantum strategy, the question is which vendor can actually deliver. When a national lab evaluates a system, the question is whether it will run real workloads or sit in a glossy photo. When an enterprise looks at quantum for the first time, the questions are concrete. Can I deploy it on my premises if I want to? Will it integrate with my existing stack? Who supports it when something breaks? How does it scale as my needs grow? Am I locked in, or am I free to build? 

These are the right questions. Production Quantum provides the right answers to them. 

Product is just half of it. 

We are not just building the computer; we are building the business model. The support, the channel, the deployment options, and an ecosystem of partners who can help a customer get from purchase order to working capability in week one, not year five.

We do this because we have to. When we started IQM, the model did not exist. If quantum was going to leave the lab and do useful work for real customers, somebody had to build the model that made that possible. We have believed this from day one. We have been building it since day one. 

That is the actual product. The hardware is necessary. The model is what makes it usable. 

A category, not a feature. 

It is worth being precise about what Production Quantum is and what it is not. 

Pay-as-you-go cloud access has a role in quantum. For learning, for early experimentation, for casual plain-vanilla algorithm exploration, it is fine. But once a customer wants to do something serious, the requirements change: IP they own, capability they control, and scale they can plan around become procurement necessities, not nice-to-haves. They need on-premises systems. Several companies, including some of our peers, agree on that point.

Production Quantum goes a step further. On-premises, yes. Open also. It runs on standard frameworks. It connects into the customer’s existing stack, including the hyperscalers and the open-source toolchains their teams already use. With the launch of our HPC Integration Service, that now includes the HPC scheduler itself: Radiance systems run as Slurm nodes, scheduled and managed like any other accelerator. It is built to scale, both in qubit count and in the business model around it. It treats the ecosystem as part of the product, not an afterthought.
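Concretely, a quantum computer scheduled as a Slurm node means a quantum workload can be submitted like any other batch job. A minimal sketch of what that looks like, in which the partition name `quantum` and the payload `run_circuit.py` are hypothetical placeholders rather than IQM’s actual configuration:

```python
# Sketch: composing a Slurm batch script that targets a QPU-backed node.
# The partition name and payload script are hypothetical placeholders,
# not IQM's actual configuration.

batch_script = """#!/bin/bash
#SBATCH --job-name=quantum-workload
#SBATCH --partition=quantum        # hypothetical partition exposing the QPU node
#SBATCH --nodes=1
#SBATCH --time=00:10:00

# The payload runs like any other HPC job; Slurm handles queueing,
# accounting, and resource allocation for the quantum node.
python run_circuit.py --shots 100000
"""

with open("quantum_job.sbatch", "w") as f:
    f.write(batch_script)

# Submission would then be the ordinary: sbatch quantum_job.sbatch
print("wrote quantum_job.sbatch")
```

The point of the example is the absence of anything exotic: from the workload manager’s perspective, the quantum system is one more schedulable resource in the cluster.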

That combination is what makes Production Quantum a category, not a feature. And it is why we believe superconducting qubits are well suited to it. Speed at the system level. A clear path to scale. A manufacturing process that is closer to semiconductor industry practice than to handcrafted physics. Other modalities will continue to play important roles. For a customer who wants to build something they can rely on now and grow into fault-tolerance later, we think the case for superconducting is strong, and we expect it to keep getting stronger. 

Waiting is the new risk. 

This also addresses a question I hear from almost every enterprise I meet. When is the right time to get into quantum? Until recently, the honest answer was that it depended. Too early was risky. Too late was risky in a different way. Production Quantum starts to flip that calculation. The systems are real. The deployments are real. The business model is real. And the foundation for fault tolerance is being laid in the same systems that are already running useful work. 

The first-mover advantage in quantum used to be a bet. With Production Quantum, it looks more like a head start. The risk is not what it used to be. Waiting may turn out to be the riskier choice. 

The quantum era begins when institutions own it, operate it, and build on it. The largest deployed fleet of superconducting quantum computers in the world is already doing the work. Galaxy and Toyo are the latest to join it. 

The next step is no longer a forecast. It is the calendar. 

Jan 

About the Author

Dr. Jan Goetz
CEO & Co-Founder

Jan Goetz is CEO and Co-founder of IQM Quantum Computers, headquartered in Espoo, Finland. IQM has delivered more quantum systems than any other manufacturer globally and is preparing to become the first European quantum company listed on a major U.S. stock exchange.

