Algorithm Advances Make Small, Noisy Quantum Computers Viable

Quantum refers to the behavior of particles and energy at subatomic scales. At this scale, things can appear as particles or waves and can exist in more than one place at once. Credit: AGSANDREW/ISTOCKPHOTO.

A new article in Nature Reviews Physics reports that, rather than waiting for fully mature quantum computers to arrive, Los Alamos National Laboratory and other leading institutions have developed hybrid classical/quantum algorithms to extract the most performance, and potentially quantum advantage, from today's noisy, error-prone hardware.

Known as variational quantum algorithms, they use quantum devices to manipulate quantum systems while shifting most of the workload to classical computers, letting those machines do what they currently do best: solve optimization problems.

“Quantum computers promise to outperform classical computers for certain tasks, but they cannot yet run long algorithms on currently available quantum hardware. These devices have too much noise as they interact with the environment, which corrupts the information being processed,” said Marco Cerezo, a physicist specializing in quantum computing, quantum machine learning, and quantum information at Los Alamos and a lead author of the paper. “With variational quantum algorithms, we get the best of both worlds. We can harness the power of quantum computers for tasks that classical computers cannot do easily, then use classical computers to complement the computational power of quantum devices.”

Present-day noisy, intermediate-scale quantum computers have between 50 and 100 qubits, lose their “quantumness” quickly, and lack the error correction that would require many more qubits. Since the late 1990s, however, theoreticians have been developing algorithms designed to run on an idealized large, error-corrected, fault-tolerant quantum computer.

“We cannot implement these algorithms yet because they give nonsense results or require too many qubits. So people realized we needed an approach that adapts to the constraints of the hardware we have: an optimization problem,” said Patrick Coles, a theoretical physicist developing algorithms at Los Alamos and the senior lead author of the paper.

“We found we could turn all the problems of interest into optimization problems, potentially with quantum advantage, meaning the quantum computer beats a classical computer at the task,” Coles said. Those problems include simulations for materials science and quantum chemistry, factoring numbers, big-data analysis, and virtually every application proposed for quantum computers.

The algorithms are called variational because the optimization process varies the algorithm on the fly, as a kind of machine learning. It changes parameters and logic gates to minimize a cost function, a mathematical expression that measures how well the algorithm has performed the task. The problem is solved when the cost function reaches its lowest possible value.
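To make the idea concrete, here is a minimal sketch, not taken from the paper, of a cost function for a hypothetical one-qubit variational circuit with a single rotation parameter. The expectation value is simulated with NumPy and stands in for the estimate a quantum device would return.

```python
# Toy cost function for a one-qubit variational circuit (illustrative only).
# On real hardware, the expectation value would be estimated by the quantum device.
import numpy as np

def cost(theta: float) -> float:
    # Prepare |psi> = RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    # Cost = <psi| Z |psi>, which equals cos(theta); its lowest possible
    # value (-1) is reached at theta = pi, when the qubit is rotated to |1>.
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(psi @ z @ psi)

print(cost(0.0))     # 1.0  (worst value)
print(cost(np.pi))   # -1.0 (lowest possible value: problem "solved")
```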

In an iterative step of the variational quantum algorithm, the quantum computer estimates the cost function, then passes that result back to the classical computer. The classical computer then adjusts the input parameters and sends them back to the quantum computer, which runs the optimization again.
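The loop below is a hypothetical sketch of that feedback cycle, not the authors' code: a simulated "quantum" estimate of the cost, with shot noise standing in for hardware noise, is handed to a simple classical optimizer that updates the parameter and resubmits it.

```python
# Minimal hybrid classical/quantum loop (illustrative sketch with a simulated device).
import numpy as np

rng = np.random.default_rng(0)

def estimate_cost(theta: float, shots: int = 1000) -> float:
    """Stand-in for the quantum computer: sample <Z> for RY(theta)|0> with shot noise."""
    p1 = np.sin(theta / 2) ** 2              # probability of measuring |1>
    ones = rng.binomial(shots, p1)           # noisy, finite-shot measurement outcome
    return (shots - 2 * ones) / shots        # <Z> = P(0) - P(1)

theta, step = 0.1, 0.2
for _ in range(100):
    # Classical side: parameter-shift rule gives the gradient of the cost
    grad = 0.5 * (estimate_cost(theta + np.pi / 2)
                  - estimate_cost(theta - np.pi / 2))
    theta -= step * grad                     # adjust the parameter and resubmit

print(theta, estimate_cost(theta))           # theta converges near pi, cost near -1
```

Even with the noisy, finite-shot estimates, the classical optimizer steers the parameter toward the minimum of the cost function, which is the division of labor the article describes.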


Read the original article by Los Alamos National Laboratory.
