What Makes Quantum Machine Learning “Quantum”?


I started working in quantum computing seven years ago, just after my master’s degree. At that time, the field was full of excitement but also skepticism. Today, quantum computing stands out as an emerging technology, alongside high-performance computing and AI.

Attention has shifted from purely hardware-related research and discussion to applications, software, and algorithms. Quantum computing is really a tool that can be used across different disciplines rather than an isolated field. One of the promising, yet still not fully understood, uses of quantum computers is quantum machine learning.

Quantum machine learning (QML) has become a catch-all term in the past couple of years. One of the earliest and most significant appearances of QML was in 2013, when Google and NASA established the Quantum Artificial Intelligence Lab, which was tasked with exploring how quantum computers could be used in machine learning applications. Since then, the term has appeared in research papers, startup pitches, and conference talks, often with wildly different meanings.

In some cases, it refers to using quantum computers to accelerate machine learning. In others, it describes classical algorithms inspired by quantum physics. And sometimes, it simply means running a familiar ML workflow on unfamiliar hardware.

So even I, someone working on and researching quantum computers, was very confused at first… I bet a lot of people’s first question when they hear “Quantum Machine Learning” is what, exactly, makes quantum machine learning quantum?

Answering this question is why I decided to write this article! The short answer is not speed, nor is it neural networks, nor is it vague references to “quantum advantage.” At its core, quantum machine learning is defined by how information is represented, transformed, and read out. In QML, that is done using the rules of quantum mechanics rather than classical computation.

This article aims to clarify that distinction, separate substance from hype, and provide a clean conceptual foundation for the rest of this series, in which I plan to explore the lore of QML as well as some of its near-term research results and applications.

Machine Learning Before “Quantum”

Before we get all quantum, let’s take a step back. Stripped of its modern trappings, machine learning is about learning a mapping from inputs to outputs using data. Regardless of whether the model is a linear regressor, a kernel method, or a deep neural network, the structure is more or less the same:

  1. Data is represented numerically (vectors, matrices, tensors).
  2. A parameterized model transforms that data.
  3. Parameters are adjusted by optimizing a cost function.
  4. The model is evaluated statistically on new samples.

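The four steps above can be sketched in a few lines of NumPy. This is a toy linear-regression loop, not any particular framework’s API; all names here are illustrative:

```python
import numpy as np

# 1. Data is represented numerically: a toy regression dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w + 0.1 * rng.normal(size=100)

# 2. A parameterized model transforms that data (here, linear in w).
def model(w, X):
    return X @ w

# 3. Parameters are adjusted by minimizing a mean-squared-error cost.
w = np.zeros(2)
for _ in range(500):
    grad = 2 * X.T @ (model(w, X) - y) / len(y)  # exact gradient of the MSE
    w -= 0.1 * grad

# 4. The model is evaluated statistically on new samples.
X_test = rng.normal(size=(20, 2))
y_test = X_test @ true_w
mse = np.mean((model(w, X_test) - y_test) ** 2)
```

Nothing in this loop depends on the model being linear; swapping in a neural network changes steps 2 and 3 but not the overall structure.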
Neural networks, GPUs, and massive datasets are implementation choices and not defining features. This abstraction matters because it lets us ask a precise question:

What changes when the data and the model live in a quantum space?

Quantum Mechanics Enters

Quantum machine learning becomes quantum when quantum information is the computational substrate. This shows up in three ways.

1. Data is represented as quantum states.

In classical machine learning models, data is represented as bits or floating-point numbers. In contrast, quantum machine learning uses quantum states, which are complex vectors that follow the rules of quantum mechanics. These states are often described by density matrices, and their transformations are represented by unitary matrices.

As a result, we encode information in complex-valued amplitudes rather than probabilities, and states can exist in superposition.

This does not mean that all classical data suddenly becomes exponentially compressed or easily accessible. Loading data into quantum states is often costly, and extracting information from them is fundamentally limited by measurement.

So, the important point is that the model operates on quantum states, not classical numbers.
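As a small illustration of what “data as a quantum state” means, here is a hedged NumPy sketch of amplitude encoding: a classical vector’s entries become the amplitudes of a two-qubit state, which requires normalizing the vector to unit length. The vector `x` is arbitrary example data:

```python
import numpy as np

# Amplitude encoding: a 4-dimensional classical vector becomes the
# state of 2 qubits. Its entries turn into complex amplitudes, so the
# vector must first be normalized to unit length.
x = np.array([3.0, 1.0, 2.0, 1.0])
psi = x / np.linalg.norm(x)      # |psi> = sum_i (x_i / ||x||) |i>

# A valid quantum state: squared amplitude magnitudes sum to 1.
probs = np.abs(psi) ** 2         # probabilities of each measurement outcome

# Measurement only ever reveals samples drawn from `probs`, never the
# amplitudes directly -- one reason readout of encoded data is limited.
```

Note that preparing such a state on real hardware is itself a nontrivial circuit, which is part of the data-loading cost mentioned above.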

2. Models Are Quantum Evolutions

Classical ML models apply functions to data. Quantum ML models apply quantum operations (typically unitary transformations, or more generally quantum channels) to quantum states. In practice, many QML models are built from parameterized quantum circuits. These circuits are sequences of quantum gates, which are basic operations that change quantum states. The parameters of these quantum gates are tuned during training, similar to adjusting weights in a neural network in classical machine learning.
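A minimal sketch of such a parameterized circuit, written directly in NumPy rather than any quantum SDK: a single qubit starts in |0⟩, a tunable RY rotation gate is applied, and the expectation value of Pauli-Z is read out. The angle `theta` plays the role of a trainable weight:

```python
import numpy as np

def ry(theta):
    """Parameterized RY rotation gate: a 2x2 unitary matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

ket0 = np.array([1.0, 0.0])      # initial state |0>
Z = np.diag([1.0, -1.0])         # Pauli-Z observable

def circuit_output(theta):
    psi = ry(theta) @ ket0       # apply the tunable gate
    return psi @ Z @ psi         # <psi|Z|psi>, which equals cos(theta)

print(circuit_output(0.0))       # -> 1.0
print(circuit_output(np.pi))     # -> -1.0 (up to float error)
```

Training such a model means adjusting `theta` (and, in real circuits, many such angles) to minimize a cost built from outputs like this one.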

Fundamentally, what is happening in these models is that the system’s evolution is generated by a matrix called a Hamiltonian, which encodes the system’s energy. Each gate we apply implements a unitary of the form U = e^{-iHt}, which tells us how the state evolves (changes) over a certain period of time t. That evolution dictates the model’s behaviour.
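The relationship between a Hamiltonian and the gate it generates can be checked numerically. In this illustrative sketch, the Hamiltonian is the Pauli-X operator, and matrix exponentiation (via SciPy’s `expm`) produces the corresponding unitary:

```python
import numpy as np
from scipy.linalg import expm

# A gate is the unitary generated by a Hamiltonian H acting for time t:
# U = exp(-i H t). Here H is the Pauli-X operator.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
t = np.pi / 2
U = expm(-1j * X * t)            # for this H and t, U equals -i * X

# Evolution is unitary, so it preserves the state's norm: U @ U^dagger = I.
print(np.allclose(U @ U.conj().T, np.eye(2)))   # -> True
```

Changing `t` or the entries of `H` changes the gate, which is exactly what tuning a circuit parameter does.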

As a result, quantum models explore a hypothesis space that is structurally different from that of classical models, even when the training loop appears similar on the surface.

3. Measurement Is Part of the Learning Process

In classical ML, reading out a model’s output is trivial and in no way affects the state or behaviour of the model (unless we intentionally make it so). In quantum ML, however, measurement is probabilistic and destructive: it collapses the state it reads. Because a single measurement yields only one random outcome, outputs are estimated from repeated circuit executions, called ‘shots’: running the same quantum circuit many times and averaging the results.

The gradients (which guide parameter updates during training) are estimated statistically from these measurements rather than computed exactly, as they are in classical machine learning. As a result, the training cost is often dominated by sampling noise from these repeated measurements, rather than by computation alone.

In other words, uncertainty is built into the model itself. Any serious discussion of QML must account for the fact that learning happens through measurement, not after it.

What Does Not Make QML Quantum

Quantum computing in general, and QML in particular, generate plenty of hype and misunderstanding. Many things called “quantum machine learning” today are quantum in name only, for example:

  • Classical ML algorithms run on quantum hardware without making meaningful use of quantum states.
  • “Quantum-inspired” methods that are entirely classical.
  • Hybrid pipelines where the quantum component can be removed without changing the model’s behavior or performance.

If you ever come across a discussion of QML and are unsure how quantum the model really is, a good rule of thumb is to ask:

“Can I replace the quantum part with a classical one without altering the model’s mathematical structure?”

If the answer is yes, or even maybe, the approach is probably not fundamentally quantum. Such work may still be valuable, but it falls outside the core of quantum machine learning.

Where is QML Today?

When discussing quantum computing, remember that current hardware is noisy, small, and resource-constrained. Because of this:

  • There is no general, proven quantum advantage for machine learning tasks today.
  • Many QML models resemble kernel methods more than deep networks.
  • Data loading and noise often dominate performance.

This isn’t a field failure; it’s where quantum computing currently stands. Most QML research now is exploratory: mapping model classes, understanding quantum learning theory, and identifying where quantum structure could matter.

Why Quantum Machine Learning Is Still Worth Studying

If near-term speedups are unlikely, why pursue QML at all?

QML forces us to rethink foundational questions about machine learning and quantum computing. We need to answer what it means to learn from quantum data, how noise affects optimization, and which model classes exist in quantum systems but not in classical systems.

Quantum machine learning is less about outperforming classical ML today and more about expanding the space of what “learning” can mean in a quantum world.

This matters because scientific and technological advances start with new approaches. Even if hardware isn’t ready yet, exploring QML prepares us for better hardware in the future.

Final Thoughts and What Comes Next

Advances in quantum computing are accelerating. Hardware companies are racing to build a fault-tolerant quantum computer: one that can harness the full power of quantum mechanics. Software and application companies are exploring the problems that quantum computing can meaningfully address.

That said, today’s quantum computers cannot yet run a real-world-scale application, let alone a complex machine learning model. Still, the promise of quantum computing’s efficiency in machine learning is worth exploring now, in parallel with hardware advancements.

In this article, I focused on the definitions and boundaries of quantum machine learning to pave the way for future articles that will explore:

  • How classical data is embedded into quantum states.
  • Variational quantum models and their limitations.
  • Quantum kernels and feature spaces.
  • Optimization challenges in noisy quantum systems.
  • Where quantum advantage might plausibly emerge.

Before asking whether quantum machine learning is useful, we need to be clear about what it actually is. The more we step away from the hype, the closer we can move towards progress.




The post What Makes Quantum Machine Learning “Quantum”? first appeared on TechToday.
