A Brief Overview of Quantum Machine Learning

By: Peter Wittek, Posted on: July 3, 2014

Over the past decade, dozens of papers have appeared on using quantum algorithms for machine learning, that is, using the properties of elementary particles to find patterns in data. The fit seems natural: quantum mechanics and quantum information theory use a large amount of linear algebra, and so does machine learning. Today we have many learning algorithms with a quantum variant, and here we observe some general, non-technical characteristics that describe the various approaches, without attempting to be comprehensive. For a generic overview, please refer to the table below.

Many quantum learning algorithms rely on Grover’s search (Grover, 1996), an algorithm that finds an element in an unordered set quadratically faster than any classical method. Unsupervised learning methods in particular take this approach: K-medians, hierarchical clustering, and quantum manifold embedding. Quantum associative memory, quantum neural networks, and an early version of quantum support vector machines also build on this search. In total, about half of all the methods proposed for learning in a quantum setting use Grover’s.
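To give a feel for where the quadratic speedup comes from, the following sketch simulates Grover's amplitude amplification classically on a small search space; the function name and setup are illustrative and not taken from any of the papers above. After roughly π/4·√N oracle calls, nearly all of the probability mass sits on the marked element, whereas an unstructured classical search would need on the order of N lookups.

```python
import math

def grover_search(n_items, marked, iterations=None):
    """Classically simulate Grover's search over n_items basis states."""
    # Start in the uniform superposition: every amplitude is 1/sqrt(N).
    amp = [1.0 / math.sqrt(n_items)] * n_items
    if iterations is None:
        # The optimal number of iterations is roughly pi/4 * sqrt(N).
        iterations = int(math.pi / 4 * math.sqrt(n_items))
    for _ in range(iterations):
        # Oracle: flip the sign of the marked element's amplitude.
        amp[marked] = -amp[marked]
        # Diffusion operator: reflect every amplitude about the mean.
        mean = sum(amp) / n_items
        amp = [2.0 * mean - a for a in amp]
    return [a * a for a in amp]  # measurement probabilities

probs = grover_search(64, marked=42)
print(probs[42])  # close to 1 after only ~6 oracle calls
```

The repeated sign flip and reflection rotate the state toward the marked item; running too many iterations would rotate past it, which is why the iteration count matters.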

Since Grover’s search offers a quadratic speedup, that is also the limit on how much faster learning methods that rely on it can become. Exponential speedup is possible only in scenarios where both the input and the output are quantum: listing class membership or reading the classical data even once would already imply at least linear time complexity, which in turn limits the speedup to polynomial order. Examples that use quantum data include quantum principal component analysis, quantum K-means, and a different flavor of quantum support vector machines. Regression based on quantum process tomography requires an optimal input state and, in this regard, needs a quantum input. At a higher level of abstraction, it is also possible to define a class of problems that can only be learned in polynomial time by quantum algorithms using quantum input (Gavinsky, 2012).
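As a back-of-the-envelope illustration of these speedup classes (the numbers are order-of-magnitude query counts, not drawn from any of the cited papers), the snippet below compares a linear classical scan, the √N cost of Grover-based methods, and the log N cost characteristic of exponential speedups that assume quantum input and output:

```python
import math

# Rough query counts for a problem over N items:
# classical scan ~ N, Grover-based ~ sqrt(N), quantum-data ~ log2(N).
for n in (10**3, 10**6, 10**9):
    classical = n
    grover = math.isqrt(n)          # integer square root
    quantum_data = math.ceil(math.log2(n))
    print(f"N={n:>10}  classical={classical:>10}  "
          f"grover={grover:>6}  quantum-data={quantum_data:>3}")
```

At a billion items, the gap between roughly 31,623 Grover queries and about 30 quantum-data queries shows why reading classical data even once erases most of an exponential advantage.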

Table 1
A generic overview of quantum learning methods. The algorithm column names the classical learning method. The papers column refers to the most important papers related to the quantum variant. The Grover column indicates whether the algorithm uses Grover’s search or an extension thereof. The speedup column indicates how much faster the quantum variant is compared to the best known classical version. Quantum data refers to whether the input, output, or both are quantum states, as opposed to states prepared from classical vectors. The column generalization performance refers to whether this quality of the learning algorithm was studied in the relevant papers. Implementation refers to attempts to develop a physical realization.

Generalization performance estimates how well a learning algorithm will perform on data it has not seen during training. Naturally, we would like algorithms that are not only fast to train but also generalize well. Curiously, few authors have been interested in the generalization performance of quantum learning algorithms. Analytical investigations are especially sparse; quantum boosting by adiabatic quantum computing is a notable exception, along with a form of quantum support vector machines. Numerical comparisons favor quantum methods in the case of quantum neural networks and quantum nearest neighbors.

We are far from developing scalable universal quantum computers. Learning methods, however, do not require universal hardware: special cases of quantum computing are attainable with current technology. A controversial example is adiabatic quantum optimization applied to large-scale learning problems, most notably boosting. Small-scale implementations of quantum perceptrons and neural networks are more modest but better founded. As theoretical work and advances in quantum technology continue, we can expect even more quantum learning algorithms in the next few years.

About Peter Wittek

Peter Wittek received his PhD in Computer Science from the National University of Singapore, and he also holds an MSc in Mathematics. He is interested in interdisciplinary synergies, such as scalable learning algorithms on supercomputers, computational methods in quantum simulations, and quantum machine learning. He has been involved in major EU research projects, and obtained several academic and industry grants.

Affiliated with the University of Borås, he works location-independently and has done research stints at several institutions, including the Indian Institute of Science, the Barcelona Supercomputing Center, Bangor University, Tsinghua University, the Centre for Quantum Technologies, and the Institute of Photonic Sciences.

He is the author of Quantum Machine Learning: What Quantum Computing Means to Data Mining, available on the Elsevier Store at a 25% discount. This book bridges the gap between abstract developments in quantum computing and the applied research on machine learning. It captures a broad array of highly specialized content in an accessible and up-to-date review of the growing academic field of quantum machine learning and its applications in industry.


References

Anguita, D., Ridella, S., Rivieccio, F. and Zunino, R. (2003). Quantum optimization for training support vector machines, Neural Networks, vol. 16, iss. 5, pp. 763-770.

Aïmeur, E., Brassard, G. and Gambs, S. (2013). Quantum speed-up for unsupervised learning, Machine Learning, vol. 90, iss. 2, pp. 261-287.

Bisio, A., D’Ariano, G. M., Perinotti, P. and Sedlák, M. (2011). Quantum learning algorithms for quantum measurements, Physics Letters A, vol. 375, pp. 3425-3434.

Gavinsky, D. (2012). Quantum Predictive Learning and Communication Complexity with Single Input, Quantum Information & Computation, vol. 12, iss. 7-8, pp. 575-588.

Grover, L. K. (1996). A Fast Quantum Mechanical Algorithm for Database Search, in Proceedings of STOC '96, 28th Annual ACM Symposium on Theory of Computing, pp. 212-219.

Lloyd, S., Mohseni, M. and Rebentrost, P. (2013a). Quantum algorithms for supervised and unsupervised machine learning, arXiv:1307.0411.

Lloyd, S., Mohseni, M. and Rebentrost, P. (2013b). Quantum principal component analysis, arXiv:1307.0401.

Narayanan, A. and Menneer, T. (2000). Quantum artificial neural network architectures and components, Information Sciences, vol. 128, iss. 3-4, pp. 231-255.

Rebentrost, P., Mohseni, M. and Lloyd, S. (2013). Quantum support vector machine for big feature and big data classification, arXiv:1307.0471.

Trugenberger, C. A. (2001). Probabilistic Quantum Memories, Physical Review Letters, vol. 87, p. 67901.

Ventura, D. and Martinez, T. (2000). Quantum associative memory, Information Sciences, vol. 124, iss. 1, pp. 273-296.

Wiebe, N., Kapoor, A. and Svore, K. M. (2014). Quantum Nearest Neighbor Algorithms for Machine Learning, arXiv:1401.2142.

