Abstract

Despite the promise of performance and accuracy improvements of quantum-inspired (QI) algorithms over classical machine learning (ML) algorithms, such gains have not been realized in practice. QI algorithms can theoretically achieve significant speedups under certain sampling assumptions, but they have thus far failed to outperform existing classical ML models in practical applications. The speedup of quantum machine learning (QML) algorithms assumes access to data in quantum random access memory (QRAM), which is a strong assumption given current quantum architectures. QI algorithms assume sample and query (SQ) access to input vectors and to matrix norms via a dynamic data structure. In this paper, we examine the components of these models and their underlying assumptions by surveying recent work on QML and QI machine learning (QIML) algorithms. We limit our study to whether QML and QIML models achieve a speedup over classical ML techniques, rather than to individual proofs of these algorithms. This study highlights the assumptions, currently impractical, on which QML and QIML algorithms depend to achieve a performance advantage over classical ML algorithms.
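To make the SQ access model concrete, the following is a minimal illustrative sketch (not taken from the paper) of what sample and query access to a vector provides: querying an entry, reading the norm, and sampling an index with probability proportional to the squared magnitude of its entry. The class name `SQVector` and the dense-array implementation are assumptions for illustration; the dynamic data structures used in QI algorithms achieve these operations in logarithmic time with a binary tree over squared entries.

```python
import numpy as np

class SQVector:
    """Sketch of sample-and-query (SQ) access to a vector x.

    Provides: query(i) -> x[i], norm() -> ||x||_2, and
    sample() -> index i with probability |x_i|^2 / ||x||_2^2.
    A dense array is used here for clarity; QI algorithms assume a
    dynamic (binary-tree) data structure giving O(log n) updates/samples.
    """

    def __init__(self, x):
        self.x = np.asarray(x, dtype=float)
        self._sq = self.x ** 2              # squared entries
        self._norm_sq = self._sq.sum()      # ||x||^2

    def query(self, i):
        # Read a single entry of the vector.
        return float(self.x[i])

    def norm(self):
        # Euclidean norm of the vector.
        return float(np.sqrt(self._norm_sq))

    def sample(self, rng=None):
        # Draw an index with probability proportional to |x_i|^2.
        rng = rng or np.random.default_rng()
        return int(rng.choice(len(self.x), p=self._sq / self._norm_sq))

sq = SQVector([3.0, 4.0])
print(sq.norm())    # 5.0
print(sq.query(1))  # 4.0
```

Under this model, an index sampled from `SQVector([3.0, 4.0])` is 0 with probability 9/25 and 1 with probability 16/25, which is the length-squared sampling that replaces quantum state preparation in QI algorithms.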
