Abstract

Quantum machine learning (QML) is an emerging field that promises performance enhancements over many classical machine learning (ML) algorithms. Data re-uploading is a QML technique focused on exploiting the power of a single qubit as an individually capable classifier. Recent studies have explored data re-uploading in classification settings; however, important aspects are often omitted from experiments, which may limit our understanding of the methodology's performance. In this work, we analyse the single-qubit data re-uploading methodology with respect to the effect that system depth has on classification performance and on robustness against environmental noise during training. The aim is to bridge previous works, solidify the concepts underpinning the methodology, and provide insight into how well it transfers to non-synthetic data. To further demonstrate the findings, we also analyse the results of a case study using a subset of the MNIST data set. Our experimental results indicate that increasing system depth can lead to higher classification performance, as well as improved stability during training in noisy environments, with the sharpest performance improvements occurring between one and three uploading layer repetitions. Building on these results, we suggest areas for further exploration to maximise classification performance when using the data re-uploading methodology.
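For context, the sketch below illustrates the general shape of a single-qubit data re-uploading classifier of the kind described above, where system depth corresponds to the number of uploading layer repetitions. The layer structure and parameter names are illustrative assumptions, not the exact circuit used in this work.

```python
# Minimal NumPy sketch of a single-qubit data re-uploading classifier
# (hypothetical layer form: each layer applies a general rotation whose
# angles combine the data point with trainable parameters).
import numpy as np

def rz(a):
    return np.array([[np.exp(-1j * a / 2), 0],
                     [0, np.exp(1j * a / 2)]])

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def layer(x, theta, w):
    # Re-upload the (padded) data point inside the trainable rotation:
    # U(theta + w * x) built from Rz-Ry-Rz with three combined angles.
    a = theta + w * x
    return rz(a[2]) @ ry(a[1]) @ rz(a[0])

def classifier_output(x, params, depth):
    """Probability of measuring |0> after `depth` uploading layers;
    a simple binary classifier thresholds this at 0.5."""
    state = np.array([1.0 + 0j, 0.0 + 0j])
    for l in range(depth):
        theta, w = params[l]
        state = layer(x, theta, w) @ state
    return np.abs(state[0]) ** 2

# Example: a 2D point zero-padded to three angles, depth of 3 layer repetitions.
rng = np.random.default_rng(0)
x = np.array([0.3, -0.7, 0.0])
params = [(rng.normal(size=3), rng.normal(size=3)) for _ in range(3)]
print(classifier_output(x, params, depth=3))
```

Increasing `depth` simply repeats the uploading layer with fresh trainable parameters, which is the notion of system depth analysed in this work.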

Highlights

  • Quantum machine learning is a rapidly expanding domain, bringing promising performance enhancements through complex feature-space representations [1]–[5] and, in some cases, exponential reductions in computational complexity relative to equivalent classical algorithms [6]–[11]

  • The aim of this work is to bridge knowledge between previous works, determine any correlation between system depth and performance using the data re-uploading methodology, test robustness of the system when using different depths, and provide an indication of how this methodology may perform on non-artificially generated data

  • In this work, we have conducted an analysis of the data re-uploading methodology, using a single qubit only


Introduction

Quantum machine learning is a rapidly expanding domain, bringing promising performance enhancements through complex feature-space representations [1]–[5] and, in some cases, exponential reductions in computational complexity relative to equivalent classical algorithms [6]–[11]. Variational quantum circuits (VQCs) often appear to be initialised using circuit structures and designs that are seemingly chosen at random, or with very little justification. Whilst this may work fine in certain scenarios, we need to examine which aspects of these circuits improve performance, and whether certain features, such as the depth of the circuit, are most beneficial. Measures of circuit capability, referred to as 'expressibility' and 'entangling capability', were initially explored in [21]. This was furthered in [22], where the performance of these circuits was compared in a classification setting. These studies suggest that the expressibility and performance of VQCs begin to plateau at a certain point, and that this point may vary depending on the circuit used.
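As a rough illustration of the expressibility idea from [21], the sketch below estimates how close the distribution of single-qubit states produced by a randomly parametrised layered circuit is to the Haar (uniform) distribution, by comparing sampled state fidelities against the Haar fidelity density. The ansatz and sampling choices here are assumptions for illustration, not the circuits benchmarked in [21] or [22].

```python
# Sketch of an expressibility estimate for a single-qubit layered circuit:
# sample pairs of random-parameter outputs, histogram their fidelities, and
# compute the KL divergence to the Haar fidelity density (uniform for 1 qubit).
import numpy as np

def rz(a):
    return np.array([[np.exp(-1j * a / 2), 0], [0, np.exp(1j * a / 2)]])

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def random_state(depth, rng):
    # Each layer is a general rotation with independently sampled angles.
    state = np.array([1.0 + 0j, 0.0 + 0j])
    for _ in range(depth):
        a = rng.uniform(0, 2 * np.pi, size=3)
        state = rz(a[2]) @ ry(a[1]) @ rz(a[0]) @ state
    return state

def expressibility(depth, samples=2000, bins=50, seed=0):
    """KL divergence between sampled fidelities and the Haar distribution
    (lower = more expressive)."""
    rng = np.random.default_rng(seed)
    fids = np.empty(samples)
    for i in range(samples):
        psi, phi = random_state(depth, rng), random_state(depth, rng)
        fids[i] = np.abs(np.vdot(psi, phi)) ** 2
    hist, _ = np.histogram(fids, bins=bins, range=(0, 1), density=True)
    haar = np.ones(bins)  # Haar fidelity density is uniform on [0, 1] for one qubit
    mask = hist > 0
    return np.sum(hist[mask] / bins * np.log(hist[mask] / haar[mask]))

for d in (1, 2, 3):
    print(d, expressibility(d))
```

Under these assumptions, one would expect the KL value to decrease and then level off as depth grows, consistent with the plateau behaviour noted above.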

