Abstract

Computer vision has a wide range of applications, from medical image analysis to robotics. Over the past few years, the field has been transformed by machine learning and stands to benefit from potential advances in quantum computing. The main challenge for processing images on current and near-term quantum devices is the limited amount of data such devices can load. Images can be large, multidimensional, and have multiple color channels. Current machine learning approaches to computer vision that exploit quantum resources require significant manual pre-processing of the images in order to fit them onto the device. This paper proposes a framework to address the problem of processing large-scale data on small quantum devices. The framework requires no dataset-specific processing or information and works on large grayscale and RGB images. Furthermore, it is capable of scaling to larger quantum hardware architectures as they become available. In the proposed approach, a classical autoencoder is trained to compress the image data to a size that can be loaded onto a quantum device. A Restricted Boltzmann Machine (RBM) is then trained on the D-Wave device using the compressed data, and the weights from the RBM are used to initialize a neural network for image classification. Results are demonstrated on two MNIST datasets and two medical imaging datasets.
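The pipeline the abstract describes (classical compression, RBM pre-training, transferring the RBM weights into a classifier) can be sketched classically. The sketch below is a minimal stand-in, not the paper's implementation: a contrastive-divergence (CD-1) RBM trained on toy binary vectors plays the role of the D-Wave-trained RBM, the toy data stands in for autoencoder codes, and all layer sizes, learning rates, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with 1-step contrastive divergence.

    This classical training loop is a stand-in for sampling-based training
    on a quantum annealer, which the paper uses instead.
    """

    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0.0, 0.1, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0, lr=0.05):
        # positive phase on the data, negative phase after one Gibbs step
        ph0, h0 = self.sample_h(v0)
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(v1)
        n = len(v0)
        self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (ph0 - ph1).mean(axis=0)

# Toy binarized "codes" standing in for the autoencoder's compressed output.
data = (rng.random((256, 16)) < 0.5).astype(float)

rbm = RBM(n_visible=16, n_hidden=8)
for _ in range(100):
    rbm.cd1_step(data)

# Transfer: use the trained RBM weights/biases to initialize the first
# layer of a feedforward classifier, then run one forward pass.
W1, b1 = rbm.W.copy(), rbm.b_h.copy()
hidden = sigmoid(data @ W1 + b1)
```

In the paper's framework the negative-phase statistics would come from annealer samples rather than the Gibbs step shown here; the weight-transfer step at the end is the same idea either way.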
