Abstract

Large-scale multimodal sensor fusion of Internet of Things (IoT) data can be transformed into an N-dimensional classical point cloud. For example, the transformation may fuse three imaging modalities of different natures: LiDAR (light detection and ranging), a set of RGB images, and a set of thermal images. However, such a point cloud is not easy to process because it can contain millions, or even hundreds of millions, of points, and classical computers therefore often crash when operating on a point cloud of multimodal sensor data. Quantum point clouds (QPCs) address the problem of uncertainty in multimodal sensor data, so that precognitive/predictive models can be derived with outcomes of greater certainty than classical information-processing methods achieve. This paper presents early experiments on the first application of a hybrid quantum co-processor to processing quantum point cloud multimodal sensor data from an autonomous racing car. Applied to the more complex case of cave mapping, it then describes the first hybrid classical-quantum co-processor, comprising a graphics processing unit (GPU), a differential pulse-code modulator, and a quantum computer. The GPU comprises a multiple input/output data interface, transformation means for converting a fused depth bitmap of the multimodal sensor data into a point-cloud representation with world coordinates, control logic that manages the multiple input/output data interface, and the differential pulse-code modulator. The quantum co-processor comprises an assembly of quantum computing chips.
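
The depth-bitmap-to-point-cloud transformation mentioned above can be illustrated with a minimal sketch, assuming a standard pinhole camera model; the intrinsics (fx, fy, cx, cy) and the camera-to-world pose below are illustrative placeholders, not values from the paper.

import numpy as np

def depth_to_world_points(depth, fx, fy, cx, cy, cam_to_world):
    """Back-project a fused depth bitmap (H x W, metres) into an N x 3
    point cloud expressed in world coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0  # drop pixels with no depth return
    u, v, z = u.ravel()[valid], v.ravel()[valid], z[valid]

    # Pinhole back-projection from pixel coordinates into the camera frame.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)  # homogeneous, 4 x N

    # Rigid transform (4x4 pose matrix) into world coordinates.
    pts_world = (cam_to_world @ pts_cam)[:3].T
    return pts_world

# Illustrative usage with synthetic data (hypothetical intrinsics and pose).
depth = np.random.uniform(0.5, 50.0, size=(480, 640))
pose = np.eye(4)  # identity pose: camera frame coincides with world frame
cloud = depth_to_world_points(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0,
                              cam_to_world=pose)
print(cloud.shape)  # (307200, 3)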
