Abstract

Data fusion enables the characterisation of an object using multiple datasets collected by various sensors. To improve optical coordinate measurement using data fusion, researchers have proposed numerous algorithms and methods. The most popular examples are the Gaussian process (GP) and weighted least-squares (WLS) algorithms, which depend on user-defined mathematical models describing the geometric characteristics of the measured object. Existing research indicates that GP algorithms have been widely applied in both academia and industry, although their use has been limited to relatively simple geometries. Research on WLS algorithms is less common than research on GP algorithms, as the underlying mathematical tools are too simple to handle complex geometries. Machine learning is increasingly being applied to data fusion, and although research on this approach is still relatively scarce, recent work has highlighted its potential with significant results. Unlike GP and WLS algorithms, machine learning algorithms can autonomously learn the geometrical features of an object. To understand existing research in depth and explore a path for future work, a new taxonomy of data fusion algorithms is proposed, covering the mathematical background and existing research surrounding each algorithm type. To conclude, the advantages and limitations of the existing methods are reviewed, highlighting issues related to data quality and the types of test artefacts.
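To make the WLS idea mentioned in the abstract concrete, the following is a minimal sketch (not the paper's method) of inverse-variance weighted fusion, assuming two sensors that measure the same surface points with known, independent noise variances:

```python
import numpy as np

def wls_fuse(z1, var1, z2, var2):
    """Inverse-variance weighted least-squares fusion of two measurement sets.

    Each reading is weighted by the inverse of its noise variance, so the
    more precise sensor dominates the fused estimate.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # variance of the fused estimate
    return fused, fused_var

# Hypothetical example: a coarse sensor (variance 0.04) and a fine
# sensor (variance 0.01) measuring the same three surface heights.
z_coarse = np.array([1.02, 2.10, 2.95])
z_fine = np.array([1.00, 2.00, 3.01])
fused, fused_var = wls_fuse(z_coarse, 0.04, z_fine, 0.01)
print(fused)      # fused heights, pulled toward the fine sensor
print(fused_var)  # 0.008 -- lower than either sensor's variance alone
```

The fused variance is always below that of either sensor, which is the basic appeal of WLS fusion; the limitation the abstract notes is that such simple weighting models struggle with complex geometries.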
