Abstract

The application of computer vision systems on industrial flotation plants has benefited considerably from advances in deep learning over the last decade, mostly based on the use of convolutional neural networks (CNNs) and transfer learning. More recently, vision transformers (ViTs), first introduced in 2020, have attracted strong interest as an alternative to CNNs. Although well established in many other areas, ViTs have not yet been considered meaningfully in machine vision or signal processing applications in mineral processing, despite the obvious benefits their application could realize. In this paper, it is demonstrated that ViTs are neural network architectures highly capable of discriminating between different froth flotation images. A customized ViT model and a pretrained ViT model fine-tuned via transfer learning were studied and compared. The former achieved satisfactory performance and the latter near-perfect performance, both at a significantly lower computational cost than comparable CNNs. These results suggest that ViTs can be a competitive alternative to CNNs in the advancement of computer vision systems on industrial flotation plants.
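
The sketch below illustrates the kind of transfer-learning setup the abstract describes: a ViT backbone pretrained on ImageNet with a new classification head for froth images. It is a minimal illustration, not the authors' implementation; the number of froth classes, the ViT-B/16 backbone choice, and the dummy batch are assumptions for demonstration only.

```python
# Minimal sketch of transfer learning with a pretrained Vision Transformer
# for froth image classification. NUM_FROTH_CLASSES and the backbone choice
# are illustrative assumptions, not details taken from the paper.
import torch
import torch.nn as nn
from torchvision import models

NUM_FROTH_CLASSES = 4  # assumed number of distinct froth classes

# Load a ViT-B/16 backbone pretrained on ImageNet.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)

# Freeze the pretrained backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with one sized for the froth classes.
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_FROTH_CLASSES)

optimizer = torch.optim.Adam(model.heads.head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 froth images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_FROTH_CLASSES, (8,))
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

Freezing the backbone and training only the head is one reason transfer learning can reach strong accuracy at a much lower computational cost than training a network from scratch.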
