Abstract

The use of convolutional neural networks (CNNs) for image classification has become the standard approach to many computer vision problems. Here we apply pre-trained networks to classify images of non-breaking, plunging and spilling breaking waves. The CNNs are used as basic feature extractors, and a classifier is then trained on top of these networks. The dynamic nature of breaking waves is exploited by using image sequences extracted from videos to gain extra information and improve the classification results. We also see improved classification performance by combining pre-computed image features, such as the optical flow (OF) between image pairs, with infrared images to create new models, with a reduction in errors of up to 60%. The inclusion of this dynamic information improves discrimination between the breaking wave classes. We also provide corrections to a methodology in the literature from which the data originates, to achieve a more accurate assessment of model performance.
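The abstract describes two ingredients that the sketch below illustrates in generic form: a pre-trained CNN used as a fixed feature extractor with a separate classifier trained on top, and pre-computed optical flow between consecutive frames as an additional dynamic feature. This is a minimal illustrative sketch, not the authors' pipeline; the ResNet-50 backbone, logistic-regression classifier, Farnebäck optical flow, and the toy random frames are all assumptions chosen for demonstration.

```python
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import LogisticRegression

# Pre-trained ResNet-50 with its classification head removed, used purely
# as a fixed feature extractor (no fine-tuning).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def sequence_features(frames):
    """CNN features averaged over a short frame sequence, plus a simple
    summary of the dense optical flow between consecutive frames."""
    with torch.no_grad():
        batch = torch.stack([preprocess(f) for f in frames])
        cnn = backbone(batch).mean(dim=0).numpy()   # (2048,) pooled over frames

    flow_mags = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        prev_g = cv2.cvtColor(prev, cv2.COLOR_RGB2GRAY)
        nxt_g = cv2.cvtColor(nxt, cv2.COLOR_RGB2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_g, nxt_g, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flow_mags.append(np.linalg.norm(flow, axis=2).mean())
    return np.concatenate([cnn, [np.mean(flow_mags)]])

# Hypothetical toy data: random frames standing in for infrared video
# sequences, labelled 0 = non-breaking, 1 = plunging, 2 = spilling.
rng = np.random.default_rng(0)
sequences = [[rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)
              for _ in range(4)] for _ in range(6)]
labels = [0, 0, 1, 1, 2, 2]

# Train a simple classifier on top of the frozen CNN + flow features.
X = np.stack([sequence_features(seq) for seq in sequences])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
```

In a real experiment the random frames would be replaced by the infrared wave sequences, and the single flow-magnitude statistic could be replaced by richer flow features or flow images fed through the CNN itself.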
