Abstract

We employed Neural Architecture Search (NAS) in this paper and compared it with the classic convolutional neural network architectures LeNet-15 and VGG16 (Visual Geometry Group). The objective was to optimize performance on an early-warning forest fire image classification task. We adopted the public dataset "Forest Fire Early Warning 2 Classification" from the Baidu PaddlePaddle platform, which comprises images of fire and no-fire scenes under various environmental conditions. A convolutional neural network (CNN) model was automatically designed through AutoKeras and trained for 20 epochs. In the experimental results, NAS achieved the highest accuracy at 92%, surpassing LeNet-15 (83%) and VGG16 (49%). However, its training time was longer at 33 seconds, and its GPU utilization was higher, ranging from 28% to 33%. Although NAS leaves room for improvement in training time and resource utilization, its high accuracy on the early forest fire warning image classification task demonstrates its suitability for complex image classification and its potential for future research and application.
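
The following is a minimal sketch of the kind of AutoKeras workflow the abstract describes (automatic CNN design trained for 20 epochs); the dataset paths, image size, and the max_trials search budget are assumptions for illustration, not details taken from the paper.

```python
import autokeras as ak
import tensorflow as tf

# Assumed directory layout: fire/ and no_fire/ subfolders of image files.
train_data = tf.keras.utils.image_dataset_from_directory(
    "forest_fire_dataset/train", image_size=(128, 128), batch_size=32
)
val_data = tf.keras.utils.image_dataset_from_directory(
    "forest_fire_dataset/val", image_size=(128, 128), batch_size=32
)

# Let AutoKeras search over candidate CNN architectures;
# max_trials is an assumed search budget.
clf = ak.ImageClassifier(max_trials=3, overwrite=True)

# Train each candidate architecture for 20 epochs, as reported in the paper.
clf.fit(train_data, validation_data=val_data, epochs=20)

# Evaluate the best architecture found and export it as a standard Keras model.
print(clf.evaluate(val_data))
best_model = clf.export_model()
best_model.summary()
```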
