Abstract

In this article, we address the problem of hogweed detection using a drone equipped with red, green, blue (RGB) and multispectral cameras. We study two approaches: 1) offline detection running on the orthophoto of the area scanned within the mission and 2) real-time scanning from the frame stream directly on the edge device performing the flight mission. We show that fusing information from an additional multispectral camera installed on the drone can boost detection quality, and that this gain can be preserved even with a single RGB camera setup by introducing an additional convolutional neural network, trained with transfer learning, that produces synthetic multispectral images directly from the RGB stream. This approach either eliminates the multispectral hardware from the drone or, if only the RGB camera is at hand, boosts segmentation performance at the cost of a slight increase in computational budget. To support this claim, we performed an extensive study of network performance in simulations of both the real-time and offline modes, achieving an improvement of at least 1.1% in mean intersection over union (mIoU) when evaluated on the RGB stream from the camera and 1.4% when evaluated on orthophoto data. Our results show that, with proper optimization, the multispectral camera can be eliminated from the flight mission entirely by adding a preprocessing stage to the segmentation network, without loss of quality.
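The approach described above, prepending a convolutional generator of synthetic multispectral channels to the segmentation network, could be sketched roughly as follows. This is a minimal illustration assuming a PyTorch setup; the class names (PseudoMultispectralGenerator, FusedSegmenter), the number of extra channels, and the layer sizes are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn


class PseudoMultispectralGenerator(nn.Module):
    """Small CNN mapping a 3-channel RGB frame to N synthetic
    multispectral channels (illustrative architecture only)."""

    def __init__(self, extra_channels: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, extra_channels, kernel_size=1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.net(rgb)


class FusedSegmenter(nn.Module):
    """Segmentation network fed with RGB concatenated with the
    generated pseudo-multispectral channels."""

    def __init__(self, generator: nn.Module, segmenter: nn.Module):
        super().__init__()
        self.generator = generator
        self.segmenter = segmenter

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        pseudo_ms = self.generator(rgb)              # (B, N, H, W)
        fused = torch.cat([rgb, pseudo_ms], dim=1)   # (B, 3 + N, H, W)
        return self.segmenter(fused)                 # per-pixel class logits


# Hypothetical usage: a 1x1 convolution stands in for the actual
# segmentation backbone used in the study.
segmenter = nn.Conv2d(3 + 5, 2, kernel_size=1)       # 2 classes: background / hogweed
model = FusedSegmenter(PseudoMultispectralGenerator(extra_channels=5), segmenter)
logits = model(torch.randn(1, 3, 256, 256))           # -> (1, 2, 256, 256)
```

In this sketch the generator is the "preprocessing stage" mentioned in the abstract: at inference time only the RGB stream is required, while the segmentation head still receives a multispectral-like input.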
