Abstract

In recent years, convolutional neural network (CNN)-based methods have been widely used for optical remote sensing object detection and have shown excellent performance. Some aerospace systems, such as satellites or aircraft, need to adopt these methods to observe objects on the ground. Because the logic resources and power budget of such systems are limited, an embedded device is a good choice for implementing CNN-based methods. However, striking a balance between performance and power consumption remains a challenge. In this paper, we propose an efficient hardware-implementation method for optical remote sensing object detection. First, we optimize the CNN-based model for hardware implementation, which establishes a foundation for efficiently mapping the network onto a field-programmable gate array (FPGA). In addition, we propose a hardware architecture for the CNN-based remote sensing object detection model. In this architecture, a general processing engine (PE) is proposed to implement multiple types of convolutions in the network with a uniform module. An efficient data storage and access scheme is also proposed; it achieves low-latency calculation and a high memory bandwidth utilization rate. Finally, we deployed the improved YOLOv2 network on a Xilinx ZYNQ xc7z035 FPGA to evaluate the performance of our design. The experimental results show that the mean average precision (mAP) of our FPGA implementation is only 0.18% lower than that of a graphics processing unit (GPU). At a 200 MHz working frequency, our design achieves a throughput of 111.5 giga-operations per second (GOP/s) with 5.96 W of on-chip power consumption. Comparison with related works demonstrates that the proposed design has clear advantages in energy efficiency and is suitable for deployment on embedded devices.
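The energy-efficiency claim can be read directly from the reported figures: throughput divided by power gives GOP/s per watt. The short Python snippet below is only a sanity-check calculation using the numbers stated in the abstract; the resulting ~18.7 GOP/s/W value is our derivation, not a figure quoted by the authors.

```python
# Energy-efficiency estimate derived from the figures reported in the abstract.
# The GOP/s-per-watt result is computed here for illustration only.

throughput_gops = 111.5   # reported throughput at 200 MHz (GOP/s)
power_w = 5.96            # reported on-chip power consumption (W)

energy_efficiency = throughput_gops / power_w  # GOP/s per watt
print(f"Energy efficiency: {energy_efficiency:.2f} GOP/s/W")  # ~18.71 GOP/s/W
```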

Highlights

  • Object detection is an important research topic in remote sensing image processing

  • We propose a hardware implementation method for convolutional neural network (CNN)-based optical remote sensing object detection under power-limited conditions

  • We effectively reduced the scale of the network and the resource requirements during deployment


Summary

Introduction

Object detection is an important research topic in remote sensing image processing. Object detection in optical remote sensing images aims to locate the objects belonging to categories of interest in a given remote sensing image [1]. Many works have constructed onboard remote sensing systems to perform object detection in real time on satellites or aircraft [8,12]. This solution has been shown to be more efficient than traditional solutions [12]. In Reference [4], an improved YOLOv2 network was proposed for remote sensing object detection. This network adopts dilated convolution and transposed convolution to improve performance for multiscale objects in complex optical remote sensing scenes, and it strikes a balance between model complexity and object detection performance. Its main layers are the convolutional layer, the pooling layer, the batch normalization layer, and the activation function.
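As a rough illustration of the layer types named above, the PyTorch-style sketch below stacks a standard convolution, batch normalization, an activation function, pooling, a dilated convolution, and a transposed convolution. The channel counts, kernel sizes, and layer ordering are illustrative assumptions and do not reproduce the improved YOLOv2 architecture of Reference [4].

```python
import torch
import torch.nn as nn

# Illustrative building blocks only; channel counts, kernel sizes, and layer
# ordering are assumptions, not the improved YOLOv2 of Reference [4].
class DetectionBackboneSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),         # standard convolution
            nn.BatchNorm2d(32),                                  # batch normalization
            nn.LeakyReLU(0.1),                                   # activation function
            nn.MaxPool2d(kernel_size=2, stride=2),               # pooling layer
            nn.Conv2d(32, 64, kernel_size=3, padding=2,
                      dilation=2),                               # dilated convolution
            nn.BatchNorm2d(64),
            nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2), # transposed convolution
        )

    def forward(self, x):
        return self.features(x)

# Example: a 416x416 RGB tile, a common YOLOv2 input size.
if __name__ == "__main__":
    model = DetectionBackboneSketch()
    out = model(torch.randn(1, 3, 416, 416))
    print(out.shape)  # torch.Size([1, 32, 416, 416])
```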
