Abstract
Optical sensor data fusion has become a research hotspot in information science in recent years. Because of its high accuracy and low cost, it is widely applied in both military and civilian fields, and target recognition is one of its most important research directions. Starting from the imaging characteristics of small optical targets, this paper draws on recent methods from the image processing literature and proposes an algorithmic framework for small target recognition based on the fusion of visible and infrared image data, and it improves the accuracy and stability of target recognition by improving the multisensor information fusion algorithm in the photoelectric meridian tracking system, providing a practical guide for solving the small target recognition problem. To verify the multisensor fusion algorithm conveniently and quickly, a simulation platform for an intelligent vehicle and its experimental environment is built in Gazebo, which supports sensor data acquisition and the vehicle's control and decision functions. The kinematic model of the intelligent vehicle is first described according to the design requirements, and the camera, LiDAR, and vehicle-body coordinate systems are established. Then the imaging models of the depth camera and the LiDAR, the data acquisition principles of the GPS and IMU, and the time synchronization relationships among the sensors are analyzed, and error calibration and data acquisition experiments are completed for each sensor.
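As a concrete illustration of the sensor-model and synchronization steps the abstract mentions, the following minimal Python sketch projects LiDAR points into the depth-camera image with an assumed pinhole model and pairs measurements by nearest timestamp. It is not the paper's code; the intrinsics, extrinsics, and frame conventions below are illustrative placeholders only.

```python
import numpy as np

# Assumed pinhole intrinsics of the depth camera (fx, fy, cx, cy) -- placeholder values.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

# Assumed extrinsic transform from the LiDAR frame to the camera frame.
# A real calibration would also encode the axis change between the two frames.
R_cl = np.eye(3)                      # rotation: LiDAR -> camera
t_cl = np.array([0.1, 0.0, -0.05])    # translation: LiDAR -> camera (metres)

def lidar_to_image(points_lidar):
    """Project Nx3 LiDAR points (LiDAR frame) onto the camera image plane."""
    points_cam = points_lidar @ R_cl.T + t_cl        # transform into camera frame
    in_front = points_cam[:, 2] > 0                  # keep points ahead of the camera
    points_cam = points_cam[in_front]
    uv = points_cam @ K.T                            # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3], points_cam[:, 2]  # (u, v) pixels and depths

def nearest_timestamp(query_t, stamps):
    """Pair a camera frame time with the closest LiDAR/IMU/GPS sample time."""
    stamps = np.asarray(stamps)
    return int(np.argmin(np.abs(stamps - query_t)))

# Example: project two synthetic LiDAR returns and match a frame taken at t = 0.10 s.
pts = np.array([[2.0, 0.5, 0.2], [5.0, -1.0, 0.1]])
pixels, depths = lidar_to_image(pts)
idx = nearest_timestamp(0.10, [0.02, 0.07, 0.12, 0.17])   # -> index of the 0.12 s sample
```

Nearest-timestamp matching is only one simple way to realize the time synchronization described above; interpolation or hardware triggering would be alternatives.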
Highlights
With the rapid development of modern optoelectronic reconnaissance technology, the image acquisition capability, transmission efficiency, and imaging accuracy of visible and infrared reconnaissance systems have improved greatly, and carrying both optical reconnaissance systems on a single platform has become mainstream practice for further improving the effectiveness of reconnaissance platforms under single-sortie conditions [1,2,3]
This paper focuses on a target recognition algorithm based on optical sensor data fusion. Giving full consideration to the advantages and features of the two imaging modalities, it designs a target recognition framework based on optical sensor data fusion, analyzes the strengths of the images obtained by each modality for solving the small target recognition problem, clarifies the overall idea of data fusion, and introduces the target detection method that uses infrared images
The paper presents the infrared-image target detection method, the proposed cyclic clustering method for visible-image target segmentation, and the framework that fuses the infrared detection and visible segmentation results to achieve comprehensive target recognition, which together provide a clear path toward solving this bottleneck problem
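To make the fusion step of this framework concrete, here is a hedged Python sketch that confirms a candidate target only when an infrared detection overlaps a visible-image segment. The box format (x1, y1, x2, y2) and the IoU threshold are assumptions for illustration; the paper's cyclic clustering segmentation is not reproduced here.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def fuse_detections(ir_boxes, visible_segments, iou_threshold=0.3):
    """Keep infrared detections corroborated by at least one visible-image segment."""
    fused = []
    for ir_box in ir_boxes:
        if any(iou(ir_box, seg) >= iou_threshold for seg in visible_segments):
            fused.append(ir_box)
    return fused

# Example: the first IR detection is confirmed, the second has no visible counterpart.
ir = [(100, 100, 120, 118), (300, 40, 315, 52)]
vis = [(98, 102, 122, 120)]
confirmed = fuse_detections(ir, vis)   # -> [(100, 100, 120, 118)]
```

Requiring agreement between the two modalities is one simple decision rule; weighted confidence fusion would be another option within the same framework.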
Summary
With the rapid development of modern optoelectronic reconnaissance technology, the image acquisition capability, transmission efficiency, and imaging accuracy of visible and infrared reconnaissance systems have improved greatly, and carrying both optical reconnaissance systems on a single platform (on water or in the air) has become mainstream practice for further improving the effectiveness of reconnaissance platforms under single-sortie conditions [1,2,3]. These optical sensing platforms acquire large numbers of digital images, and turning them into useful intelligence about the battlefield target situation relies on subsequent image processing to detect, segment, and track the targets [4, 5]. The paper presents the infrared-image target detection method, the proposed cyclic clustering method for visible-image target segmentation, and the framework that fuses the infrared detection and visible segmentation results to achieve comprehensive target recognition, which together provide a clear path toward solving this bottleneck problem.
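For the infrared detection stage mentioned in the summary, one classical approach to highlighting small bright targets is white top-hat filtering followed by thresholding. The sketch below illustrates that idea only; it is not the paper's exact method, and the kernel size and threshold factor are assumed values.

```python
import cv2
import numpy as np

def detect_small_targets_ir(ir_image, kernel_size=9, k_sigma=4.0):
    """Return centroids of small bright blobs in a single-channel infrared image."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    # Top-hat suppresses the slowly varying background and keeps small bright spots.
    tophat = cv2.morphologyEx(ir_image, cv2.MORPH_TOPHAT, kernel)
    # Adaptive threshold: mean plus a few standard deviations of the residual.
    thresh = tophat.mean() + k_sigma * tophat.std()
    _, mask = cv2.threshold(tophat, thresh, 255, cv2.THRESH_BINARY)
    # Connected components give one centroid per candidate target.
    num, _, _, centroids = cv2.connectedComponentsWithStats(mask.astype(np.uint8))
    return centroids[1:]   # drop the background component

# Usage on a synthetic frame containing one small hot cluster:
frame = np.zeros((120, 160), np.uint8)
frame[60:63, 80:83] = 200
targets = detect_small_targets_ir(frame)
```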