Abstract
Synthetic aperture radar (SAR) is valuable in both military and civilian applications thanks to its day-and-night, all-weather, high-resolution imaging and its ability to penetrate camouflage and cover. Within SAR image interpretation, target recognition remains an important research challenge worldwide. As high-resolution SAR has matured, imaged areas have grown larger and new imaging modes have emerged, making manual interpretation increasingly impractical: it is slow, labor-intensive, and prone to subjective error. Intelligent interpretation technology is therefore urgently needed. Although deep convolutional neural networks (CNNs) have proven extremely effective for image recognition, a major drawback is that their parameter count grows with depth. The cost of the convolution operation across all convolutional layers is consequently high, and the inevitable rise in computation as kernel size grows slows learning. This study proposes a multi-stream fast Fourier convolutional neural network (MS-FFCNN) that accepts SAR images through three input streams, and describes the transformation of a rudimentary multi-stream CNN into the MS-FFCNN. By replacing standard convolution with the fast Fourier transform, the network lowers the cost of image convolution and thus the overall computational cost. The multiple streams of the FFCNN mitigate the problem of insufficient sample size, shorten the long training time, and improve recognition accuracy. The proposed method achieved a recognition accuracy of 99.92%.
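The core idea behind FFT-based convolution is the convolution theorem: convolving in the spatial domain is equivalent to elementwise multiplication in the Fourier domain, which reduces the per-layer cost for larger kernels. The sketch below illustrates this equivalence with NumPy for circular 2-D convolution; it is an illustrative example only, not the authors' MS-FFCNN implementation.

```python
import numpy as np

def fft_conv2d(image, kernel):
    """Circular 2-D convolution via the convolution theorem:
    conv(image, kernel) = IFFT(FFT(image) * FFT(kernel_padded))."""
    kh, kw = kernel.shape
    # Zero-pad the kernel to the image size so the spectra align elementwise.
    padded = np.zeros_like(image, dtype=float)
    padded[:kh, :kw] = kernel
    spectrum = np.fft.fft2(image) * np.fft.fft2(padded)
    return np.real(np.fft.ifft2(spectrum))

def direct_circular_conv2d(image, kernel):
    """Naive spatial-domain circular convolution for comparison (O(N^2 K^2))."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            for u in range(kh):
                for v in range(kw):
                    out[i, j] += kernel[u, v] * image[(i - u) % h, (j - v) % w]
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))   # stand-in for a small SAR patch
ker = rng.standard_normal((3, 3))     # stand-in for a learned filter
# Both paths produce the same result, but the FFT path scales as O(N^2 log N)
# regardless of kernel size, which is the source of the claimed savings.
assert np.allclose(fft_conv2d(img, ker), direct_circular_conv2d(img, ker))
```

In a network setting the filter spectra can be cached or learned directly in the frequency domain, so the forward pass avoids repeated spatial convolutions entirely.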