Abstract

Convolutional neural networks (CNNs) have achieved strong results in hyperspectral image (HSI) classification in recent years, with wide application in agricultural remote sensing, geological exploration, environmental monitoring, and marine remote sensing. Unfortunately, the complexity of the network structures used for HSI classification severely hinders efficient processing of HSI data. Existing methods carry a large amount of redundancy in their weight parameters during training: they either demand huge computational resources or use storage space inefficiently, and many of the parameters that consume these resources contribute little to conveying the rich spectral and spatial information in HSIs. We therefore introduce LCTCS, a low-memory network approach with few parameters. LCTCS aims to improve the efficiency of computational resource utilization, achieving advanced classification performance at a lower computational cost. Unlike the conventional 2D and 3D convolutions used previously, we employ simple and efficient 3D grouped convolution as the vehicle for conveying the semantic features of HSIs. More specifically, because grouped 3D convolution captures the properties of hyperspectral data well in both the spectral and spatial domains, we design a novel two-channel sparse network to classify HSIs. We compared LCTCS with eight widely used network methods on four publicly available hyperspectral datasets.
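The parameter savings of grouped 3D convolution come from each output channel convolving only its group's slice of the input channels, so the weight count shrinks by the group factor. A minimal sketch of this accounting (the channel counts and kernel size below are illustrative assumptions, not values from the paper):

```python
def conv3d_params(c_in, c_out, k, groups=1, bias=True):
    """Number of learnable parameters in a 3D convolution layer.

    With `groups` > 1, each output channel sees only c_in / groups
    input channels, so the weight tensor shrinks by that factor.
    """
    assert c_in % groups == 0 and c_out % groups == 0
    weights = (c_in // groups) * c_out * k ** 3
    return weights + (c_out if bias else 0)

# Hypothetical layer: 24 -> 24 channels, 3x3x3 kernel.
standard = conv3d_params(24, 24, 3)            # ordinary 3D convolution
grouped = conv3d_params(24, 24, 3, groups=4)   # 3D grouped convolution
print(standard, grouped)  # 15576 3912 — roughly a 4x weight reduction
```

The same factor-of-`groups` reduction applies to the multiply-accumulate count, which is why stacking grouped 3D convolutions keeps both storage and compute low.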
A series of experiments shows that the designed model architecture uses 65.89% less storage space than the DBDA method, consumes 67.36% fewer computational resources than the SSRN method on the IP dataset, and accomplishes highly accurate classification with a parameter count only 1.99% that of the DBMA method.
