Abstract
This paper describes a neural network-based technique for lossless compression of multispectral SPOT satellite images. The technique harnesses the pattern-recognition capability of one-hidden-layer back-propagation neural networks to exploit both the spatial and the spectral redundancy of the three-band SPOT images. The networks are first trained on samples of the SPOT images, with a separate network for each band. The resulting trained nonlinear predictors are then used to predict the target SPOT images, and the prediction errors are entropy-coded using multi-symbol arithmetic coding. This technique achieves compression ratios of 2.1 and 3.2 for urban and rural SPOT images respectively, which is more than 10% better than lossless JPEG compression. In comparison with JPEG2000 lossless compression, the proposed technique is 5% better.
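Read as an algorithm, the pipeline is: train a one-hidden-layer back-propagation network as a nonlinear pixel predictor over a spatial and spectral context, then entropy-code the integer prediction residuals. The sketch below illustrates only the prediction stage in Python; the causal context used here (west, north, and north-west neighbours plus the co-located pixel of a previously coded band), the hidden-layer size, learning rate, and training length are illustrative assumptions rather than the paper's settings, and the arithmetic-coding stage is omitted.

```python
# Sketch of the prediction stage. The context, hidden size, and training
# schedule are assumptions for illustration, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

def contexts_and_targets(band, ref_band):
    """Build (context, target) pairs for every pixel with a full causal context."""
    H, W = band.shape
    X, y = [], []
    for r in range(1, H):
        for c in range(1, W):
            # Spatial context (W, N, NW) plus spectral context (co-located
            # pixel in a previously coded band), all already known to the decoder.
            X.append([band[r, c-1], band[r-1, c], band[r-1, c-1], ref_band[r, c]])
            y.append(band[r, c])
    return np.asarray(X, dtype=np.float64) / 255.0, np.asarray(y, dtype=np.float64) / 255.0

class OneHiddenLayerPredictor:
    """One-hidden-layer network trained with plain back-propagation (gradient descent)."""
    def __init__(self, n_in, n_hidden=8, lr=0.1):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)       # hidden activations
        return (self.h @ self.W2 + self.b2).ravel()    # predicted (normalised) pixel value

    def train(self, X, y, epochs=20):
        for _ in range(epochs):
            pred = self.forward(X)
            err = pred - y                              # gradient of 0.5*MSE w.r.t. output
            dW2 = self.h.T @ err[:, None] / len(y)
            db2 = err.mean()
            dh = (err[:, None] @ self.W2.T) * (1.0 - self.h ** 2)
            dW1 = X.T @ dh / len(y)
            db1 = dh.mean(axis=0)
            self.W2 -= self.lr * dW2; self.b2 -= self.lr * db2
            self.W1 -= self.lr * dW1; self.b1 -= self.lr * db1

# Toy 8-bit bands standing in for one SPOT band and a previously coded band.
band = rng.integers(0, 256, (64, 64)).astype(np.uint8)
ref  = rng.integers(0, 256, (64, 64)).astype(np.uint8)
X, y = contexts_and_targets(band, ref)
net = OneHiddenLayerPredictor(n_in=X.shape[1])
net.train(X, y)

# Integer residuals: what would be handed to the multi-symbol arithmetic coder.
pred = np.clip(np.rint(net.forward(X) * 255.0), 0, 255)
residuals = y * 255.0 - pred
print("residual std:", residuals.std())
```

Because the decoder can rebuild the same causal context and run the same trained network, it reproduces each prediction exactly and recovers the original pixel by adding back the decoded residual, which is what makes the scheme lossless.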