Abstract
Super-resolution images are highly desirable, both for numerous analytical purposes and for their superior visual quality. An image super-resolution technique creates a High-Resolution (HR) image from one or more Low-Resolution (LR) images. In natural environments and settings, high-resolution images are not always easy to obtain, chiefly because of limitations in acquisition methods. High-resolution images are, however, always required in domains such as forensic investigation, remote sensing, digital monitoring, and medical imaging. Modern methods based on deep learning models have improved performance compared with classic image processing methods. This paper proposes a novel technique for improving low-resolution natural images of size 256 × 256 to high-resolution images, using a convolutional auto-encoder architecture with six parallel skip connections. The parallel skip connections between the encoder and decoder allow the network to reconstruct high-resolution images while still extracting relevant features from the low-resolution inputs. Furthermore, the network is carefully tuned using appropriate filters and regularization techniques. To create high-resolution output images, the decoder component makes use of the compressed latent-space representation of the features. The model was trained on data sets such as CARS DATA and DIV 2K, and evaluated on DIV 2K, CARS DATA, Set5, Set14, and the General data set. The proposed method is compared with various current methods using Peak Signal-to-Noise Ratio, Structural Similarity Index, Mean Squared Error, and model behavior across numerous data sets. Results reveal that the proposed model outperforms the existing methods and is reliable.
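The abstract does not include code, but the core idea of an encoder–decoder with parallel skip connections can be illustrated with a toy sketch. The sketch below is an assumption-laden simplification (plain NumPy, average pooling in place of learned convolutions, nearest-neighbour upsampling in place of learned deconvolutions, three stages rather than the paper's six skips); it is meant only to show how each decoder stage fuses the matching encoder feature map so that fine detail from the input survives the bottleneck.

```python
import numpy as np

def avg_pool2(x):
    """Downsample by 2 with average pooling (stand-in for an encoder conv stage)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Upsample by 2 with nearest-neighbour repetition (stand-in for a decoder stage)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def autoencoder_with_skips(x, depth=3):
    """Toy encoder-decoder: each decoder stage adds back the encoder
    feature map of matching resolution - a parallel skip connection."""
    skips = []
    for _ in range(depth):           # encoder: downsample, remember features
        skips.append(x)
        x = avg_pool2(x)
    # x is now the compressed latent representation
    for skip in reversed(skips):     # decoder: upsample, fuse skip features
        x = upsample2(x) + skip
    return x

img = np.random.rand(256, 256)       # a 256 x 256 "low-resolution" input
out = autoencoder_with_skips(img)
print(out.shape)                     # (256, 256): spatial size is restored
```

Because every decoder output is summed with an encoder map of the same resolution, the skip path carries high-frequency detail directly across the bottleneck; in the actual model this fusion happens between learned convolutional feature maps rather than raw images.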