Abstract
When planning the development of future energy resources, electrical infrastructure, transportation networks, agriculture, and many other societally significant systems, policy makers require accurate, high-resolution data reflecting different climate scenarios. There is wide evidence that perceptual loss produces perceptually realistic results when mapping low-resolution inputs to high-resolution outputs, but its application has so far been limited to images. In this paper, we study perceptual loss when increasing the resolution of raw precipitation data by ×4 and ×8 under both CNN and GAN training regimes. We examine the differences in perceptual loss computed from different layers of feature maps and show that low- and mid-level feature maps can yield results comparable to pixel-wise loss. In particular, from both qualitative and quantitative points of view, Conv2_1 and Conv3_1 offer the best compromise in our case between capturing detailed information and keeping the overall error low. We further propose a new approach that benefits from perceptual loss while accounting for the characteristics of climate data. We show that, compared to computing perceptual loss directly over the entire sample, our approach recovers detailed information in extreme-event regions while reducing error.
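As a reading aid only, the sketch below shows how a single-layer perceptual loss of the kind discussed above might be computed in PyTorch. The use of a VGG-19 feature extractor, the torchvision layer indices chosen for Conv2_1 and Conv3_1, and the channel replication for single-channel precipitation fields are all assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19


class SingleLayerPerceptualLoss(nn.Module):
    """Feature-space (perceptual) loss between a super-resolved sample and its
    high-resolution target, computed at one chosen feature layer.

    The indices below locate Conv2_1 and Conv3_1 in torchvision's VGG-19
    `features` module; both the backbone and the indices are assumptions.
    """

    LAYER_INDEX = {"conv2_1": 5, "conv3_1": 10}

    def __init__(self, layer: str = "conv2_1"):
        super().__init__()
        cutoff = self.LAYER_INDEX[layer] + 1
        # Frozen, truncated feature extractor up to (and including) the chosen layer.
        self.features = vgg19(weights="IMAGENET1K_V1").features[:cutoff].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.criterion = nn.MSELoss()

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        # Precipitation fields are single-channel; replicate to three channels
        # so they can pass through an ImageNet-pretrained extractor.
        sr3 = sr.repeat(1, 3, 1, 1) if sr.shape[1] == 1 else sr
        hr3 = hr.repeat(1, 3, 1, 1) if hr.shape[1] == 1 else hr
        return self.criterion(self.features(sr3), self.features(hr3))
```

In practice such a term is typically combined with a pixel-wise loss, e.g. `loss = l1(sr, hr) + lam * perceptual(sr, hr)`, with the layer choice (here Conv2_1 or Conv3_1) trading off detail against overall error.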