Abstract
Auroras are luminous phenomena produced when high-energy particles from the magnetosphere and solar wind are guided into Earth's atmosphere by the magnetic field and collide with atoms in the upper atmosphere. The morphological and temporal characteristics of auroras are essential for studying large-scale magnetospheric processes. Although auroras are visible to the naked eye from the ground, scientists analyze all-sky images with deep learning algorithms to understand the phenomenon better. However, current algorithms utilize global features inefficiently and neglect the fusion of the local and global feature representations extracted from aurora images. Hence, this paper introduces a Hash-Transformer model based on Vision Transformer for aurora retrieval from all-sky images. Experimental results on real-world data demonstrate that the proposed method effectively improves aurora image retrieval performance. It provides a new avenue for studying aurora phenomena and facilitates the development of related fields.
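The abstract does not specify the Hash-Transformer's internals, but the general idea of hash-based image retrieval it builds on can be illustrated. The sketch below (not the authors' model; all names and parameters are hypothetical) binarizes continuous feature vectors, such as those a Vision Transformer backbone would produce for all-sky images, into compact binary codes via a projection, then ranks database images by Hamming distance to a query code. In a real deep-hashing model the projection would be learned end to end; here a random projection stands in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for ViT embeddings of all-sky aurora images:
# each row is a 64-dim feature vector for one database image.
db_features = rng.normal(size=(100, 64))
# A query that is a slightly perturbed copy of database image 42.
query = db_features[42] + 0.01 * rng.normal(size=64)

# Projection into 32-bit codes; learned in a real deep-hashing model,
# random here purely for illustration.
proj = rng.normal(size=(64, 32))

def to_hash(x):
    """Binarize features into a compact binary code (sign of projection)."""
    return (x @ proj > 0).astype(np.uint8)

db_codes = to_hash(db_features)
q_code = to_hash(query)

# Rank database images by Hamming distance to the query's binary code;
# the near-duplicate image 42 should land at or near the top.
hamming = (db_codes != q_code).sum(axis=1)
ranking = np.argsort(hamming)
print(ranking[:5])
```

Binary codes make retrieval cheap: Hamming distances are bitwise operations, so large all-sky image archives can be scanned far faster than with full floating-point feature comparisons.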