Abstract

Auroras are luminous phenomena that occur when high-energy particles from the magnetosphere and solar wind are guided along Earth's magnetic field into the upper atmosphere, where they collide with atmospheric atoms. The morphological and temporal characteristics of auroras are essential for studying large-scale magnetospheric processes. Although auroras are visible to the naked eye from the ground, scientists use deep learning algorithms to analyze all-sky images to understand this phenomenon better. However, current algorithms use global features inefficiently and fail to fuse the local and global feature representations extracted from aurora images. Hence, this paper introduces a Hash-Transformer model based on the Vision Transformer for aurora retrieval from all-sky images. Experimental results on real-world data demonstrate that the proposed method effectively improves aurora image retrieval performance. It provides a new avenue for studying aurora phenomena and facilitates the development of related fields.
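To make the retrieval idea concrete, the sketch below shows how binary hash codes enable fast image retrieval by Hamming distance. This is a minimal illustration, not the paper's Hash-Transformer: the learned Vision Transformer encoder is replaced here by an assumed random projection, and the image descriptors are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(features, projection):
    """Binarize real-valued features into {0, 1} hash codes by sign thresholding."""
    return (features @ projection > 0).astype(np.uint8)

def hamming_rank(query_code, database_codes):
    """Rank database entries by Hamming distance to the query code."""
    dists = np.count_nonzero(database_codes != query_code, axis=1)
    return np.argsort(dists), dists

# Toy stand-in for aurora image descriptors: 100 database images, 512-d each.
# In the paper's setting these would come from a trained transformer encoder.
features_db = rng.standard_normal((100, 512))
projection = rng.standard_normal((512, 64))  # assumed encoder: 64-bit codes
codes_db = hash_codes(features_db, projection)

# Query with a slightly perturbed copy of image 42; it should rank first,
# since small feature noise rarely flips the sign of a projection.
query = features_db[42] + 0.05 * rng.standard_normal(512)
query_code = hash_codes(query[None, :], projection)[0]
order, dists = hamming_rank(query_code, codes_db)
print("nearest image index:", order[0])
```

Compact binary codes are what make retrieval from large all-sky archives cheap: comparing 64-bit codes is a bitwise XOR and popcount, far faster than distances between full floating-point descriptors.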
