More accurate sun position estimation could transform the design and operation of solar power systems, weather forecasting services, and outdoor augmented reality systems. Although several image-based approaches to sun position estimation have been proposed, their performance degrades significantly under momentary disruptions in cloud cover because they rely on only a single image as input. This study proposes a deep learning-based sun position estimation system that leverages spatial, temporal, and geometric features to accurately regress sun positions even when the sun is partially or entirely occluded. In the proposed approach, spatial features are extracted from an input image sequence by applying a ResNet-based convolutional network to each frame. To capture temporal changes in the brightness distribution across frames, the spatial features are concatenated and passed to a stack of LSTM layers before the final sun position is regressed. The network is also trained with elliptical (geometric) constraints so that predicted sun positions remain consistent with the natural elliptical path of the sun across the sky. The proposed approach was evaluated on the SIRTA and Laval datasets along with a custom dataset, achieving an R² score of 0.98, at least 0.1 higher than that of previous approaches. The approach identifies the position of the sun even when it is occluded and was employed in a novel sky imaging system consisting of only a camera and a fisheye lens in place of a complex array of sensors.
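To make the described pipeline concrete, the sketch below shows one plausible PyTorch realization of the CNN-LSTM architecture and the elliptical constraint. It is a minimal illustration under stated assumptions, not the paper's implementation: the backbone choice (ResNet-18), hidden size, number of LSTM layers, the loss weight `lambda_geo`, and the exact form of the geometric penalty are all illustrative assumptions.

```python
# Minimal sketch of a per-frame ResNet encoder feeding stacked LSTM layers,
# followed by a regression head for the sun position. All hyperparameters
# and the ellipse penalty below are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models


class SunPositionNet(nn.Module):
    """Per-frame ResNet features -> stacked LSTM -> (azimuth, elevation)."""

    def __init__(self, hidden_size=256, num_lstm_layers=2):
        super().__init__()
        backbone = models.resnet18(weights=None)    # assumed backbone depth
        feat_dim = backbone.fc.in_features          # 512 for ResNet-18
        backbone.fc = nn.Identity()                 # keep the 512-d features
        self.encoder = backbone
        self.lstm = nn.LSTM(feat_dim, hidden_size,
                            num_layers=num_lstm_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)       # regress 2-D sun position

    def forward(self, frames):                      # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1))  # encode each frame
        feats = feats.view(b, t, -1)                # re-stack as a sequence
        seq, _ = self.lstm(feats)                   # temporal brightness cues
        return self.head(seq[:, -1])                # predict from last step


def elliptical_loss(pred, ellipse):
    """Hypothetical geometric penalty: squared deviation of predictions from
    an ellipse (x-cx)^2/a^2 + (y-cy)^2/b^2 = 1 modeling the sun's sky path."""
    cx, cy, a, b = ellipse
    x, y = pred[:, 0] - cx, pred[:, 1] - cy
    return (((x / a) ** 2 + (y / b) ** 2 - 1.0) ** 2).mean()
```

In training, such a penalty would presumably be combined with a standard regression loss, e.g. `loss = mse(pred, target) + lambda_geo * elliptical_loss(pred, ellipse)`, with the ellipse parameters fitted to the day's solar trajectory; this weighting scheme is likewise an assumption rather than a detail reported in the abstract.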