State-of-the-art automatic sleep staging methods have demonstrated reliability comparable to, and time efficiency superior to, manual sleep staging. However, fully automatic black-box solutions are difficult to integrate into clinical workflows because their decision-making processes lack transparency. Transparency is crucial for interaction between automatic methods and the work of sleep experts, i.e., in human-in-the-loop applications. To address these challenges, we propose an automatic sleep staging model (aSAGA) that effectively utilises both electroencephalography and electro-oculography channels while making the uncertainty of its decisions transparent. We validated the model through extensive retrospective testing on a range of datasets, including open-access, clinical, and research-driven sources. Our channel-wise ensemble model, trained on both electroencephalography and electro-oculography signals, demonstrated robustness and the ability to generalise across various types of sleep recordings, including novel self-applied home polysomnography. Additionally, we compared model uncertainty with human uncertainty in sleep staging and evaluated several uncertainty mapping metrics for identifying ambiguous regions, or "grey areas", that may require manual re-evaluation. Validation of this grey-area concept revealed its potential to enhance sleep staging accuracy and to highlight regions in which sleep experts may struggle to reach a consensus. In conclusion, this study provides a technical basis for understanding uncertainty in automatic sleep staging. Our approach has the potential to improve the integration of automatic sleep staging into clinical practice; however, further studies are needed to test the model prospectively in real-world clinical settings and human-in-the-loop scoring applications.
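To make the grey-area idea concrete, the sketch below shows one plausible uncertainty mapping metric: per-epoch predictive entropy computed from ensemble-averaged softmax probabilities, with epochs above a threshold flagged for manual re-evaluation. This is an illustrative assumption, not the metric or threshold actually used by aSAGA; the function name `grey_area_mask` and the threshold value are hypothetical.

```python
import numpy as np

def grey_area_mask(probs: np.ndarray, threshold: float) -> np.ndarray:
    """Flag ambiguous sleep epochs via predictive entropy.

    probs: (n_epochs, n_classes) ensemble-averaged softmax
           probabilities over sleep stages (hypothetical input format).
    threshold: entropy value (in nats) above which an epoch is
               treated as a "grey area" needing manual review.
    Returns a boolean mask of shape (n_epochs,).
    """
    eps = 1e-12  # avoid log(0) for confident (near one-hot) predictions
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return entropy > threshold

# A uniform distribution over 5 stages (maximal uncertainty, entropy ln 5)
# is flagged; a near-one-hot prediction (low entropy) is not.
probs = np.array([
    [0.20, 0.20, 0.20, 0.20, 0.20],     # ambiguous epoch
    [0.97, 0.01, 0.01, 0.005, 0.005],   # confident epoch
])
mask = grey_area_mask(probs, threshold=1.0)
```

In a human-in-the-loop workflow, only the flagged epochs would be routed to a sleep expert, while confidently scored epochs are accepted automatically; the threshold trades off review workload against staging accuracy.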