Abstract
The performance gains achieved by deep learning models today are largely attributed to the use of ever larger datasets. In this study, we present and contrast the performance gains achievable by accessing larger high-quality datasets against those achievable by harnessing the latest deep learning architectural and training advances. Modelling neonatal EEG is particularly affected by the lack of large publicly available datasets. We show that when AUC is adopted as the metric, greater performance gains come from harnessing the latest deep learning advances than from using a larger training dataset, whereas under AUC90 or AUC-PR, greater gains come from using a larger dataset than from the deep learning advances. In all scenarios, the best performance is obtained by combining both deep learning advances and larger datasets. A novel architecture is presented that outperforms the current state-of-the-art model for the task of neonatal seizure detection. A novel method based on pseudo-labelling for fine-tuning the presented model towards site-specific settings is also outlined. The code and the weights of the model are made publicly available for benchmarking future model performances for neonatal seizure detection.
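The abstract contrasts three evaluation metrics: full AUC, AUC90, and AUC-PR. As a minimal sketch of how these could be computed from per-epoch seizure probabilities, the snippet below uses scikit-learn; the synthetic labels and scores are illustrative, and AUC90 is assumed here to mean the (standardised) partial AUC restricted to false-positive rates below 0.1, i.e. specificity above 90%.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for a detector's output (purely illustrative):
# y_true: 1 = seizure epoch, 0 = background; y_score: predicted probabilities.
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, size=1000), 0.0, 1.0)

auc = roc_auc_score(y_true, y_score)                 # full area under the ROC curve
auc90 = roc_auc_score(y_true, y_score, max_fpr=0.1)  # partial AUC for FPR <= 0.1 (assumed AUC90)
auc_pr = average_precision_score(y_true, y_score)    # area under the precision-recall curve

print(f"AUC={auc:.3f}  AUC90={auc90:.3f}  AUC-PR={auc_pr:.3f}")
```

Reporting all three is informative because AUC-PR and the high-specificity partial AUC are more sensitive to class imbalance, which is pronounced in neonatal EEG where seizure epochs are rare.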