Abstract

We present an approach for improving DNN solutions by running multiple instances of the same training procedure, identical in every respect except for the random seed used for weight initialization. We show that significant improvements in accuracy can be achieved with the proposed approach. Additionally, we test two simple stopping criteria that aim to identify the best-performing network in an early stage of training. This allows us to save the majority of computational resources, as only one network is fully trained while the other instances are terminated in an early phase. We evaluated repeated training with 20 repetitions combined with a global and a gradual stopping rule. Repeated training with global stopping at approximately 1% of the average training time can beat the average-performing network, and stopping at approximately 10% of the average training time can significantly outperform the average network. Furthermore, this approach requires no additional manual work and only a small amount of additional computation.
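The abstract specifies the global stopping variant only by its budget fractions (train all seeds for roughly 1% or 10% of the full training time, then continue only the best early performer). The following is a minimal sketch of that idea on a toy PyTorch classifier; the model, synthetic data, epoch counts, and helper names (`make_model`, `train`, `val_accuracy`) are hypothetical stand-ins, not the authors' implementation.

```python
# Sketch of repeated training with a global stopping rule: many seeds are trained
# for a short early budget, and only the best early performer is trained fully.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 20)                       # synthetic training data (stand-in)
y = (X.sum(dim=1) > 0).long()
X_val = torch.randn(128, 20)                   # synthetic validation data (stand-in)
y_val = (X_val.sum(dim=1) > 0).long()

def make_model(seed):
    torch.manual_seed(seed)                    # only the weight-init seed differs between instances
    return nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

def train(model, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

def val_accuracy(model):
    with torch.no_grad():
        return (model(X_val).argmax(dim=1) == y_val).float().mean().item()

TOTAL_EPOCHS = 1000                            # assumed full training budget
EARLY_FRACTION = 0.01                          # ~1% budget, per the abstract's figures
N_SEEDS = 20                                   # 20 repetitions, per the abstract
early_epochs = max(1, int(TOTAL_EPOCHS * EARLY_FRACTION))

# Phase 1 (global stopping): train every instance only for the short early budget.
candidates = []
for seed in range(N_SEEDS):
    model = make_model(seed)
    train(model, early_epochs)
    candidates.append((val_accuracy(model), seed, model))

# Phase 2: keep the best early performer and spend the remaining budget on it alone.
best_acc, best_seed, best_model = max(candidates, key=lambda t: t[0])
train(best_model, TOTAL_EPOCHS - early_epochs)
print(f"selected seed {best_seed}; final val accuracy {val_accuracy(best_model):.3f}")
```

Under these assumptions, the extra cost over a single training run is only N_SEEDS × EARLY_FRACTION of one full run, which is the "small amount of additional computation" the abstract refers to.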
