Abstract

This article proposes a new learning paradigm based on the concept of concordant gradients for ensemble learning strategies. In this paradigm, learners update their weights if and only if the gradients of their cost functions are mutually concordant, in a sense defined in the paper. The objective of the proposed concordant optimization framework is robustness against uncertainties: examples associated with discordant gradient directions are postponed to a later epoch of the training phase. Concordance-constrained collaboration is shown to be relevant, especially in intricate classification problems where exclusive class labeling involves information bias due to correlated disturbances affecting almost all training examples. The first learning paradigm applies to a gradient descent strategy based on allied agents, subjected to concordance checking before moving forward in training epochs. The second learning paradigm relates to multivariate dense neural matrix fusion, where the fusion operator is itself a learnable neural operator. In addition to these paradigms, the article proposes a new categorical probability transform that enriches the existing collection and offers an alternative scenario for integrating penalized SoftMax information. Finally, the article assesses the relevance of the above contributions with respect to several deep learning frameworks and a collaborative classification task involving dependent classes.
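
The abstract does not specify the concordance criterion. As a minimal sketch only, the Python snippet below assumes a cosine-similarity test between the gradients of two allied learners; the function names `concordant` and `concordant_step`, the threshold, and the learning rate are all hypothetical illustrations of how a gated update could postpone discordant examples, not the paper's actual method.

```python
import numpy as np

def concordant(g1: np.ndarray, g2: np.ndarray, threshold: float = 0.0) -> bool:
    """Return True when two gradients point in concordant directions.

    Concordance is approximated here by cosine similarity strictly above
    `threshold` (assumption); the paper's exact criterion may differ.
    """
    denom = np.linalg.norm(g1) * np.linalg.norm(g2)
    if denom == 0.0:
        return False
    return float(g1 @ g2) / denom > threshold

def concordant_step(weights, grads, lr=0.1):
    """One gated update for a pair of allied learners.

    Both learners step along their own gradients only if those gradients
    are mutually concordant; otherwise the example is left for a later epoch.
    """
    (w1, w2), (g1, g2) = weights, grads
    if concordant(g1, g2):
        return (w1 - lr * g1, w2 - lr * g2), True   # concordant: step taken
    return (w1, w2), False                           # discordant: postponed
```

Under this reading, a training loop would collect the examples for which `concordant_step` returns `False` and revisit them in a subsequent epoch, which is one plausible way to realize the postponement described above.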
