Abstract
A novel loss function for training a network of K single-layer perceptrons (KSLPs) is suggested, into which a pairwise misclassification cost matrix can be incorporated directly. The complexity of the network remains unchanged, and computing the gradient of the loss function requires no additional calculations. Minimizing the loss requires fewer training epochs. The efficacy of cost-sensitive methods depends on the cost matrix, the overlap of the pattern classes, and the sample sizes. Experiments with real-world pattern recognition (PR) tasks show that the novel loss function usually outperforms three benchmark methods.
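The abstract does not reproduce the paper's exact loss, so the sketch below is only a hedged illustration of the general idea it describes: a loss for a K-class softmax-output network that weighs errors by a pairwise misclassification cost matrix, here taken as the mean expected misclassification cost. All names (`cost_sensitive_loss`, `cost_sensitive_grad`), the softmax parameterization, and the specific form of the loss are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cost_sensitive_loss(W, b, X, y, C):
    """Mean expected misclassification cost (illustrative, not the paper's loss).

    W: (d, K) weights, b: (K,) biases -- one single-layer perceptron per class
    X: (n, d) inputs, y: (n,) integer class labels
    C: (K, K) cost matrix, C[i, j] = cost of labeling class i as class j
    """
    P = softmax(X @ W + b)               # (n, K) class probabilities
    return np.mean(np.sum(C[y] * P, axis=1))

def cost_sensitive_grad(W, b, X, y, C):
    """Gradient w.r.t. W and b; same order of work as the plain softmax loss."""
    n = X.shape[0]
    P = softmax(X @ W + b)
    cost = C[y]                          # (n, K) per-sample rows of the cost matrix
    expected = np.sum(cost * P, axis=1, keepdims=True)
    dZ = P * (cost - expected) / n       # chain rule through the softmax
    return X.T @ dZ, dZ.sum(axis=0)
```

Note how the gradient reuses the same forward quantities as an ordinary softmax loss, which is consistent with the abstract's claim that incorporating the cost matrix adds no extra gradient computation.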