Abstract

Improving overall performance is the ultimate goal of any machine learning (ML) algorithm. While exploring multiple individual validation measurements is trivial, evaluating and monitoring overall performance can be complicated by the highly nonlinear relationships among different validation metrics, such as the Dice Similarity Coefficient (DSC) and the Jaccard Index (JI). It is therefore desirable to have a reliable validation algorithm or model that integrates all existing validation metrics into a single value. This consolidated metric would enable straightforward assessment of an ML algorithm’s performance and identify areas for improvement. To deal with such a complex nonlinear problem, this study proposes a novel parameterized model based on the Adaptive Neuro-Fuzzy Inference System (ANFIS), which takes any set of precise–imprecise input–output data and uses a neuro-adaptive learning strategy to tune the parameters of pre-defined membership functions. Our method offers an elegant, state-of-the-art approach to nonlinear function approximation and can be added directly to the loss function of any convolutional neural network (CNN) as a regularization term, yielding a constrained CNN-FUZZY model optimization. To demonstrate the ability of the proposed method and give a practical account of ANFIS's capability, we use deep CNNs as a test platform, motivated by the fact that one of the biggest challenges CNN developers face today is reducing the mismatch between the provided input data and the predicted results as monitored by different validation metrics. We first create a toy dataset from MNIST and investigate the properties of the proposed model. We then use a medical dataset to demonstrate our method's efficacy on brain lesion segmentation. On both datasets, our method yields reliable validation results that guide researchers toward choosing performance metrics in a problem-aware manner, especially when different validation metrics are too similar across models to determine the best one.
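The nonlinearity the abstract refers to is concrete: DSC and JI, for instance, are linked by the nonlinear relation DSC = 2·JI / (1 + JI). To make the described approach tangible, the following is a minimal sketch of a first-order Sugeno-style ANFIS that fuses several metric values (e.g., DSC and JI) into one consolidated score, with Gaussian membership-function parameters tuned by gradient descent and the fused score attached to a CNN loss as a regularization term. All names and values here (FuzzyMetricFusion, n_rules, the 0.1 weight) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): a first-order Sugeno-style
# ANFIS that fuses validation metrics such as [DSC, JI] into one score.
import torch
import torch.nn as nn

class FuzzyMetricFusion(nn.Module):
    def __init__(self, n_inputs: int, n_rules: int = 4):
        super().__init__()
        # Gaussian membership-function parameters (centers and widths),
        # tuned by the neuro-adaptive learning step (here: plain backprop).
        self.centers = nn.Parameter(torch.rand(n_rules, n_inputs))
        self.widths = nn.Parameter(0.5 * torch.ones(n_rules, n_inputs))
        # First-order Sugeno consequents: one linear function per rule.
        self.consequent = nn.Linear(n_inputs, n_rules)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_inputs) metric values in [0, 1].
        diff = x.unsqueeze(1) - self.centers              # (batch, rules, inputs)
        member = torch.exp(-(diff ** 2) / (2 * self.widths ** 2 + 1e-8))
        firing = member.prod(dim=-1)                      # product T-norm
        weights = firing / (firing.sum(dim=-1, keepdim=True) + 1e-8)
        # Normalized-weight average of rule outputs -> consolidated score.
        return (weights * self.consequent(x)).sum(dim=-1)

# Usage: attach the fused score to a CNN loss as a regularization term.
fusion = FuzzyMetricFusion(n_inputs=2)
metrics = torch.tensor([[0.85, 0.74]])   # e.g., [DSC, JI] for one batch
seg_loss = torch.tensor(0.31)            # placeholder segmentation loss
total_loss = seg_loss + 0.1 * (1.0 - fusion(metrics).mean())
```

In this sketch the membership parameters and rule consequents are ordinary learnable tensors, so the "neuro-adaptive" tuning the abstract mentions reduces to standard backpropagation through the fused score.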
