Abstract

This paper presents a novel cognitive few-shot learning (CFSL) framework for the diagnosis of cleft lip and palate and Parkinson’s disease. The proposed method uses computational analysis of paralinguistic features to expedite the diagnostic process. Unlike other methods that rely on complex and fragmented representations, CFSL learns to recognize patterns that are easily interpretable by humans. Rather than learning a single, unstructured metric space, CFSL combines the outputs of individual landmark (LM) learners by mapping LMs into semi-formation spaces. To assess the effectiveness of CFSL, we conducted a comparative analysis with seven distinct FSL-based models: momentum contrastive learning for FSL (MCFSL), self-updating FSL (SUFSL), multi-attention mutual information FSL (MAMIFSL), dual class representation FSL (DCRFSL), improved FSL (IFSL), meta-knowledge for FSL (MKFSL), and prototypical networks (ProtoNet), using three popular datasets: GPRS, CIEMPIESS, and PC-GITA. The findings indicate that CFSL outperformed the strongest baseline frameworks in the 5-shot (5-sh) and 1-shot (1-sh) settings, with average improvements of 4.39% and 4.49%, respectively. CFSL also surpassed the ProtoNet baseline in both 1-sh and 5-sh settings across all datasets, with improvements of 12.966% and 11.033%, respectively. In addition, we performed ablation studies to assess the effects of factors such as LM density, network structure, the distance measure used, and LM positioning. The CFSL approach, if adopted in hospitals, has the potential to enhance the precision and efficiency of diagnosis for cleft lip and palate as well as Parkinson’s disease.
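
The abstract does not detail CFSL's landmark-learner combination, but the ProtoNet baseline it compares against is a standard, well-documented method. The sketch below illustrates the metric-space episode that ProtoNet (and, by extension, the compared FSL methods) builds on: class prototypes are computed as mean support embeddings, and queries are classified by distance to those prototypes. The `encoder` network is an assumed placeholder, not part of the paper.

```python
# Minimal sketch of one prototypical-network (ProtoNet) episode, the
# baseline named in the abstract. CFSL's own landmark-space combination
# is not specified here; this only shows the shared metric-space idea.
import torch
import torch.nn.functional as F

def protonet_episode(encoder, support_x, support_y, query_x, n_way):
    """Classify query samples by distance to class prototypes.

    support_x: [n_way * k_shot, ...] support inputs
    support_y: [n_way * k_shot] integer labels in [0, n_way)
    query_x:   [n_query, ...] query inputs
    """
    z_support = encoder(support_x)   # [N*K, d] support embeddings
    z_query = encoder(query_x)       # [Q, d] query embeddings

    # Prototype = mean embedding of each class's support examples.
    prototypes = torch.stack(
        [z_support[support_y == c].mean(dim=0) for c in range(n_way)]
    )                                # [n_way, d]

    # Squared Euclidean distance from each query to each prototype;
    # lower distance -> higher class score under the softmax.
    dists = torch.cdist(z_query, prototypes) ** 2   # [Q, n_way]
    return F.log_softmax(-dists, dim=1)
```

In a 1-sh, 5-way episode, `support_x` would hold one example per class (five in total), matching the 1-shot setting evaluated above; the distance measure here is one of the ablated factors the abstract mentions.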
