Abstract

In recent years, augmented reality has emerged as a technology with huge potential in image-guided surgery, and its application in brain tumor surgery in particular seems promising. Augmented reality can be divided into two parts: hardware and software. Further, artificial intelligence, and deep learning in particular, have attracted great interest from researchers in the medical field, especially for the diagnosis of brain tumors. In this paper, we focus on the software part of an augmented reality scenario. The main objective of this study was to develop a classification technique based on a deep belief network (DBN) and a softmax classifier to (1) distinguish a benign brain tumor from a malignant one by exploiting the spatial heterogeneity of cancer tumors and homologous anatomical structures, and (2) extract the brain tumor features. Our classification method proceeds in three steps. In the first step, a global affine transformation is applied as a registration preprocessing step so that corresponding locations (voxels, ROIs) yield the same or similar results. In the second step, an unsupervised DBN is trained on unlabeled features. The discriminative subsets of features obtained in the first two steps serve as input to the classifier and are used in the third step for evaluation by a hybrid system combining the DBN and a softmax classifier. For the evaluation, we used data from Harvard Medical School to train the DBN with softmax regression. The model performed well in the classification phase, achieving an accuracy of 97.2%.
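The following is a minimal sketch, not the authors' implementation, of the pipeline the abstract describes: unsupervised RBM layers stand in for the DBN's greedy layer-wise pre-training, and a multinomial logistic regression plays the role of the softmax classifier. The data loading, the affine registration step, the layer sizes, and all hyperparameters are placeholder assumptions.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler


def build_dbn_softmax(n_hidden=(256, 128)):
    """Stack RBMs (unsupervised feature learning) followed by a softmax head."""
    steps = [("scale", MinMaxScaler())]  # RBMs expect inputs scaled to [0, 1]
    for i, n in enumerate(n_hidden):
        steps.append((f"rbm{i}", BernoulliRBM(n_components=n,
                                              learning_rate=0.05,
                                              n_iter=20,
                                              random_state=0)))
    # Softmax/logistic classifier: benign (0) vs. malignant (1)
    steps.append(("softmax", LogisticRegression(max_iter=1000)))
    return Pipeline(steps)


if __name__ == "__main__":
    # Hypothetical data: rows = flattened, affine-registered ROI intensities
    rng = np.random.default_rng(0)
    X = rng.random((200, 64 * 64))    # placeholder 64x64 ROIs
    y = rng.integers(0, 2, size=200)  # placeholder benign/malignant labels
    model = build_dbn_softmax()
    model.fit(X, y)
    print("training accuracy:", model.score(X, y))
```

In this sketch the RBM layers are fitted greedily inside the pipeline and only the final logistic layer sees the labels, which mirrors the abstract's separation of unsupervised feature learning from supervised softmax classification.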
