Abstract

Advances in virtual reality technology have opened new possibilities in medicine. Virtual surgery training simulators alleviate the scarcity of training resources and the high cost of traditional surgical training; yet doctors must continue their education regardless of how they were initially trained, the postoperative evaluation mechanism remains incomplete, and traditional objective evaluation indicators cannot meet surgeons' stringent requirements. This article proposes the electroencephalogram (EEG) rhythm index as a new metric for evaluating and distinguishing novice and expert surgeons. Using a cutting training module from a neurosurgery training simulator, the experiment classifies testers with both the proposed metric and the established assessment metrics and finds that the new metric raises classification accuracy by 20%. The article also compares the energy topographic maps of the different EEG rhythms of novices and experts. Two machine learning algorithms, SVM and random forest, are used for classification; the results show that, regardless of the classifier, indicators based on EEG rhythms distinguish the two groups with 10% higher accuracy than traditional objective evaluation indicators. ROC curve analysis was also used to compare the two classification models: the AUC for the EEG rhythm evaluation index model was 0.971, versus 0.761 for the traditional objective evaluation index model, demonstrating that the EEG rhythm evaluation index model provides a more reliable classification standard.
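
As a rough illustration of the evaluation protocol the abstract describes, the sketch below trains both an SVM and a random forest on two feature sets (EEG rhythm band powers versus traditional objective metrics) and compares the models by ROC AUC. This is a minimal sketch under stated assumptions: the feature names, data shapes, and synthetic data are placeholders for illustration, not the study's actual data or code.

```python
# Hypothetical sketch of the classification comparison: SVM and random
# forest trained on EEG-rhythm features vs. traditional objective metrics,
# evaluated by ROC AUC. All data below is synthetic placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder cohort: 60 testers, binary labels (0 = novice, 1 = expert).
# X_eeg stands in for EEG band-power features (delta/theta/alpha/beta/gamma);
# X_trad stands in for traditional metrics (e.g., completion time, errors).
y = rng.integers(0, 2, size=60)
X_eeg = rng.normal(size=(60, 5)) + y[:, None] * 0.8   # synthetic class separation
X_trad = rng.normal(size=(60, 3)) + y[:, None] * 0.3

def evaluate(X, y, name):
    """Fit both classifiers on a held-out split and report ROC AUC."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0
    )
    models = [
        ("SVM", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("Random forest", RandomForestClassifier(random_state=0)),
    ]
    for label, model in models:
        model.fit(X_tr, y_tr)
        scores = model.predict_proba(X_te)[:, 1]  # probability of "expert"
        print(f"{name} / {label}: AUC = {roc_auc_score(y_te, scores):.3f}")

evaluate(X_eeg, y, "EEG rhythm features")
evaluate(X_trad, y, "Traditional objective metrics")
```

On the study's data, this kind of comparison is what yields the reported AUC gap (0.971 for the EEG rhythm index model versus 0.761 for the traditional index model); with the synthetic placeholders above, the numbers will differ.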
