Abstract

The growing computing power offered by technical advances has allowed deep learning models to be applied in various fields of healthcare and medical research, including imaging analysis, text mining, and omics studies. This article explores the basic mechanism of artificial neural networks (ANNs) and, building on that foundation, explains the architecture of popular supervised learning methods such as deep neural networks (DNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs), as well as unsupervised learning methods including variants of autoencoders, restricted Boltzmann machines (RBMs), and deep belief networks (DBNs). Important features of each architecture are explained, such as the convolution and pooling layers of CNNs and the recurrent structure of RNNs. These features tailor each model to different tasks, ranging from image processing to textual analysis. The unique advantages of different forms of neural networks have made deep learning tools the state of the art in a wide range of research studies and medical fields. The construction of these architectures is often carried out with deep learning frameworks, which provide powerful built-in libraries and ease of use. Several popular deep learning frameworks, including TensorFlow and PyTorch, are introduced and compared. Further, the article reviews the current obstacles facing deep learning approaches, including limitations in computation and interpretability. Nevertheless, improvements and hardware optimizations intended to overcome these challenges are emerging rapidly, offering a promising path to more effective implementation of deep learning in the healthcare industry.
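The convolution and pooling operations mentioned above can be illustrated with a minimal NumPy sketch (not part of the reviewed article; the function names and the averaging kernel are illustrative assumptions). A small filter slides over the input to produce a feature map, and max pooling then downsamples that map:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most
    deep learning libraries): slide the kernel over the image and take
    the elementwise product-sum at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(x, size=2):
    """Non-overlapping max pooling with stride equal to the window size."""
    h, w = x.shape
    trimmed = x[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 "image" and a 3x3 averaging filter (illustrative values only).
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.ones((3, 3)) / 9.0

feature_map = conv2d(image, kernel)  # shape (4, 4)
pooled = max_pool2d(feature_map)     # shape (2, 2)
```

In a trained CNN the kernel weights are learned from data rather than fixed; pooling then gives the network a degree of translation invariance while shrinking the feature map.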
