Abstract

Artificial neural networks may be deployed to diagnose illnesses, including mental illness. One such mental illness is schizophrenia, which is characterized by persistent delusions, hallucinations, disorganized speech, highly disorganized or catatonic behaviour, and negative symptoms. It is commonly believed that one or two hidden layers in a neural network are sufficient to classify data, and that additional hidden layers should be avoided because of the longer time the network takes to converge. However, we demonstrate that beyond a certain size of the hidden layer(s), it is harmful to deploy more than one layer: not only do the network parameters take longer to converge, but classification performance also deteriorates sharply with more than one hidden layer.

Highlights

  • Schizophrenia is a serious mental disorder [1] that may be diagnosed by doing a psychometric test on the subject

  • The number of nodes in the input layer is equal to the dimension of the input data, which in our case is thirty; the number of nodes in the output layer is one; and the number of nodes in the hidden layer, as well as the number of hidden layers, may be varied to obtain optimum classification performance from the multi-layer perceptron (MLP)

  • The size of the hidden layer should lie between the sizes of the input and output layers
Summary

INTRODUCTION

Schizophrenia is a serious mental disorder [1] that may be diagnosed by administering a psychometric test to the subject. A simple perceptron may be deployed to classify data that is linearly separable, but when the classes are not linearly separable, the data must be cast into a higher dimension [8]. This is where the multi-layer perceptron (MLP) comes in. The MLP has one input layer, one output layer and one or more hidden layers. The number of nodes in the input layer is equal to the dimension of the input data, which in our case is thirty; the number of nodes in the output layer is one; and the number of nodes in the hidden layer, as well as the number of hidden layers, may be varied to obtain optimum classification performance from the MLP. The size of the hidden layer should lie between the sizes of the input and output layers. As for the number of hidden layers, the common consensus is that one or two are adequate for most situations [15].
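The architecture described above can be sketched as a forward pass in NumPy, assuming the paper's setup: a thirty-node input layer, a single output node, and one hidden layer of tunable size. The hidden size of 16, the sigmoid activations, and the random weights below are illustrative assumptions, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    """Logistic activation, squashing values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass: input -> hidden (sigmoid) -> single sigmoid output."""
    hidden = sigmoid(x @ w_hidden + b_hidden)
    return sigmoid(hidden @ w_out + b_out)

n_inputs, n_hidden = 30, 16            # 30 input features; 16 is an assumed hidden size
w_hidden = rng.normal(size=(n_inputs, n_hidden))
b_hidden = np.zeros(n_hidden)
w_out = rng.normal(size=(n_hidden, 1))
b_out = np.zeros(1)

x = rng.normal(size=(5, n_inputs))     # 5 hypothetical subjects' psychometric scores
y = mlp_forward(x, w_hidden, b_hidden, w_out, b_out)
print(y.shape)                         # one probability-like score per subject
```

Each output lies in (0, 1) and could be thresholded at 0.5 for a binary diagnosis; adding further hidden layers would simply chain more `sigmoid(h @ w + b)` steps between input and output.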

METHOD
RESULTS AND DISCUSSION
CONCLUSION