Abstract

In this chapter, we provide the main elements for implementing deep neural networks in Keras for binary, categorical, and mixed outcomes under feedforward networks, as well as the main practical issues involved in implementing deep learning models with binary response variables. The same practical issues are covered for implementing deep neural networks with categorical and count traits under a univariate framework. We then provide detailed information for implementing multivariate deep learning models for continuous, binary, categorical, count, and mixed outcomes. In all the examples given, the data come from plant breeding experiments that include genomic data. The training process for binary, ordinal, count, and multivariate outcomes is similar to fitting DNN models with univariate continuous outcomes, since once we have the data to be trained, we need to (a) define the DNN model in Keras, (b) configure and compile the model, (c) fit the model, and finally (d) evaluate the prediction performance in the testing set. In the next section, we provide illustrative examples of training a DNN for binary outcomes in Keras R (Chollet and Allaire, Deep Learning with R. Manning Publications, Manning Early Access Program (MEAP), 2017; Allaire and Chollet, keras: R Interface to Keras, 2019).
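To make these four steps concrete, the sketch below outlines a binary-outcome DNN in Keras R. It is a minimal illustration, not code from the chapter: the objects X_trn, y_trn, X_tst, and y_tst (a numeric marker matrix and a 0/1 response, split into training and testing sets) and all layer sizes are hypothetical choices.

```r
library(keras)

# (a) Define the DNN model: two hidden layers and a one-unit
#     sigmoid output layer, as appropriate for a binary response.
model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu",
              input_shape = ncol(X_trn)) %>%
  layer_dense(units = 32, activation = "relu") %>%
  layer_dense(units = 1, activation = "sigmoid")

# (b) Configure and compile the model with a loss suited to
#     binary outcomes and a chosen optimizer.
model %>% compile(
  loss = "binary_crossentropy",
  optimizer = optimizer_adam(),
  metrics = "accuracy"
)

# (c) Fit the model on the training set.
history <- model %>% fit(
  X_trn, y_trn,
  epochs = 50, batch_size = 50,
  validation_split = 0.2, verbose = 0
)

# (d) Evaluate the prediction performance in the testing set.
model %>% evaluate(X_tst, y_tst)
```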

Highlights

  • Before starting with the examples, we explain in general terms the process to follow to train a DNN for binary outcomes

  • There is no specific optimizer for each type of response variable, and as we studied in the previous chapter, there are at least seven optimizers available in the Keras library

  • We need to specify the number of epochs and the batch size: if the training data consist of 1,000 observations and we use a batch size of 50, we will need 20 iterations per epoch (see the sketch after this list)
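As an illustration of the last two highlights (reusing the hypothetical model and X_trn/y_trn objects from the sketch above), the optimizer is chosen when the model is compiled, while the number of epochs and the batch size are set when it is fitted; with 1,000 training observations and a batch size of 50, each epoch performs 1000/50 = 20 weight updates.

```r
# Any of the optimizers in the Keras library can be supplied here,
# e.g., optimizer_sgd(), optimizer_rmsprop(), or optimizer_adam();
# none is tied to a specific type of response variable.
model %>% compile(
  loss = "binary_crossentropy",
  optimizer = optimizer_rmsprop(learning_rate = 0.001),
  metrics = "accuracy"
)

# With 1,000 training observations and batch_size = 50, each
# epoch consists of 1000 / 50 = 20 iterations (weight updates).
model %>% fit(X_trn, y_trn, epochs = 30, batch_size = 50)
```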

Summary

12.1 Training DNN with Binary Outcomes

Before starting with the examples, we explain in general terms the process to follow to train a DNN for binary outcomes.

12.2 Training DNN with Ordinal Outcomes

In (a), the default flag values for some hyperparameters are given; the DNN is defined in (b) and is very similar to the definition of the DNN for binary outcomes, except that in the output layer the softmax activation function is used, which is appropriate for categorical or ordinal data, and, instead of one unit, three are used in the output layer, since this is the number of classes of the DTHD ordinal response variable. Another important difference in the definition of the DNN model is that we used the layer_batch_normalization() function just after specifying each hidden layer. These results give evidence that the prediction performance depends considerably on the value of the hyperparameter called learning rate; a sketch of such a model definition appears below.
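The following is a minimal sketch of such a model, illustrative rather than the chapter's own code: X_trn and the three-class DTHD response y_ord (coded 0, 1, 2) are hypothetical objects, and the response is one-hot encoded with to_categorical() before fitting.

```r
library(keras)

# Learning rate as a tunable hyperparameter (a "flag"); prediction
# performance depends considerably on its value.
lr <- 0.001

# One-hot encode the three-class ordinal DTHD response.
y_trn <- to_categorical(y_ord, num_classes = 3)

model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu",
              input_shape = ncol(X_trn)) %>%
  layer_batch_normalization() %>%   # just after each hidden layer
  layer_dense(units = 32, activation = "relu") %>%
  layer_batch_normalization() %>%
  # Three units with softmax, one per class of the ordinal response.
  layer_dense(units = 3, activation = "softmax")

model %>% compile(
  loss = "categorical_crossentropy",
  optimizer = optimizer_adam(learning_rate = lr),
  metrics = "accuracy"
)

model %>% fit(X_trn, y_trn, epochs = 50, batch_size = 50, verbose = 0)
```

Refitting over a small grid of lr values (e.g., 0.001, 0.01, 0.1) and comparing validation loss is one way to observe the learning-rate sensitivity described above.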

12.3 Training DNN with Count Outcomes
12.4 Training DNN with Multivariate Outcomes
12.4.1 DNN with Multivariate Continuous Outcomes
12.4.2 DNN with Multivariate Binary Outcomes
12.4.3 DNN with Multivariate Ordinal Outcomes
12.4.4 DNN with Multivariate Count Outcomes
12.4.5 DNN with Multivariate Mixed Outcomes