Options for improving the extrapolation power of a neural network designed with the SchNetPack package with respect to the prediction of top docking scores are presented. It is shown that hyperparameter tuning of the atomistic model representation (in schnetpack.representation) improves the prediction of the top scoring compounds, which characteristically have a low incidence in the randomized data sets used to train machine learning models. The prediction robustness is evaluated in terms of the mean square error (MSE) and the decrease of the entropy of the average loss landscape. Admittedly, the improved prediction accuracy for the top scoring compounds comes at the cost of a worse overall prediction power. The most impactful hyperparameter is revealed to be the cutoff (5 Å is reported as the optimal choice). Other parameters (e.g., the number of radial basis functions, the number of interaction layers of the neural network, the feature vector size, or the batch size) are found not to affect the prediction robustness for the top scoring compounds in any way comparable to the cutoff. The MSE of the prediction for the best docking scores (below -13 kcal/mol) improves from ca. 3.5 to 0.9 kcal/mol, while the prediction for less potent compounds (-13 to -11 kcal/mol) shows a lesser improvement, with the MSE decreasing from 1.6 to 1.3 kcal/mol. Additionally, oversampling and undersampling of the training set with respect to the abundance of top scoring compounds are examined. The results indicate that tuning the cutoff performs better than either resampling of the training set, with undersampling performing better than oversampling.
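For orientation, the hyperparameters discussed above map onto the representation block of a SchNetPack model roughly as in the sketch below. This assumes the SchNetPack 2.x API; apart from the 5 Å cutoff reported as optimal, the specific values are illustrative placeholders, not the settings used in this work.

```python
import schnetpack as spk

# Cutoff radius for atomic environments (Å); the abstract reports 5 Å as optimal.
cutoff = 5.0

# Radial basis expanding interatomic distances; n_rbf is one of the
# hyperparameters reported to matter little compared to the cutoff.
radial_basis = spk.nn.GaussianRBF(n_rbf=20, cutoff=cutoff)

# SchNet representation: n_atom_basis is the feature vector size,
# n_interactions the number of interaction layers.
representation = spk.representation.SchNet(
    n_atom_basis=128,          # feature vector size (illustrative)
    n_interactions=3,          # number of interaction layers (illustrative)
    radial_basis=radial_basis,
    cutoff_fn=spk.nn.CosineCutoff(cutoff),
)
```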
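The over- and undersampling comparison can likewise be sketched as follows; this is a minimal, hypothetical example (the helper resample_indices is not part of the paper or of SchNetPack) that assumes docking scores in a NumPy array and treats compounds scoring below -13 kcal/mol as the rare top-scoring class.

```python
import numpy as np

rng = np.random.default_rng(0)

def resample_indices(scores, threshold=-13.0, mode="under"):
    """Hypothetical helper: balance top-scoring (score < threshold)
    vs. remaining compounds in the training set.

    'under' drops a random subset of the abundant class;
    'over' repeats the rare top-scoring class with replacement.
    """
    top = np.flatnonzero(scores < threshold)
    rest = np.flatnonzero(scores >= threshold)
    if mode == "under":
        rest = rng.choice(rest, size=len(top), replace=False)
    elif mode == "over":
        top = rng.choice(top, size=len(rest), replace=True)
    return np.concatenate([top, rest])

# Usage with synthetic docking scores (kcal/mol), for illustration only:
scores = rng.normal(-9.0, 2.0, size=10_000)
train_idx = resample_indices(scores, mode="under")
```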