Abstract

Motivation
Convolutional neural networks (CNNs) have been tremendously successful in many contexts, particularly where training data are abundant and signal-to-noise ratios are large. However, when predicting noisily observed phenotypes from DNA sequence, each training instance is only weakly informative, and the amount of training data is often fundamentally limited, emphasizing the need for methods that make optimal use of training data and of any structure inherent in the process.

Results
Here we show how to combine equivariant networks, a general mathematical framework for handling exact symmetries in CNNs, with Bayesian dropout, a version of Monte Carlo dropout suggested by a reinterpretation of dropout as a variational Bayesian approximation, to develop a model that exhibits exact reverse-complement symmetry and is more resistant to overtraining. We find that this model combines improved prediction consistency with better predictive accuracy compared with standard CNN implementations and state-of-the-art motif finders. We use our network to predict recombination hotspots from sequence, and identify binding motifs for the recombination-initiation protein PRDM9 that were previously unobserved in these data and have recently been validated by high-resolution assays. The network achieves a predictive accuracy comparable to that attainable by a direct assay of the H3K4me3 histone mark, a proxy for PRDM9 binding.

Availability and implementation
https://github.com/luntergroup/EquivariantNetworks

Supplementary information
Supplementary data are available at Bioinformatics online.
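
To make the combination concrete, the following is a minimal sketch (in PyTorch, assuming a one-hot A, C, G, T channel encoding) of the two ingredients referred to above: a convolution whose filters are tied to their reverse complements, and Monte Carlo dropout that remains active at prediction time. The class and function names are illustrative only and are not taken from the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RCEquivariantConv1d(nn.Module):
    """1D convolution whose filters are shared between a strand and its
    reverse complement, making the layer reverse-complement equivariant."""

    def __init__(self, in_channels, n_filters, kernel_size):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_filters, in_channels, kernel_size) * 0.1)
        self.bias = nn.Parameter(torch.zeros(n_filters))

    def forward(self, x):                              # x: (batch, 4, length)
        # Reverse-complement copy of each filter: flip the channel axis
        # (A<->T, C<->G under A,C,G,T ordering) and the kernel-position axis.
        w_rc = torch.flip(self.weight, dims=(1, 2))
        w = torch.cat([self.weight, w_rc], dim=0)      # 2 * n_filters output maps
        b = torch.cat([self.bias, self.bias], dim=0)
        return F.conv1d(x, w, b, padding="same")


class RCBayesianNet(nn.Module):
    """Reverse-complement symmetric classifier with MC dropout."""

    def __init__(self, n_filters=16, kernel_size=11, p_drop=0.3):
        super().__init__()
        self.conv = RCEquivariantConv1d(4, n_filters, kernel_size)
        self.head = nn.Linear(n_filters, 1)
        self.p_drop = p_drop

    def forward(self, x):                              # x: (batch, 4, length)
        h = F.elu(self.conv(x))                        # (batch, 2*n_filters, length)
        h = h.view(h.shape[0], 2, -1, h.shape[-1])     # split forward / RC filter pairs
        h = h.mean(dim=(1, 3))                         # pool over strand pair and position
        # MC dropout: left stochastic at prediction time; applied after the
        # RC-invariant pooling so the symmetry holds exactly for every sample.
        h = F.dropout(h, p=self.p_drop, training=True)
        return torch.sigmoid(self.head(h)).squeeze(-1)


def mc_predict(model, x, n_samples=50):
    """Predictive mean over repeated stochastic (dropout) forward passes."""
    with torch.no_grad():
        return torch.stack([model(x) for _ in range(n_samples)]).mean(dim=0)
```

Because the reverse complement of a one-hot input is a flip of its channel and position axes, the pooled features, and hence the prediction, are identical for a sequence and its reverse complement, while averaging over repeated forward passes in mc_predict integrates out the dropout noise.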

Highlights

  • We summarise the results by giving the explicit networks for case (3); the optimal networks for cases (1) and (2) turned out to be the same as those for case (3), except for dropping the relevant layers/features and changing "Equivariant MC dropout" into "MC dropout" where relevant.

  • We see considerably better performance for the Exponential Linear Unit (ELU) activation function.

  • The behaviour observed here is qualitatively similar to what we saw for the DeMo CNN model and for an implementation of a DeepBind-like network, and suggests that this choice of activation function goes some way towards explaining the variation in convergence accuracy seen here (see the sketch after these highlights).

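As a hedged illustration of how the activation function enters such a network, the sketch below builds the same convolutional block with either ELU or ReLU. The helper name and the He initialization used for ReLU are assumptions made for illustration and need not match the custom initialization scheme used in the paper.

```python
import torch.nn as nn

def conv_block(n_filters=16, kernel_size=11, activation="elu"):
    """Small convolutional block whose activation function is a hyperparameter.

    ELU was the better-performing choice in the experiments summarised above;
    for ReLU, an explicit initialization helps it converge reliably.
    """
    conv = nn.Conv1d(4, n_filters, kernel_size, padding="same")
    if activation == "relu":
        # He initialization: a common remedy for unreliable ReLU convergence.
        # The paper's custom initialization may differ from this choice.
        nn.init.kaiming_normal_(conv.weight, nonlinearity="relu")
        nn.init.zeros_(conv.bias)
        return nn.Sequential(conv, nn.ReLU())
    return nn.Sequential(conv, nn.ELU())
```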

Summary

Baseline Asymmetric Networks

We performed a hyperparameter search for the best asymmetric topologies for each of the two datasets, not allowing equivariance or MC dropout layers. The networks obtained when optimizing for data augmentation were identical, except that the optimal L2 regularization parameter was 0 for the recombination dataset and the number of filters was doubled to 32. Note that when we followed the procedure in A.2, we used an equivariant Bayesian network with these 32 filters when comparing against augmented data.
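
For reference, here is a minimal sketch of the reverse-complement data augmentation baseline mentioned above, again assuming a one-hot A, C, G, T channel encoding; the function names are illustrative and are not taken from the paper's code.

```python
import torch

def reverse_complement(x):
    """Reverse complement of a batch of one-hot DNA tensors (batch, 4, length).

    With channel order A, C, G, T, complementation is a flip of the channel
    axis (A<->T, C<->G) and strand reversal is a flip of the position axis.
    """
    return torch.flip(x, dims=(1, 2))

def augment_with_rc(x, y):
    """Double the training set by appending the reverse complement of every
    sequence, with the label left unchanged."""
    return torch.cat([x, reverse_complement(x)], dim=0), torch.cat([y, y], dim=0)
```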

Bayesian Equivariant Networks

  • B ReLU activation function performs worse than ELU
  • C Custom initialization helps ReLU to converge reliably in both datasets
  • E Bayesian Equivariant Network outperforms data augmentation
  • F Batch Norm in this regime
  • G Model comparison in the low-data regime

Findings

  • H Homer Motif Results
