Abstract

Promoters are central to research on many diseases, including coronary heart disease, diabetes and tumors, and identifying them is a fundamental task. Deep learning is widely used for promoter sequence recognition. Although deep models can recognize promoters quickly and accurately, they are limited by their reliance on large amounts of high-quality data. We therefore applied transfer learning to a typical deep network built on residual connections, the deep residual network (ResNet), to reduce its heavy dependence on large datasets in promoter prediction. We represented promoters with binary one-hot encoding and used ResNet to extract feature representations from organisms with abundant promoter data. We then transferred the learned structural parameters to target organisms with insufficient promoter data to improve ResNet's generalization on those organisms. We evaluated our approach on promoter datasets from four organisms (Bacillus subtilis, Escherichia coli, Saccharomyces cerevisiae and Drosophila melanogaster). The experimental results showed that after deep transfer, the AUCs of ResNet's promoter predictions reached 0.8537 and 0.8633, increases of 0.1513 and 0.1376 in prokaryotes and eukaryotes, respectively.
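The binary one-hot encoding mentioned above maps each nucleotide to a four-dimensional indicator vector, yielding a binary matrix that a convolutional network such as ResNet can consume. The sketch below is a minimal illustration of this idea, not the paper's code; the channel order A, C, G, T and the handling of ambiguous bases (all-zero rows) are assumptions.

```python
import numpy as np

# Assumed channel order; the paper does not specify one here.
NUCLEOTIDES = "ACGT"

def one_hot_encode(seq: str) -> np.ndarray:
    """Encode a DNA sequence as a (len(seq), 4) binary matrix."""
    encoding = np.zeros((len(seq), len(NUCLEOTIDES)), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in NUCLEOTIDES:  # ambiguous bases (e.g. N) stay all-zero
            encoding[i, NUCLEOTIDES.index(base)] = 1.0
    return encoding

# Example: a short promoter-like fragment (the -10 "TATAAT" box)
x = one_hot_encode("TATAAT")
print(x.shape)  # (6, 4)
```

A matrix of this shape can then be fed to a 1-D or 2-D convolutional stack; in the transfer setting described in the abstract, the ResNet weights learned on the data-rich source organism would be reused to initialize the network for the data-poor target organism.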
