Abstract

Transformer models have been developed in molecular science with excellent performance in applications including quantitative structure-activity relationship (QSAR) modeling and virtual screening (VS). Compared with other model types, however, they are large and require voluminous training data, so powerful hardware is needed to keep both training and inference times acceptable. In this work, cross-layer parameter sharing (CLPS) and knowledge distillation (KD) are used to reduce the size of transformers in molecular science. Models compressed with either method not only achieve QSAR predictive performance competitive with the original BERT model but are also more parameter-efficient. Furthermore, by integrating CLPS and KD into a two-stage chemical network, we introduce a new deep lite chemical transformer model, DeLiCaTe. DeLiCaTe achieves roughly 4× faster training and inference, owing to a 10-fold reduction in parameters and a 3-fold reduction in layers. Meanwhile, the integrated model achieves comparable performance in QSAR and VS because it captures both general-domain knowledge (basic structure) and task-specific knowledge (specific property prediction). Moreover, we anticipate that this model compression strategy provides a pathway to the creation of effective generative transformer models for organic drug and material design.
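
For readers unfamiliar with the two compression techniques, the code below is a minimal sketch, assuming a PyTorch setting; it is illustrative only and not the published DeLiCaTe implementation. Cross-layer parameter sharing reuses a single encoder layer's weights at every depth, so depth adds computation but not parameters, and knowledge distillation trains the compact student against a larger teacher's softened output distribution. All names (SharedLayerEncoder, distillation_loss) and hyperparameter values are hypothetical.

# Minimal sketch (not the authors' code): ALBERT-style cross-layer parameter
# sharing and a soft-label knowledge-distillation loss in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedLayerEncoder(nn.Module):
    """Transformer encoder that reuses one layer's weights at every depth (CLPS)."""

    def __init__(self, d_model=256, n_heads=8, n_layers=12):
        super().__init__()
        # A single layer's parameters are shared; depth adds compute, not parameters.
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.n_layers = n_layers

    def forward(self, x):
        for _ in range(self.n_layers):
            x = self.shared_layer(x)
        return x


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend the soft-target KD loss (teacher -> student) with the hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard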
