Abstract
Chemical language models (CLMs) can be employed to design molecules with desired properties. CLMs generate new chemical structures in the form of textual representations, such as simplified molecular input line entry system (SMILES) strings. However, the quality of these de novo generated molecules is difficult to assess a priori. In this study, we apply the perplexity metric to determine the degree to which molecules generated by a CLM match the desired design objectives. This model-intrinsic score enables the identification and ranking of the most promising molecular designs based on the probabilities learned by the CLM. Using perplexity to compare "greedy" (beam search) with "explorative" (multinomial sampling) methods for SMILES generation reveals certain advantages of multinomial sampling. Additionally, perplexity scoring identifies undesired biases introduced during model training and enables the development of a new ranking system to remove them.
Highlights
Generative deep learning has become a promising method for chemistry and drug discovery.[1−21] Generative models learn the pattern distribution of the input data and generate new data instances based on learned probabilities.[22]
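To illustrate what generating "new data instances based on learned probabilities" looks like in practice, the following is a minimal sketch (not the authors' implementation) contrasting greedy selection with multinomial sampling over a next-character distribution; the vocabulary and probability values are invented for illustration.

```python
import numpy as np

# Hypothetical next-character distribution predicted by a trained CLM
# at one generation step (values invented for illustration).
vocab = ["C", "c", "N", "O", "(", ")", "=", "1"]
probs = np.array([0.40, 0.20, 0.12, 0.10, 0.07, 0.05, 0.04, 0.02])

# "Greedy" decoding always takes the most probable character ...
greedy_char = vocab[int(np.argmax(probs))]

# ... whereas "explorative" multinomial sampling draws a character in
# proportion to its learned probability, so less likely characters are
# still occasionally chosen, yielding more diverse SMILES strings.
rng = np.random.default_rng(seed=0)
sampled_char = vocab[rng.choice(len(vocab), p=probs)]

print("greedy:", greedy_char, "| sampled:", sampled_char)
```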
Perplexity has been used to assess the performance of language models in natural language processing.[37−39] For a simplified molecular input line entry system (SMILES) string of length N, the perplexity score can be computed from the probability the chemical language model (CLM) assigns to each ith character:

$$\mathrm{Perplexity} = \left( \prod_{i=1}^{N} p\left(x_i \mid x_1, \ldots, x_{i-1}\right) \right)^{-1/N}$$

The information on the overall character probabilities is thus captured in a single metric, which is normalized by the length of the SMILES string (N).
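Under this definition, a minimal sketch of how perplexity could be computed from the per-character probabilities a CLM assigns to a SMILES string; the probability values below are placeholders, not actual model output.

```python
import math

def smiles_perplexity(char_probs):
    """Perplexity of a SMILES string of length N, given the model's
    probability p(x_i | x_1, ..., x_{i-1}) for each character.
    Equivalent to exp(-(1/N) * sum(log p_i))."""
    n = len(char_probs)
    log_likelihood = sum(math.log(p) for p in char_probs)
    return math.exp(-log_likelihood / n)

# Placeholder per-character probabilities (illustrative values only).
probs = [0.9, 0.8, 0.85, 0.7, 0.95, 0.6]
print(smiles_perplexity(probs))  # lower = closer match to the learned distribution
```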
Summary
Generative deep learning has become a promising method for chemistry and drug discovery.[1−21] Generative models learn the pattern distribution of the input data and generate new data instances based on learned probabilities.[22] Among the generative frameworks proposed for de novo molecular design,[2−19] chemical language models (CLMs) have gained attention because of their ability to generate focused virtual chemical libraries and bioactive compounds.[20,21,23] CLMs are trained on string representations of molecules, e.g., simplified molecular input line entry system (SMILES) strings (Figure 1a),[24] to iteratively predict the next character. While alternative generative approaches have been proposed for de novo design,[13,27−29] benchmarks have not shown them to outperform CLMs.[30,31] A feature of CLMs is their ability to function in low-data regimes,[25,29] i.e., with limited training data (typically in the range of 5−40 molecules).[2,3,25] One of the most widely employed approaches for low-data model training is transfer learning.[20,32] This method leverages previously acquired information from a related task for which more data are available ("pretraining") before training the CLM on a smaller, more specific dataset ("fine-tuning").[33]
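As a schematic of the transfer-learning workflow described above (pretraining on a large general corpus, then fine-tuning on a small focused set), here is a minimal PyTorch-style sketch; the model architecture, data loaders, vocabulary size, and hyperparameters are assumptions for illustration, not the authors' setup.

```python
import torch
import torch.nn as nn

# Hypothetical character-level CLM: embeds SMILES characters and
# predicts the next character with an LSTM (architecture assumed).
class CharCLM(nn.Module):
    def __init__(self, vocab_size, emb=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        out, _ = self.lstm(self.embed(x))
        return self.head(out)

def train(model, loader, epochs, lr):
    """Next-character cross-entropy training; reused for both phases."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for batch in loader:  # batch: (B, L) integer-encoded SMILES
            inputs, targets = batch[:, :-1], batch[:, 1:]
            logits = model(inputs)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()

# Phase 1: pretrain on a large general SMILES corpus (loader assumed).
# model = CharCLM(vocab_size=40)
# train(model, pretrain_loader, epochs=10, lr=1e-3)
# Phase 2: fine-tune on the small focused set (e.g., 5-40 molecules),
# typically with a lower learning rate to preserve pretrained knowledge.
# train(model, finetune_loader, epochs=20, lr=1e-4)
```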