Abstract

Drug design and optimization are challenging tasks that call for strategic and efficient exploration of an extremely vast search space. Multiple fragmentation strategies have been proposed in the literature to mitigate the complexity of the molecular search space. From an optimization standpoint, drug design can be considered a multi-objective optimization problem. Deep reinforcement learning (DRL) frameworks have demonstrated encouraging results in the field of drug design. However, the scalability of these frameworks is impeded by long training times and inefficient use of sample data. In this paper, we (1) examine the core principles of deep and multi-objective RL methods and their applications in molecular design, (2) analyze the performance of a recent multi-objective, fragment-based DRL drug design framework, DeepFMPO, in a real-world application by incorporating the optimization of protein-ligand docking affinity alongside varying numbers of other objectives, and (3) compare this method with a single-objective variant. Our experiments indicate that the DeepFMPO framework (with docking score) can achieve success; however, it suffers from training instability. Our findings encourage further exploration and improvement of the framework. We discuss potential sources of the framework's instability and suggest modifications to stabilize it.
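
To make the multi-objective framing concrete, the sketch below shows one common way to fold several molecular objectives, such as a docking score and a drug-likeness score, into a single scalar reward for an RL agent via weighted-sum scalarization. This is an illustrative assumption, not the DeepFMPO implementation; the objective functions docking_score and qed_score are hypothetical placeholders for calls to a real docking engine (e.g., AutoDock Vina) and a cheminformatics toolkit (e.g., RDKit).

from typing import Callable, Dict

def scalarized_reward(
    molecule: str,
    objectives: Dict[str, Callable[[str], float]],
    weights: Dict[str, float],
) -> float:
    """Weighted-sum scalarization of per-objective scores, each assumed in [0, 1]."""
    total_weight = sum(weights.values())
    return sum(
        weights[name] * score_fn(molecule)
        for name, score_fn in objectives.items()
    ) / total_weight

# Hypothetical stand-ins for real objective evaluations.
def docking_score(smiles: str) -> float:
    return 0.7  # placeholder: normalized negative binding energy

def qed_score(smiles: str) -> float:
    return 0.6  # placeholder: quantitative estimate of drug-likeness

reward = scalarized_reward(
    "CCO",
    objectives={"docking": docking_score, "qed": qed_score},
    weights={"docking": 2.0, "qed": 1.0},
)
print(f"scalarized reward: {reward:.3f}")

A weighted sum is the simplest scalarization; it cannot recover non-convex regions of the Pareto front, which is one reason multi-objective RL methods that maintain separate objective signals are of interest.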
