Abstract

Protein-ligand interactions (PLIs) play important roles in cellular activities and drug discovery. Because experimental methods are technically difficult and costly, there is considerable interest in developing computational approaches, such as protein-ligand docking, to decipher PLI patterns. One of the most important and difficult aspects of protein-ligand docking is recognizing near-native conformations from a set of decoys, yet traditional scoring functions still suffer from limited accuracy. New scoring methods are therefore urgently needed, for both methodological and practical reasons. We present ViTRMSE, a new deep learning-based scoring function for ranking protein-ligand docking models based on the Vision Transformer (ViT). To recognize near-native conformations among decoys, ViTRMSE voxelizes the protein-ligand interaction pocket into a 3D grid whose channels encode the occupancy contributions of atoms in different physicochemical classes. Benefiting from the Vision Transformer architecture, ViTRMSE can effectively capture the subtle differences between spatially and energetically favorable near-native conformations and unfavorable non-native decoys without requiring extra information. ViTRMSE is extensively evaluated on diverse test sets, including PDBbind2019 and CASF2016, and achieves significant improvements over existing methods in terms of RMSE, R, and docking power.
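The voxelization step described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, the box size, the grid resolution, and the hard 0/1 occupancy (the paper uses a smoother occupancy contribution per atom) are all assumptions for demonstration.

```python
import numpy as np

def voxelize_pocket(coords, channels, n_channels, box_size=24.0, resolution=1.0):
    """Bin pocket atoms into a (C, D, D, D) occupancy grid.

    coords:     (N, 3) array of atom coordinates in angstroms
    channels:   (N,) integer physicochemical class per atom, in [0, n_channels)
    n_channels: number of physicochemical classes (one grid channel each)
    """
    dim = int(box_size / resolution)
    grid = np.zeros((n_channels, dim, dim, dim), dtype=np.float32)
    # Center the box on the pocket, then convert coordinates to voxel indices.
    center = coords.mean(axis=0)
    idx = np.floor((coords - center + box_size / 2.0) / resolution).astype(int)
    # Drop atoms that fall outside the box.
    inside = np.all((idx >= 0) & (idx < dim), axis=1)
    for (x, y, z), c in zip(idx[inside], channels[inside]):
        grid[c, x, y, z] = 1.0  # hard occupancy; a soft contribution is also common
    return grid
```

A grid produced this way can then be split into patches and fed to a ViT-style encoder in the same spirit as image patches in the original Vision Transformer.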
