Abstract

In the biomedical field, the efficacy of most drugs is determined by their interactions with targets; accurate prediction of drug-target binding strength is therefore critically important for drug development. Traditional bioassay-based drug-target binding affinity (DTA) prediction methods cannot meet the needs of drug R&D in the era of big data. Recent years have witnessed significant success of deep learning-based models on the DTA prediction task. However, these models consider only a single modality of drug and target information, leaving valuable information underutilized. In fact, different modalities of drug and target information complement each other, and fusing them can yield richer representations. In this paper, we introduce FMDTA, a multimodal information fusion model for DTA prediction that fully considers drug/target information in both string and graph modalities and balances the feature representations of the two modalities through a contrastive learning approach. In addition, we exploit the alignment of drug atoms and target residues to capture positional information in the string modality, which extracts more useful features from SMILES strings and target sequences. Experimental results on two benchmark datasets show that FMDTA outperforms state-of-the-art models, demonstrating its feasibility and strong feature-capture capability. The code and data of FMDTA are available at: https://github.com/bestdoubleLin/FMDTA.
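Since the abstract only outlines the approach, the following is a minimal sketch of what "balancing string- and graph-modality representations via contrastive learning" can look like in practice. It is not the authors' implementation: the encoder outputs, embedding dimension, fusion head, and loss weighting are illustrative assumptions; see the repository linked above for the actual FMDTA code.

```python
# Minimal sketch (hypothetical, not FMDTA's code): fuse string- and
# graph-modality embeddings of drugs and targets, and align the two
# modalities with an InfoNCE-style contrastive loss.
import torch
import torch.nn.functional as F


def contrastive_loss(string_emb, graph_emb, temperature=0.1):
    """InfoNCE loss: the i-th string and graph embeddings are a positive pair."""
    s = F.normalize(string_emb, dim=-1)           # (B, D)
    g = F.normalize(graph_emb, dim=-1)            # (B, D)
    logits = s @ g.t() / temperature              # (B, B) similarity matrix
    labels = torch.arange(s.size(0), device=s.device)
    # Symmetric cross-entropy over both matching directions.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))


class FusionHead(torch.nn.Module):
    """Concatenate drug/target embeddings from both modalities, regress affinity."""
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(4 * dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, 1),
        )

    def forward(self, drug_str, drug_graph, tgt_str, tgt_graph):
        fused = torch.cat([drug_str, drug_graph, tgt_str, tgt_graph], dim=-1)
        return self.mlp(fused).squeeze(-1)        # predicted binding affinity


# Usage: random tensors stand in for the string/graph encoder outputs.
B, D = 8, 128
drug_str, drug_graph = torch.randn(B, D), torch.randn(B, D)
tgt_str, tgt_graph = torch.randn(B, D), torch.randn(B, D)
head = FusionHead(D)
affinity = head(drug_str, drug_graph, tgt_str, tgt_graph)
loss = (F.mse_loss(affinity, torch.randn(B))      # regression objective
        + contrastive_loss(drug_str, drug_graph)  # align drug modalities
        + contrastive_loss(tgt_str, tgt_graph))   # align target modalities
loss.backward()
```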
