Abstract

Drug discovery is a complex process with many potential pitfalls. To reach the market, a drug must undergo extensive preclinical optimization followed by clinical trials to establish its efficacy and minimize toxicity and adverse events. The process can take 10-15 years and consume vast research and development resources, often costing over $1 billion. Success rates for new drug approvals in the United States are below 15%, and investment costs often cannot be recouped. With the increasing availability of large public datasets (big data) and computational capabilities, data science is quickly becoming a key component of the drug discovery pipeline. One such computational method, large-scale molecular modeling, is critical in the preclinical hit and lead identification process. Molecular modeling involves studying the chemical structure of a drug and how it interacts with a potential disease-relevant target, as well as predicting its absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties. The scope of molecular modeling is wide and complex. Here we specifically discuss docking, a tool commonly employed for studying drug-target interactions. Docking allows for the systematic exploration of how a drug interacts at a protein binding site and enables the rank-ordering of drug libraries for prioritization in subsequent studies. This process can be used to efficiently screen virtual libraries containing millions of compounds.
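The rank-ordering step described above can be sketched in a few lines. This is a minimal illustration, not a docking implementation: the compound names and scores below are hypothetical placeholders standing in for output from a real docking engine (e.g., scores in kcal/mol, where more negative typically indicates stronger predicted binding).

```python
# Hypothetical virtual-screening results: compound ID -> docking score.
# In practice these scores would come from a docking program; the values
# here are illustrative only.
compounds = {
    "CHEM-001": -9.2,
    "CHEM-002": -6.1,
    "CHEM-003": -10.4,
    "CHEM-004": -7.8,
}

# Rank-order the library: sort ascending so the most negative (best
# predicted binding) scores come first.
ranked = sorted(compounds.items(), key=lambda item: item[1])

# Prioritize the top-scoring hits for follow-up studies.
top_hits = [name for name, _ in ranked[:2]]
print(top_hits)  # -> ['CHEM-003', 'CHEM-001']
```

At million-compound scale the same idea applies, though the scoring itself is the expensive step and is typically distributed across many machines.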
