Abstract

The biomolecular modeling field has flourished since its early days in the 1970s due to the rapid adaptation and tailoring of state-of-the-art technology. The resulting dramatic increase in the size and timespan of biomolecular simulations has outpaced Moore's law. Here, we discuss the role of knowledge-based versus physics-based methods and of hardware versus software advances in propelling the field forward. This rapid adaptation and outreach suggest a bright future for modeling, in which theory, experimentation, and simulation define the three pillars needed to address future scientific and biomedical challenges.

Highlights

  • The biomolecular modeling field has flourished since its early days in the 1970s due to the rapid adaptation and tailoring of state-of-the-art technology

  • Models trained on databases of ligand–protein complexes in which weakly binding ligands are underrepresented[59,60] can overestimate binding affinities. For some applications, such as deriving force fields by machine learning protocols, access to a large and diverse high-quality training dataset obtained by quantum mechanics calculations is essential to obtain reliable results for general applications[61]

  • Since the pioneering work of Behler and Parrinello on the use of neural networks to represent density functional theory (DFT) potential energy surfaces and to describe chemical processes[111], machine learning (ML) has been applied to design all-atom and coarse-grained force fields, analyze molecular dynamics (MD) simulations, develop enhanced sampling techniques, and construct Markov state models (MSMs), among other uses[112]
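To make the Behler–Parrinello idea concrete, the sketch below shows its two key ingredients in miniature: per-atom descriptors (radial symmetry functions with a smooth cutoff) and a small per-atom feed-forward network whose outputs are summed into a total energy. This is a minimal illustrative sketch, not a production potential; the function names, network size, and parameter values are assumptions for illustration, and a real implementation would also include angular symmetry functions, per-element networks, and training against DFT data.

```python
import numpy as np

def radial_symmetry_functions(positions, eta_values, r_cut=6.0):
    """Per-atom descriptors G_i(eta) = sum_j exp(-eta * r_ij^2) * f_c(r_ij),
    where f_c is a smooth cosine cutoff; invariant to translation, rotation,
    and atom permutation, as in Behler-Parrinello-style potentials."""
    n = len(positions)
    etas = np.asarray(eta_values)
    feats = np.zeros((n, len(etas)))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(positions[i] - positions[j])
            if r < r_cut:
                fc = 0.5 * (np.cos(np.pi * r / r_cut) + 1.0)  # smooth cutoff
                feats[i] += np.exp(-etas * r**2) * fc
    return feats

def atomic_energy(features, w1, b1, w2, b2):
    """Tiny per-atom network: descriptors -> tanh hidden layer -> scalar energy."""
    h = np.tanh(features @ w1 + b1)
    return h @ w2 + b2

def total_energy(positions, params, eta_values):
    """Total energy as a sum of atomic contributions (size-extensive by design)."""
    feats = radial_symmetry_functions(positions, eta_values)
    return float(np.sum(atomic_energy(feats, *params)))
```

Because the descriptors depend only on interatomic distances and the total energy is a sum over atoms, the model is automatically invariant to rigid translations and to relabeling of atoms, which is the core design choice that lets such networks reproduce DFT potential energy surfaces.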


Summary

Biomolecular modeling thrives in the age of technology

The biomolecular modeling field has flourished since its early days in the 1970s due to the rapid adaptation and tailoring of state-of-the-art technology. The 1990s saw disappointments when, in addition to unmet high biomedical expectations[1], such as the failure of human genome information to lead quickly to medical solutions[5], it was realized that force fields and limited conformational sampling could hold us back from successful practical applications. This period was followed by many new approaches, using both software and hardware, to address these deficiencies. For example, the cost of gene sequencing technology has dropped dramatically, from US$2.7 billion for the Human Genome Project to US$1,000 today to sequence an individual's genome[13]. Not only can this information be used for personalized medicine, but we can also sequence genomes, such as that of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), in hours and apply such sequence variant information to map disease spread across the world in nearly real time[14]. Biomolecular modeling and simulation is thriving in this ever-evolving landscape, evidenced by many successes and by experiments driven by modeling[15].

Nuclear pore complex
The role of algorithms versus hardware
RNA polymerase
Algorithms for long-range interactions
Conclusions and outlook
Additional information
