Abstract
New hardware, in particular massively parallel, graphics processing unit-based computers, has boosted molecular simulations to levels that would have been unthinkable just a decade ago. At the classical level, it is now possible to perform atomistic simulations of systems containing over 10 million atoms and to collect trajectories extending into the millisecond range. Such achievements are moving biosimulations into the mainstream of structural biology research, complementary to experimental studies. The drawback of this impressive development is the management of data, especially at a time when the inherent value of data is becoming more apparent. In this review, we summarize the main characteristics of (bio)simulation data, how we can store them, how they can be reused for new, unexpected projects, and how they can be transformed to make them FAIR (findable, accessible, interoperable and reusable).

This article is categorized under:
Molecular and Statistical Mechanics > Molecular Dynamics and Monte-Carlo Methods
Computer and Information Science > Databases and Expert Systems