Abstract
We investigated the application of consensus scoring using the freely available, open-source structure-based virtual screening docking programs AutoDock Vina, smina, and idock. These individual programs and several simple consensus scoring methods were tested for their ability to identify hits against 20 DUD-E benchmark targets, using the AUC (area under the ROC curve) and EF1 (enrichment factor at 1%) metrics. We found that all of the consensus scoring methods, regardless of the normalization scheme, fared worse on average than simply using the output of a single program, smina. We also tested whether a significant increase in the run time of all three programs yielded improved results: a run time longer than the default had little impact on the performance of the individual programs or of consensus scoring methods based on their output. We therefore conclude that running smina alone at its default settings is the best approach for researchers who do not have access to a suite of commercial docking software packages.
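The abstract does not specify which consensus schemes were tested, so the following is only a minimal sketch of one common approach: averaging min-max-normalized docking scores across programs and evaluating the result with EF1 and AUC. All function names are illustrative, and the sign convention (more negative docking scores are better) is an assumption typical of Vina-family programs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def min_max_normalize(scores):
    """Rescale one program's docking scores to [0, 1].
    Assumes lower (more negative) scores are better, so the sign
    is flipped first so that higher normalized values mean better poses."""
    s = -np.asarray(scores, dtype=float)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def consensus_score(score_lists):
    """Simple consensus: mean of min-max-normalized scores across programs
    (e.g., one score list each from Vina, smina, and idock)."""
    return np.mean([min_max_normalize(s) for s in score_lists], axis=0)

def enrichment_factor(scores, labels, fraction=0.01):
    """EF at a given fraction (EF1 when fraction=0.01): the active hit rate
    in the top-scored fraction divided by the hit rate in the whole library."""
    scores = np.asarray(scores)
    labels = np.asarray(labels)  # 1 = active, 0 = decoy
    n_top = max(1, int(round(fraction * len(scores))))
    top_idx = np.argsort(scores)[::-1][:n_top]  # best consensus scores first
    return labels[top_idx].mean() / labels.mean()

# Hypothetical usage with scores from three programs and known labels:
# cons = consensus_score([vina_scores, smina_scores, idock_scores])
# ef1 = enrichment_factor(cons, labels, fraction=0.01)
# auc = roc_auc_score(labels, cons)
```

Rank-based variants (averaging per-program ranks instead of normalized scores) are another common choice; both fit the same evaluation pipeline shown here.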