Abstract

For the past few years, the TPDS Reproducibility Initiative [2][3] has been exploring post-publication peer review of code associated with published articles. Authors who have published in TPDS can make their published article more reproducible, and earn a reproducibility badge, by submitting their associated code for post-publication peer review. To date, this pilot has largely focused on two badges: 1. Code Available: The code, including any associated data and documentation, provided by the authors is reasonable and complete and can potentially be used to support reproducibility of the published results. 2. Code Reviewed: The code, including any associated data and documentation, provided by the authors is reasonable and complete, runs to produce the outputs described, and can support reproducibility of the published results. While TPDS’ goal has always been to include badges for reproducing research results using the code and/or data provided, the nature of research in parallel and distributed systems covered by TPDS makes evaluating code and data for reproducibility challenging: such an evaluation may require access to specific hardware, system architectures and scales, OS configurations, and so on, which may not be feasible or practical. Consequently, TPDS has piloted an alternate approach in which members of the community can submit short, supplemental ‘critique’ papers that present their experiences in reproducing published results using the artifacts, and/or evaluations of or experiences with published artifacts. These supplemental submissions are reviewed and, if accepted, are linked to the original publication and are citable, helping to validate the reproducibility of the original publication.
This approach was first implemented in a special section, guest edited by Stephen Lien Harrell and Beth Plale, consisting of a primary paper and 6 critique papers that reproduce the results of the primary paper. The present special section continues that effort, building on the SC20 Student Cluster Competition, which was held as part of the SC20 conference. It consists of 9 critique papers that reproduce the results of the primary paper.
