Abstract

Data Management (DM), like many areas of computer science (CS), relies on empirical evaluation that uses software, data sets, and benchmarks to assess new ideas and compare them with past innovations. Despite the importance of these artifacts and the associated information about experimental evaluations, few researchers make them available in a findable, accessible, interoperable, and reusable (FAIR) manner, thereby hindering the scientific process by limiting open collaboration, the credibility of published outcomes, and research progress. Fortunately, this problem is recognized, and many CS communities, including DM, advocate and provide incentives for software and analysis papers to follow FAIR principles and to be treated on par with traditional publications. Some ACM/IEEE conferences have adopted Artifact Evaluation (AE) to reward authors who conduct experiments with FAIR software and data. Half a decade after AE's inception, the question is whether this emerging emphasis on artifacts is having a real impact on CS research.
