Abstract
This and other specialized journals publish many papers that describe computer software, including programs for analyzing data (Duff et al. 2007; Srinivasan et al. 2007; Bagarinao et al. 2008; Condron 2008; Liu et al. 2008; Zhang et al. 2008; Glascher 2009; Goldberg et al. 2009; Gunay et al. 2009; Nowinski et al. 2009), for assisting in the acquisition or management of data (Brown et al. 2005; Bezgin et al. 2009), and for simulating computer models (Cannon et al. 2003; Ichikawa 2005; Versace et al. 2008; Koene et al. 2009). Like all papers submitted to the journal, these manuscripts are thoroughly refereed by two or three independent reviewers for scientific quality and clarity of exposition. Usually, however, the reviewers have to trust that the authors gave a fair description of the software. The situation is somewhat similar to the review of experimental papers, where the referees have to trust that the authors describe the experiments accurately and completely. In experimental science, it would be impractical to systematically reproduce the empirical claims. For computer software, in contrast, this limitation only reflects an old-fashioned approach, stemming from a time when it was difficult to distribute code or executables, and when programs were often very platform-dependent. In this era of sharing of resources and data (Kennedy 2004) and of web-based software distribution (Gardner et al. 2008; Luo et al. 2009), it has become fairly easy to make the software itself accessible to reviewers as well, opening possibilities for deeper review of software-related papers. This opportunity is particularly meaningful for the field of neuroinformatics and its leading (and namesake) journal.

Over the last year our journal has been running a pilot program in which it asked reviewers of papers describing neuroinformatics programs to also evaluate the software itself. Often this required no extra work on the side of the authors because they were already making the software available for anonymous download. Otherwise, we arranged for the action editor to make the software available to the anonymous reviewers. The results of this pilot program were interesting and encouraging. The most common problem, reported for several papers, was that the reviewers simply could not run the software due to installation or compilation problems. This is not entirely surprising: anybody who has distributed software that needs to be compiled or that depends on the presence of specific libraries (typically programs written in Java or Python) knows that installation problems are among the most frequent complaints of users. Whenever reviewers encountered this type of problem, the response of authors was immediate, and they clearly saw this feedback as beneficial for their software distribution effort. Other issues that arose during software review were processing speed and access to benchmarking results. From an editorial viewpoint, this information was instrumental in deciding whether the software was eligible for description as an Original Article or more suited for a News Item.