Abstract

This issue has two papers with results on real industrial software projects. The first, On the testing of user-configurable software systems using firewalls, by Robinson and White, presents the 'just-in-time' testing strategy for user-configurable software and includes results on commercial software. The second, A case study in model-based testing of specifications and implementations, by Miller and Strooper, presents a case study of testing software specifications.

Before discussing the h-index, I have the pleasure of making an announcement about the journal. In 2012, STVR will have eight issues per year rather than the four we have published annually for the last 21 years. This will allow us to clear the backlog of papers and support the increasing amount of research in the software testing field.

My last editorial [1] discussed the reasons why scientists publish papers and emphasized publishing to influence either the research field or industry. I also wrote an editorial [2] on the problems with the way journals are often evaluated by publishers and universities, namely the 'journal impact factor' [3]. My opinion of this deeply flawed criterion has not changed, but I recently learned about another measure that looks more promising: the h-index, defined as the largest number h such that h of a scientist's papers have each been cited at least h times. The h-index and the journal impact factor have an essential difference. The journal impact factor has a 2-year 'window', that is, it only counts papers published in the last 2 years. The h-index has no window; it counts all papers published by an individual over a lifetime.

The h-index also has some other interesting characteristics. It omits papers that are ignored by other scientists, thus encouraging scientists to publish papers on topics that matter and in places that are read. The h-index also rewards longevity and productivity in numbers of papers, but only counts the papers that other scientists read and cite. This means that the h-index cannot directly compare scientists who have been working for different lengths of time. A derivative measure might be the h-index divided by the number of years since the first publication, although I have not seen that used or proposed. Our promotion committee was told that, as a general rule of thumb, successful scientists should expect an h-index roughly equal to the number of years they have been working, excellent scientists should have an h-index of about 1.5 times that number, and h-indexes of twice the years of work are very rare.

Of course, for a measure to be useful, we need to be able to calculate it. Luckily, the web makes this easy, and not surprisingly, free calculator tools are available online, most notably Google Scholar. The h-index is designed for individuals, not journals, so it cannot directly replace the journal impact factor, but it could certainly be adapted. Appropriate modifications would have to be made to account for the age of the journal and the number of papers published per year. So for me, this is the first measure of research productivity that I can support. One thing is missing, though. The fourth reason to publish from my last editorial was to influence practice … the h-index does not measure this. Can we find a measure that does?
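To make the definition concrete, the following is a minimal sketch, in Python (not part of the original editorial), of how an h-index could be computed from a list of citation counts. The citation counts and the years_active normalization are illustrative assumptions; the latter corresponds to the derivative, age-adjusted measure speculated about above.

```python
def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations each."""
    # Sort citation counts in descending order, then find the last rank (1-based)
    # at which the paper's citation count is still at least its rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


# Illustrative example (the citation counts are made up):
citations = [25, 8, 5, 3, 3, 1, 0]
h = h_index(citations)
print(h)                  # -> 3: three papers have at least 3 citations each

# The age-normalized variant mentioned in the editorial (assumed value):
years_active = 10         # years since first publication
print(h / years_active)   # h-index per year of activity
```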

