Abstract

We present the main results from the bibliometric part of a recent evaluation of two different postdoctoral (postdoc) funding instruments used in Denmark. We scrutinize the results for robustness, stability, and importance, and ultimately question the official conclusions drawn from them. Acknowledging the deficiencies of non-randomized designs and of modelling such data, we apply matching procedures to establish comparable groups and reduce systematic bias. In the absence of probability sampling, we refrain from using statistical inference. We demonstrate the usefulness of robustness analyses and effect-size estimation in non-random but carefully designed descriptive studies. We examine whether there is a difference in long-term citation performance between the groups of researchers funded by the two instruments, and between the postdocs and a control group of researchers who did not receive postdoc funding but are otherwise comparable to the postdoc groups. The results show that all three groups perform well above the database average impact. We conclude that there is no difference in citation performance between the two postdoc groups. There is, however, a difference between the postdoc groups and the control group, but we argue that this difference is ‘trivial’. Our conclusion differs from the official one given in the evaluation report, where the Research Council emphasizes the success of its funding programmes and neglects to mention the good performance of the essentially tenure-tracked control group.
