Abstract

How can organizations structure their selection of innovation projects to reduce errors in the form of false positives (investments that should not have been made) and false negatives (investments that should have been made but were not)? Although simulations and case studies exist, our theoretical understanding of the effects of selection regimes on both types of errors has been limited by a lack of decision and outcome data over a large set of projects. We use 121 interviews and secondary material from an accelerator targeting mobile application developers to understand how it implemented three different selection regimes over time and map these to the existing literature. We complement this with unique data on 3,580 innovation projects submitted to the accelerator, for which we collected the outcomes of both funded and rejected projects to measure false positives and false negatives at the project level. Our findings suggest that despite efforts to improve selection regimes, there are remarkable similarities among them in the tendency to select false positives and false negatives. Accounting for differences in the pools of projects submitted for selection, our evidence suggests that as the accelerator strove to tighten the quality distribution in the last selection regime, it instead became more likely to make false positive and false negative decisions. This finding holds across a range of controls and aligns with the mechanism that the selectors focused too heavily on the team's past track record in a process that is more random than they assumed.
