Abstract

This paper investigates how item response models can be used to select efficient items for concept testing and to assess differential item functioning (DIF) between major and minor innovations. The results indicate that the six-item scale used in the online test performs unevenly across the performance continuum: surprisingly, the scale as a whole is not effective at identifying either the most promising or the least attractive concepts. At the single-item level, believability and importance provide the most information for identifying poor concepts, while purchase intention, problem solving, and uniqueness are most effective at selecting good concepts. In addition, all six items display large DIF between major and minor innovations, indicating that the items perform significantly differently in concept tests of the two innovation types; minor innovations are disadvantaged on average across all six items. Liking, importance, uniqueness, and believability discriminate better for major innovations, while the problem-solving item is more effective for minor innovations. This research provides an alternative to classical test theory for item analysis.
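The item-efficiency claims above rest on the item information function of an item response model: an item is efficient in the region of the latent continuum where its information peaks. As a minimal sketch, assuming a graded response model with purely illustrative parameters (the discrimination and threshold values below are hypothetical, not the paper's estimates), the following Python computes where on the concept-performance continuum an item is most informative.

```python
import numpy as np

def grm_item_information(theta, a, b):
    """Item information under a graded response model (Samejima).

    theta : array of latent trait values (concept performance)
    a     : item discrimination
    b     : ordered category thresholds (length m for m+1 categories)
    """
    theta = np.asarray(theta, dtype=float)
    # Cumulative probabilities P*_k of responding in category k or above,
    # with boundary conventions P*_0 = 1 and P*_{m+1} = 0.
    p_star = np.vstack([
        np.ones_like(theta),
        *[1.0 / (1.0 + np.exp(-a * (theta - bk))) for bk in b],
        np.zeros_like(theta),
    ])
    # Category probabilities P_k = P*_k - P*_{k+1}.
    p_cat = p_star[:-1] - p_star[1:]
    # Derivatives: d/dtheta P*_k = a * P*_k * (1 - P*_k).
    d = a * p_star * (1.0 - p_star)
    # Information: sum over categories of (d_k - d_{k+1})^2 / P_k.
    return np.sum((d[:-1] - d[1:]) ** 2 / np.clip(p_cat, 1e-12, None), axis=0)

# Hypothetical parameters for two items rated on a 5-point scale.
theta = np.linspace(-3, 3, 121)
info_low = grm_item_information(theta, a=2.0, b=[-2.0, -1.2, -0.4, 0.5])
info_high = grm_item_information(theta, a=1.8, b=[-0.5, 0.3, 1.1, 2.0])

# An item with low thresholds peaks at the low end of the continuum,
# so it is efficient for flagging poor concepts; an item with high
# thresholds peaks at the high end and is efficient for good concepts.
print("low-threshold item peaks at theta =", theta[np.argmax(info_low)])
print("high-threshold item peaks at theta =", theta[np.argmax(info_high)])
```

In the paper's terms, an item whose information concentrates at low theta is efficient for screening out poor concepts (the pattern reported for believability and importance), while one whose information concentrates at high theta is efficient for selecting good concepts (purchase intention, problem solving, uniqueness).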
