Abstract

Online environments have the potential to disrupt traditional orderings of expertise and to allow a wider audience to engage as experts in the process of knowledge creation. Many online environments use crowdsourced metric-based systems, such as upvotes and reputation scores, to help identify experts. While these systems provide opportunities to recognize expertise in new ways, they continue to reproduce and reconfigure existing issues around what is considered expertise and who is considered an expert. To explore these issues, we conduct an ethnographic engagement with Stack Overflow, a community knowledge-sharing platform for programmers and coders. Our empirical material comprises 14 interviews with Stack Overflow users and documentary material collected from the platform. We find that while platform owners often present their systems as arbiters of expertise, crowdsourced metric-based systems like Stack Overflow's may not locate subject-matter experts but rather measure community trust and engagement. Reward mechanics may also tempt users to play the game of increasing their metrics rather than developing expertise. Furthermore, our findings suggest that certain conditions in online environments may actively prevent members of under-represented groups from being recognized as experts. This poses an ethical challenge to the premise that online platforms may democratize the sharing of knowledge and the recognition of expertise. Our findings suggest that although users may be aware of these issues, metrics are so intertwined with the functioning of the platforms they use that users may nonetheless accept those metrics as relevant measures of expertise.