Abstract

Artificial intelligence (AI) algorithmic tools that analyze and evaluate recorded meeting data may provide many new opportunities for employees, teams, and organizations. Yet these new and emerging AI tools raise a variety of issues related to privacy, psychological safety, and control. Based on in-depth interviews with 50 American, Chinese, and German employees, this research identified five key tensions related to algorithmic analysis of recorded meetings: employee control of data versus management control of data, privacy versus transparency, reduced psychological safety versus enhanced psychological safety, learning versus evaluation, and trust in AI versus trust in people. More broadly, these tensions reflect two dimensions that inform organizational policymaking and guidelines: safety versus risk and employee control versus management control. Based on a quadrant configuration of these dimensions, we propose four approaches to managing algorithmic applications to recorded meeting data: the surveillance, benevolent control, meritocratic, and social contract approaches. We suggest the social contract approach facilitates the most robust dialog about the application of algorithmic tools to recorded meeting data, potentially leading to greater employee control and sense of safety.
