Abstract

Machine learning (ML) has become a crucial component in software products, either as part of the user experience or used internally by software teams. Prior studies have explored how ML is affecting development team roles beyond data scientists, including user experience designers, program managers, developers, and operations engineers. However, there has been little investigation of how team members in different roles communicate about ML, in particular about the quality of models. We use the general term quality to look beyond technical issues of model evaluation, such as accuracy and overfitting, to any issue affecting whether a model is suitable for use, including ethical, engineering, operations, and legal considerations. What challenges do teams face in discussing the quality of ML models? What work practices mitigate those challenges? To address these questions, we conducted a mixed-methods study at a large software company, first interviewing 15 employees in a variety of roles, then surveying 168 employees to broaden our understanding. We found several challenges, including a mismatch between user-focused and model-focused notions of performance, misunderstandings about the capabilities and limitations of evolving ML technology, and difficulties in understanding concerns beyond one's own role. We found several mitigation strategies, including the use of demos during discussions to keep the team customer-focused.