Abstract

Machine behavior based on learning algorithms can be significantly influenced by exposure to data of differing qualities. Up to now, these qualities have been measured solely in technical terms, not in ethical ones, despite the significant role of training and annotation data in supervised machine learning. This is the first study to fill this gap by describing new dimensions of data quality for supervised machine learning applications. Based on the rationale that different social and psychological backgrounds of individuals correlate in practice with different modes of human–computer interaction, the paper describes from an ethical perspective how the varying qualities of behavioral data that individuals leave behind while using digital technologies have socially relevant ramifications for the development of machine learning applications. The specific objective of this study is to describe how training data can be selected according to ethical assessments of the behavior it originates from, establishing an innovative filter regime for transitioning from the big-data rationale n = all to a more selective way of compiling training sets in machine learning. The overarching aim of this research is to promote methods for achieving beneficial machine learning applications that could be widely useful for industry as well as academia.
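
To make the proposed filter regime concrete, the following sketch shows how such a selection step could be implemented in Python; the ethical_score function, its threshold, and the record structure are hypothetical illustrations, not part of the study itself:

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class BehavioralRecord:
        features: dict   # digitized behavioral traces left behind by a user
        label: str       # annotation used for supervised learning

    def filter_training_set(
        records: List[BehavioralRecord],
        ethical_score: Callable[[BehavioralRecord], float],
        threshold: float = 0.5,
    ) -> List[BehavioralRecord]:
        """Replace the big-data rationale 'n = all' with a selective regime:
        keep only records whose originating behavior passes an ethical
        assessment supplied by the caller."""
        return [r for r in records if ethical_score(r) >= threshold]

Under this reading, the ethical assessment is deliberately a pluggable criterion: which behaviors count as acceptable training stimuli remains a normative choice rather than a purely technical one.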

Highlights

  • When developing learning software, practitioners have additional ethical responsibilities beyond those that come with standard, non-learning software (Wolf et al., 2017)

  • With regard to supervised machine learning (artificial neural networks, support vector machines, naive Bayes classifiers, regression algorithms, etc.), the “social environments” in which machine behavior emerges can be understood, among other things, as the different training stimuli that shape a machine’s behavior

  • Training data fed into supervised machine learning applications reflect, where behavioral data are concerned, people’s behavior, so people’s behavior has an indirect influence on machine behavior (Barocas & Selbst, 2016)


Summary

Introduction

When developing learning software, practitioners have additional ethical responsibilities beyond those that come with standard, non-learning software (Wolf et al., 2017). Training data fed into supervised machine learning applications reflect, where behavioral data are concerned, people’s (e.g., discriminatory) behavior, so people’s behavior has an indirect influence on machine (discriminatory) behavior (Barocas & Selbst, 2016). This influence cannot be described as a direct relationship, that is, as an equivalence between people’s behavior and machine behavior. When technology ethicists talk about “moral machines” (Wallach & Allen, 2009) in the context of machine learning applications, one therefore has to ask about “moral people” and “moral people’s data”, to put it that way. These “moral machines” are the result of engineering or design choices; they depend on the selection of hyperparameters, on specific wirings of artificial neural networks, and the like. Today’s machine learning techniques are dependent on human participation. In many cases, they harness human behavior that is digitized by various tracking methods. An extensive infrastructure for “extracting” (Crawford, 2021, p. 15) valuable personal data or “capturing” human behavior in distributed networks via user-generated content, expressed or implicit relations between people, as well as behavioral traces (Olteanu et al., 2019) builds the bedrock for a computational capacity called “artificial intelligence” (Mühlhoff, 2019).
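
The indirect nature of this influence can be sketched in code: the same learning algorithm, exposed to differently curated behavioral data, acquires different behavior. The scenario, the data, and the flagged markers below are hypothetical illustrations, not material from the study:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: (qualification score, group membership); labels record
    # past human decisions, some of which were discriminatory.
    X = np.array([[0.9, 0], [0.8, 0], [0.9, 1], [0.8, 1], [0.3, 0], [0.2, 1]])
    y = np.array([1, 1, 0, 0, 0, 0])  # group 1 rejected despite equal scores
    flagged = np.array([False, False, True, True, False, False])

    naive = LogisticRegression().fit(X, y)                  # n = all
    curated = LogisticRegression().fit(X[~flagged], y[~flagged])

    applicant = np.array([[0.85, 1]])  # well-qualified member of group 1
    print(naive.predict(applicant), curated.predict(applicant))
    # The naive model tends to reproduce the past rejection pattern, while
    # the curated model is more likely to judge on qualification alone.

The point of the sketch is not the particular classifier but the causal chain: human behavior becomes training data, and the selection of that data, an engineering or design choice, shapes the resulting machine behavior.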

