Abstract

This special issue of Information Systems Journal on Social Inclusion in the Information Systems Field was motivated by our desire to help the IS field develop a greater understanding of aspects of human diversity in relation to the development, deployment, management, use, and impact of information systems and technologies. Consistent with a focus on human diversity, we were also interested in methodological diversity. Hence, we encouraged the submission of both quantitative and qualitative papers. We also requested papers representing positivist, interpretive, and critical epistemologies. Finally, we were interested in the use and extension of existing theories as well as the development of new ones. Our intention was to highlight rigorous IS research and theorizing about social inclusion in the information systems field. This, in turn, would support the larger goal of producing research that contributes to a better understanding of the causes, manifestations, and impacts of social exclusion, and of subsequent interventions.

Information Systems Journal is an appropriate venue for the publication of social inclusion research. We three special issue editors have all held editorial roles at ISJ: Eileen Trauth has served as Editor-in-Chief, K.D. Joshi as Senior Editor, and Lynette Kvasny as Associate Editor. ISJ also has a history of publishing social inclusion research, both through individual papers (eg, Quesenberry & Trauth, 2012; Windeler & Riemenschneider, 2016) and through special issues, such as the Special Issue on Women and IT (Von Hellens, Trauth, & Fisher, 2012).

The process of producing this special issue began in 2015 with the call for papers. This was followed by a post-International Conference on Information Systems (ICIS) workshop in 2015 on conducting social inclusion research, offered by the Association for Information Systems SIG Social Inclusion.
Prospective authors were encouraged to participate in this workshop and to engage with the special issue editors. In July 2016, we received 21 manuscripts to review. First round decisions were made in February 2017. Following the peer review process, four manuscripts were selected for publication, representing a 19% acceptance rate. Final versions of these manuscripts were accepted in December 2017 and January 2018. We editors would like to thank the reviewers who gave generously of their time to review the manuscripts and provide developmental feedback. Without their efforts, this special issue would not have been possible.

The notions of exclusion and inclusion are currently in the zeitgeist of the information systems profession. But IS scholars have been working in this area for some time now. Indeed, this special issue builds upon and contributes to a substantial body of work (individual papers, special issues of journals, and conferences) that has preceded it. In this editorial, we draw upon that history to provide some context about the evolution and scope of social inclusion research at the time this special issue of Information Systems Journal is published. Our intention is to help the reader situate and better understand the work presented here. Additionally, to help prospective scholars interested in conducting such research, we offer some thoughts about new frontiers of social inclusion in IS research.

Social inclusion research in the information systems field began to be published at the end of the 20th century but came into its own in the 21st. In their introduction to the proceedings of the 2006 Conference of the International Federation for Information Processing (IFIP) Working Group 8.2, Trauth and Howcroft (2006, p. 3) address the question of why social inclusion was an important theme to explore in 2006.
Their overall answer was that it was time to expand the boundaries of IS research beyond organizational and managerial impact to include societal influences as well. Their specific answer was that the 21st century has provided ample evidence of societal disparities that could be addressed, in part, through the use of information and communication technologies. They go on to point out that the information society of our century has produced both positive and negative, intended and unintended consequences, which demonstrate that people in certain regions of the world, and parts of other populations, are underrepresented in and underserved by information technology. Clearly, this is still the case more than 10 years later.

Trauth (2017) traced the evolution of social inclusion in IS research from such conferences, as well as from early special issues of journals (eg, Adam, Howcroft, & Richardson, 2002) and the establishment of the Association for Information Systems Special Interest Group on Social Inclusion in 2009. The earliest social inclusion research tended to focus on issues related to the underrepresentation of women in the information systems profession. While this line of research has continued, the scope has widened to include examination of issues and barriers associated with other identity characteristics such as race, ethnicity, nationality, geography, age, sexual orientation, disability, and socio-economic status. As the volume of research has grown, so too have the venues in which this work is published, moving this research from the sidelines towards the mainstream of the IS profession. Top journals in the IS field, such as ISJ, now regularly publish papers about barriers to full participation in the information systems profession and about those who are underserved by it.
To date, social inclusion research has considered both information technology professionals and information technology users from the perspective of issues and barriers affecting individuals with certain identity characteristics, including gender, sexual orientation, age, disability, race, ethnicity, nationality, geography, and socio-economic status. But where might social inclusion research go in the future? Where should it go? Below, we consider some of the new frontiers of social inclusion research.

Two distinct but intertwined themes pervade the broad social inclusion research on information technology (IT) and work (Joshi, Trauth, Kvasny, Morgan, & Payton, 2017; Trauth et al., 2010). One theme focuses on the disadvantaged or underserved individuals, groups, and communities that are systematically barred from access to opportunities and resources in IT (eg, Joshi & Kuhn, 2011; Kuhn & Joshi, 2009; Trauth, Quesenberry, & Huang, 2009). The underlying premise of this theme is that lowering the entry barriers (eg, correcting skill deficiencies, amplifying self-efficacy, providing access to more funds, and providing better mentoring) allows the excluded to be included by simply crossing the "boundaries" (Trauth & Howcroft, 2006). The second theme focuses on existing structures (digital, economic, social, political, cultural, and legal) that promote inequality (eg, Deng, Joshi, & Galliers, 2016; Kvasny & Keil, 2006; Kvasny & Trauth, 2003). In this theme, the focus is on the process of being "shut out" from these structures (Walker & Walker, 1997), which are essential to success in digital workspaces. This theme also emphasizes the role of those who are engaged in exclusion.
One new frontier of social inclusion research that falls under the second theme is the new digital work environment, such as the digital platforms for crowdsourcing (hereafter referred to as CS platforms), where work unfolds in (eg, MTurk) or through (eg, Uber) IT. We posit that these digital structures are ripe for studying such issues from a structural perspective, ie, the second theme. Specifically, in order to safeguard against the biases codified into the new digital work structures that are being designed, built, and used as a foundation for the digital economy, it is critical that we make them the subject of our investigations. These CS platforms have been shown to simultaneously empower and marginalize workers (Deng & Joshi, 2013; Deng, Joshi, et al., 2016). On one hand, CS provides a platform that empowers workers to be entrepreneurial; on the other hand, these digital structures create a sweatshop-like work environment in which workers complete fragmented tasks for minimal pay (Deng, Joshi, et al., 2016; Kittur et al., 2013). If we want to build and sustain these emerging digital structures, touted as wellsprings of entrepreneurial creativity, as socially inclusive workspaces, we also need to be vigilant about their institutional practices and societal impact. Arguably, we IS scholars have a moral obligation to proactively remedy the biases found in these digital platforms before they become instruments of injustice. Designing equitable structures can help to ameliorate social exclusion. Hence, we call on IS researchers to critically investigate CS platforms to systematically reveal biases coded into their designs that promote exclusionary practices and prevent equitable work opportunities for all. It is not enough merely to uncover biases; IS scholars also need to propose new and creative design solutions to remedy them.
It could be argued that, in the developed world at least, the traditional notion of the digital divide is no longer a significant barrier, given ubiquitous high-speed internet and open access to jobs posted on CS platforms. However, such access alone does not bridge the "new" digital divide (Deng, Galliers, & Joshi, 2016; Deng & Joshi, 2013). Even though the technology provides open, easy, and free access to CS work environments, the terms of engagement are controlled by the digital platform designers/owners and the employers who hire the on-demand workforce. As a result, CS platform designs privilege the platform owners, who have the power to control how the digital work environments (such as the sourcing models, compensation models, and work policies) are built and operated. Such power asymmetry provides opportunities for abuse (Deng, Joshi, et al., 2016). We call for research that critically examines current CS platform designs and proposes new designs that are sensitive to values that foster social inclusion and deter exclusion.

In addition to supporting power asymmetries between owners and workers, CS platform designs can also harbour more insidious forms of exclusion that are literally encoded in the algorithms themselves. The use of algorithmic decision-making tools by public and private organizations has grown significantly and has sparked deep concern that such automated and opaque choices may produce discriminatory outcomes (Zarsky, 2015). Algorithmic bias occurs when machine learning models reproduce the intentional and unconscious biases of the humans who decide what data to collect, which data to use in an algorithm, and how the algorithm is to use that data. Data inequality often results from human biases in measurement, or from other past wrongs, that led to the overrepresentation of some forms of negative data about minorities in the data sets (O'Neil, 2016; Zarsky, 2015).
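A minimal sketch may make this mechanism concrete (the group names, rates, and scoring rule below are all invented for illustration). Suppose two groups behave identically, but negative outcomes for one group were recorded twice as often because of biased measurement; any score derived from those records inherits the bias:

```python
# Hypothetical illustration: a naive "risk score" computed from historical
# records in which one group's negative outcomes were over-recorded.
import random

random.seed(0)

def make_history(group, n, observed_negative_rate):
    # Both groups have the same true behaviour; only the *recorded*
    # negative rate differs, reflecting biased measurement practices.
    return [(group, random.random() < observed_negative_rate) for _ in range(n)]

# Group B's negatives were recorded twice as often as group A's,
# even though the underlying behaviour is assumed identical.
history = make_history("A", 1000, 0.10) + make_history("B", 1000, 0.20)

def risk_score(group):
    # Score = the recorded negative rate of the person's group.
    records = [neg for g, neg in history if g == group]
    return sum(records) / len(records)

print(risk_score("A"))  # close to 0.10
print(risk_score("B"))  # close to 0.20: the data's bias becomes the score's bias
```

Nothing in the scoring rule mentions group membership with any discriminatory intent; the disparity is inherited entirely from the skewed records, which is the pattern O'Neil (2016) and Zarsky (2015) describe.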
In addition to data and processing errors, algorithmic bias can also arise from an algorithm's keen ability to understand and predict human conduct (Friedman et al., 1996; Zarsky, 2015). As algorithms play an increasingly important role in key domains of our lives, so grows the potential for substantial harm, especially for women, racial and ethnic minorities, low-income communities, religious minorities, and other vulnerable populations. Even if these algorithms are not designed with the intent to discriminate, they can reproduce patterns of social exclusion and discrimination.

Kilpatrick (2016) describes a number of instances of algorithmic unfairness and outright discrimination reported in the popular press and scholarly literature. For instance, algorithms that serve up advertisements can be inadvertently biased in their use of end users' demographic characteristics. Sweeney (2013) shows that advertisements for a background check service were more likely to be displayed after a search for names traditionally associated with African Americans. Lambrecht and Tucker (2017) conducted a field test of a social media ad for STEM jobs. The ad was designed with the explicit intention of being gender-neutral in its delivery. However, women were far less likely to be shown the ad than men because the delivery algorithm was optimized for cost effectiveness: it cost more to display ads to young women, so the algorithm delivered ads in a discriminatory way. A 2015 study at Carnegie Mellon University found that Google showed ads for high-paying jobs much more often to men than to women (Carpenter, 2015).

Future research to identify sources of algorithmic bias is needed. Moreover, because algorithmic decision-making processes are susceptible to being examined, learned, and gamed, regulatory mechanisms are critically important for monitoring bias (Zarsky, 2015). A few approaches have been proposed.
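The delivery mechanism behind the Lambrecht and Tucker finding can be sketched in a few lines (the prices, budget, and audience sizes below are invented for illustration, not taken from the study). A rule that simply minimises cost per impression, with no reference to gender in its objective, still starves the group that is more expensive to reach:

```python
# Hypothetical sketch of cost-optimized ad delivery.
def deliver(budget, price, audience):
    # price[g]: cost per impression for group g (invented figures).
    # audience[g]: maximum impressions available for group g.
    impressions = {g: 0 for g in price}
    for g in sorted(price, key=price.get):          # cheapest group first
        n = min(audience[g], int(budget // price[g]))
        impressions[g] = n
        budget -= n * price[g]
    return impressions

# Assume young women are costlier to reach because other advertisers
# bid more for their attention (the effect reported in the study).
price = {"men": 0.50, "women": 1.00}
audience = {"men": 150, "women": 150}
print(deliver(100.0, price, audience))  # men receive far more impressions
```

Here gender never appears in the optimisation objective; the disparity enters entirely through the market price of each group's attention, which is why intent-neutral optimisation can still produce discriminatory delivery.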
There have been calls by policy makers in New York, for instance, to increase transparency by making public the data and machine learning models in algorithmic decision-making systems used by government bodies. Other states, including Wisconsin and Texas, will require "warning labels" about the accuracy of crime prediction software (Ip, 2017). However, it will be impossible for humans to oversee every decision an algorithm makes. With deep neural networks, the software teaches itself based on correlations that it finds in large training datasets. In this way, the software can transcend human understanding, and we may not be able to determine how it reaches its decisions (Ip, 2017). Greater transparency may make technology companies more accountable while also leaving them vulnerable to hacking (Ip, 2017). A major research question that emerges is how to remove these biases from our algorithms when the same biases exist in our world.

The four papers presented in this special issue do not represent the entirety of social inclusion research. Rather, they represent a sample of high-quality social inclusion research at this moment in time. These papers focus on two identity characteristics related to social inclusion: gender and age. They not only tell us stories about particular aspects of social inclusion; they also serve as exemplars of high-quality information systems research. They represent a range of theories, epistemologies, and methodologies that can be used in social inclusion research. Hence, future research on social inclusion in IS can build on both the content and the research approaches taken in these papers.

The paper by Armstrong, Riemenschneider, and Giddens (2018) updates Ahuja's existing model to add further nuance to the barriers that women face in the information technology (IT) profession.
Using qualitative data from three organizations, they show that perceptions of the structural and social factors affecting women's advancement and persistence in IT careers have changed over time. Social and structural factors influence women's advancement and persistence in IT, and, in contrast to Ahuja's propositions, the influence of social factors on advancement appears to be increasing. Specifically, they reveal that both social expectations (eg, due to gender differences) and institutional structures (eg, organizational and job structures) influence women's career advancement and persistence in IT. Work-family conflict (eg, family responsibilities and work schedule flexibility) and a lack of informal networks (eg, the presence of good-old-boy networks) were associated with career advancement, while occupational culture shaped women's persistence in IT careers.

Annabi and Lebovitz (2018) also consider social inclusion from the perspective of women in the IT field. However, their focus is not the analysis of gender issues but rather a critical examination of gender interventions, pursued through comparative case studies of gender diversity interventions in nine organizations. Their resulting framework integrates intervention characteristics with the barriers experienced by women in IT careers and the coping methods they employ. The authors conclude by offering propositions based on this theoretical framework that can serve as a guide for further research into gender diversity and inclusion interventions. This paper makes an important contribution to an under-researched aspect of social inclusion: the identification and evaluation of interventions.

Turning to social inclusion issues based around age, Fox and Connolly (2018) consider the use of IT by the elderly, specifically mobile health (m-health) technologies. These include the use of mobile applications, wearable devices, and health record systems to improve health outcomes.
However, they point out that the promise of m-health is limited by resistance on the part of some older adults, which is resulting in an age-based digital divide. The authors used protection motivation theory and social cognitive theory to uncover the factors leading to this resistance. Through a combination of quantitative (survey) and qualitative (interview) methods, they learned that resistance among older adults stems from mistrust, high risk perceptions, and a strong desire for privacy in relation to m-health use. Their recommendations for narrowing the m-health digital divide include inclusive design, improving self-efficacy, developing privacy literacy, and building trust.

The final paper in this special issue, by Iivari, Kinnula, Molin-Juustila, and Kuure (2018), examines digital technology projects that educate and empower children. The authors contend that IS research should take children, along with their digital technology skills and competencies, into the focus of its study, as children will form the future IS workforce. Using nexus analysis as a theoretical and methodological approach, they show how children intentionally engage in behaviours and discourses that foster both inclusion and exclusion. Some students eagerly engaged with the technology projects, while others resisted and even prevented other children from taking part. These exclusions are created by a complex mix of historical, structural, individual, and interactive factors and involve teachers, children, and parents.
