Abstract

Theory is a crucial aspect of the information systems (IS) discipline. Authors draw from articles on how to develop theory and from the theories themselves to anchor knowledge contributions. Editors and reviewers expect to see novel theoretical insights in conjunction with empirical rigour and sophistication (cf. Hardin et al., 2022). The thinking of PhD students is shaped by discussions on the importance of theory through formal coursework and research seminars, as well as socialisation with peers, supervisors and senior scholars in the field. Journals often solicit submissions to special issues that champion particular kinds of theory or theories on specific topics, for example indigenous theory (Davison, 2021). Advice is given to authors on the different ways in which they can theorise (Hassan et al., 2022; Hong et al., 2014; Sandberg & Alvesson, 2021; Weick, 1989). The peer review process emphasises the importance of theory and tends to reject research articles that lack a substantial theoretical contribution. However, assessing theoretical contributions is often a challenging task. IS scholars research a variety of topics with a pluralistic set of methods and epistemological approaches (Tarafdar et al., 2022), which has several implications for our engagement with theory. Traditionally, reference disciplines have informed the diversity of topics IS scholars investigate. The IS field is at a point in its disciplinary evolution where we are seeing an even greater ambit of the application and use of IS, which fosters new topics being investigated from different epistemological and methodological viewpoints as well as new types of contributions (Tarafdar & Davison, 2018). Consequently, IS theories take on different roles for different types of epistemologies and methods, and not understanding or respecting these differences can lead to unreasonable or unbalanced evaluation of papers.
In addition to the diversity of theoretical approaches, we also perceive differences in the nature of engagement with theory. For example, papers that analyse large amounts of secondary data (textual and numerical, structured and unstructured) often focus on complex empirical techniques to analyse such datasets, while engaging minimally with theory (Miranda et al., 2022). We believe that sophisticated data analysis does not relieve IS researchers from the obligation to make a theoretical contribution. In this context, we believe that we should take heed of the advice of Gurbaxani and Mendelson (1994), who warned, almost 30 years ago, about ‘the risks of ignoring the guidance of theory’ and recommended that IS researchers refrain from tinkering with ‘atheoretical “black box” extrapolation techniques’ (p. 180). In an earlier editorial in this journal, Davison and Tarafdar (2018) noted how baselines for what is an acceptable contribution in a discipline shift over time. However, it is our view that a robust theoretical contribution should be (and is) a consistent expectation, even if the nature of the theoretical contribution varies. Journals play a key role in establishing baselines, and in that spirit, recent and emerging intellectual trends in IS and other disciplines have implications for how we apply and develop theory in IS and point to an evolving and multi-focused role of theory in IS research. Therefore, in this editorial, we revisit and explicate why theory is important at the Information Systems Journal (ISJ) in these emerging scenarios. Seven of the ISJ's regular senior editors (Andrew Hardin, Angsana Techatassanasoontorn, Antonio Díaz Andrade, Gerhard Schwabe, Monideepa Tarafdar, Paul Benjamin Lowry and Sutirtha Chatterjee) join the editor-in-chief (Robert Davison) to craft a position statement regarding the ISJ's view on theory. It is applicable, with sensitivity, to the empirical research articles that we consider for publication.
Specifically, we provide a set of guidelines to help ISJ authors consider the role of theory in crafting papers of different genres and different epistemological and methodological approaches. Consistent with the journal's cultural values (Davison & Tarafdar, 2022), we lay out a pluralistic and inclusive view of theory and theoretical contributions. The guidelines are broadly indicative of what we believe are key points that authors should consider. We encourage authors submitting their research to the ISJ to consider these guidelines carefully, as we expect that reviewers will be aware of them, and senior and associate editors may also consider them as they craft their reports. However, these guidelines are not meant to serve as a comprehensive checklist, and least of all as a template for rejection. Theory lies at the heart of a scholarly discipline, supporting its scholarly relevance, identity and legitimacy. Without theory and the associated cumulative contribution to knowledge, the viability of a discipline is jeopardised because its scholarly distinctiveness is lost. As Suddaby (2014) puts it, ‘To cede theory means to give up legitimacy (of knowledge)’ (p. 409). Similarly, Van de Ven (1989, p. 486) states that ‘Good theory is practical precisely because it advances knowledge in a scientific discipline, guides research toward crucial questions, and enlightens the profession’. Weick (1989) emphasises that a good theory should be plausible and correspondent with reality. Thus, theory helps us ‘organise our thoughts, generate coherent explanations and improve our predictions’ (Hambrick, 2007, p. 1346). At the same time, there is recognition that theory can be performative (Burton-Jones et al., 2021), that is, theories influence practice as well as other theories. 
Because of this, we have the obligation to avoid making ‘excessive truth claims based on extreme assumptions and partial analysis of complex phenomena’ that can result in theories that mislead researchers and practitioners (Ghoshal, 2005, p. 87). Theories are employed to make sense of phenomena and are useful if they guide and structure both the research and the telling of the research story. In research designs that utilise a deductive and positivist approach with respect to data, theory guides the development of relationships to be tested in the form of hypotheses, analytical models, and so on. Campbell's (1990) definition of theory fits well under a deductive and positivist epistemological approach: ‘a collection of assertions, both verbal and symbolic, that identifies what variables are important and for what reasons, specifies how they are interrelated and why, and identifies the conditions under which they should be related or not’ (p. 65). In a deductive approach, theory plays a distinctive role in conceptualising concepts and constructs, thus defining the empirical benchmarks of what is measured and what data is collected. For inductive and interpretive research designs, the emphasis is on the process of generating theories or theoretical understanding (Strübing, 2007). Theories constitute ‘temporarily acceptable generalisations about the influences on and consequent variations in human action’ (Kearney, 2007, p. 148). Yet, existing theory can play the role of sensitising the data collection endeavour (i.e., guide the researcher toward what data to collect) or be applied toward sense-making and analysis of the data (i.e., help the researcher in anchoring the patterns and relationships emerging from the data). In both cases, theory gives meaning to the data (Illari et al., 2011). However, not understanding the respective roles of theory is likely to result in incorrect evaluation and review of the theoretical contribution of manuscripts.
We illustrate with two examples. Consider research that collects primary data expressly for the purpose of the project (e.g., a theory-driven survey) versus that which utilises secondary data not collected specifically for the research (e.g., data scraped from user activity on social media websites or collected by organisations in anticipation of future functional value it may bring). The latter is not collected according to the rigorous standards essential to the conceptualisation and operationalisation of constructs in a theorising process and is thus subject to issues of incomplete observations and/or noisy data (Stieglitz et al., 2018). Consequently, theoretical concepts, constructs and propositions from such data may not be developed based on the theory that specifically informs the data collection; rather, in many cases, theoretical engagement is somewhat eschewed, creating a more serious problem whereby such data is replete with issues such as endogeneity bias (Wooldridge, 2010). Quantitative research designs based on such data are thus subject to a slew of robustness tests to address the natural endogeneity bias that results from (1) omitted variables (missing portions of the nomological network of constructs), (2) measurement error, (3) simultaneity, and (4) selection bias (Wooldridge, 2010; Zaefarian et al., 2017). However, not understanding the role of theory, and how it can dramatically reduce endogeneity bias, can lead reviewers and editors to unnecessarily and incorrectly ask authors using the first type of research design to conduct robustness checks only appropriate for the second type. Such requests, and any attempts to address them, frequently result in frustration among authors, reviewers and editors. Relatedly, consider research that seeks to generate insights from secondary datasets through qualitative or computational analysis, for example ML-based pattern generation (Miranda et al., 2022).
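The omitted-variable facet of endogeneity bias mentioned above can be made concrete with a small simulation. This is our own illustration, not drawn from the editorial; the variable names and coefficients are invented for the example. When a confounder that drives both the regressor and the outcome is left unmeasured (as can happen with secondary data collected without reference to theory), ordinary least squares overstates the effect of the regressor; when the confounder is conceptualised and measured up front, controlling for it (here via Frisch–Waugh residualisation) recovers the true effect.

```python
# Hypothetical illustration of omitted-variable endogeneity bias.
# True model: y = 1.0*x + 2.0*z + noise, where the confounder z also drives x.
# Regressing y on x alone inflates the estimated effect of x.
import random
import statistics as st

random.seed(42)
n = 20_000
z = [random.gauss(0, 1) for _ in range(n)]            # confounder (unmeasured in the naive design)
x = [0.8 * zi + random.gauss(0, 1) for zi in z]       # regressor, correlated with z
y = [1.0 * xi + 2.0 * zi + random.gauss(0, 1)         # true effect of x on y is 1.0
     for xi, zi in zip(x, z)]

def slope(u, v):
    """OLS slope of v regressed on u, i.e. cov(u, v) / var(u)."""
    mu, mv = st.fmean(u), st.fmean(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = sum((a - mu) ** 2 for a in u)
    return num / den

def residuals(u, v):
    """Residuals of v after regressing v on u (with intercept)."""
    b = slope(u, v)
    a = st.fmean(v) - b * st.fmean(u)
    return [vi - a - b * ui for ui, vi in zip(u, v)]

naive = slope(x, y)                                    # biased upward, ~2 instead of 1
controlled = slope(residuals(z, x), residuals(z, y))   # Frisch-Waugh: close to 1

print(f"naive OLS slope (z omitted): {naive:.2f}")
print(f"slope controlling for z:     {controlled:.2f}")
```

In a theory-driven primary-data design, a construct playing the role of z would have been specified and measured before data collection, which is one reason the ex post robustness battery described above is less pertinent for such designs.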
Our ability to analyse vast amounts of data in nearly all forms has spotlighted this second kind of research. The role of theory in such research is ideally to serve as a guiding light to understand the generated concepts and relationships and assess their novelty. However, the absence of understanding of this role of theory can lead to research designs that jettison theory altogether and focus on finding patterns in an exploratory way without building theoretical understanding in parallel with data analysis. Rigorous and essential conceptual understanding is not generated in these instances. We recognise that there are different types of theory (Gregor, 2006), different forms of theorising (Cornelissen et al., 2021; Sandberg & Alvesson, 2021) and different objects of theorising (Hassan et al., 2022; Rivard, 2014). However, for IS research we submit that theoretical engagement should follow the sociotechnical tradition. IS phenomena arise at the confluence of social and technical factors. Our discipline has, since its early days, described this fused approach as the sociotechnical approach (Mumford, 2006), one that has hues that can be described along a continuum (Sarker et al., 2019). Although the extent to which each component (the technical and the social) is present in a phenomenon varies qualitatively, each is present. The cumulative IS literature points to several typical and desirable characteristics of IS-centric, theoretical understanding. Such understanding is developed around the traditional IT artefact, and the greater IS artefact (Chatterjee et al., 2021; Lowry et al., 2020; Orlikowski & Iacono, 2001), and spans phenomena relating to their design, development and use. The theoretical insight includes both a social component (i.e., what happens and why when the artefact is designed, developed or used) and a technical component (i.e., the nature of the explicit influence of the artefact characteristics). 
IS scholars develop and advance theoretical understanding of IS phenomena through novel constructs, associations, processes, and design artefacts that adhere to these characteristics. Moreover, IS-centric theoretical understanding is critical to the transformation of social theories because of such advances. At the ISJ, we expect authors to explicitly articulate theoretical insights that offer novel interpretations or challenge and problematise conventional understanding of the phenomenon under investigation (Sandberg & Alvesson, 2021), broadly adhering to the general criteria articulated above. In addition, given the wide range of phenomena, problems, methods, topics, data types and contexts in IS scholarship, we lay out practical guidelines for developing theoretical knowledge, based specifically on the particular focus of research. The guidelines are intended to help prospective ISJ authors frame and articulate the theoretical treatment of their work; they can also assist editors and reviewers in evaluating the theoretical merits of these works. Theorisation should precede primary data collection. It should include the conceptual development of anticipated relationships among concepts (i.e., hypotheses or analytical models) and the development, or adaptation from the literature, of appropriate operationalisations for measuring the concepts. All this involves logical and nomological argumentation based on engagement with theories and general engaged scholarship with industry and other researchers, such that it is not primarily driven by mere gap-spotting in literature reviews. Context-driven theoretical arguments are a crucial aspect of developing new theoretical insights or extending existing ones in novel directions.
To give a simplified example, if the goal is to test an existing theory with a new population of IS users, the focus of theorisation should be to hypothesise new relationships (e.g., moderated and mediated relationships) based on the new population, which extend or alter the theory's predicted relationships rather than replicate them (Hong et al., 2014). Theoretical engagement after data is collected and during analysis has pitfalls in that it may lead to constructs and relationships that are not theoretically defensible or novel, even if significant statistical effects exist. In such a case, if published, there is the risk that readers will interpret the results as if the data were collected in a theoretically appropriate way and accept this imprecisely defined data as accurately representing the original conceptual and theoretical definitions in the literature, potentially propagating erroneous constructs through domino effects. Deductively analysing secondary data that is not collected specifically for the purpose of the research and generated without reference to theoretical concepts or established measurement criteria can result in serious data-related issues such as endogeneity, which in turn require the application of rigorous ex post facto robustness checks. As mentioned previously, such tests are not salient in the case of primary data collected based on established theory and measurement. Overall, the goal of this approach toward research is to test theoretically anticipated relationships among concepts. Irrespective of the empirical data, the concepts and relationships should be theoretically novel, and theoretically and empirically defensible. Inductive inference involves deriving a theoretically cohesive abstraction from particular observations. This bottom-up thinking should result in new knowledge expressed in the form of novel theoretical insights, which could be (but not necessarily so) a new theory. 
If the analysis is done under the grounded theory tradition, the theory-building exercise encompasses iterating between data collection and data analysis, and the constant comparison between emerging codes and new instances of data (Urquhart et al., 2010). Nonetheless, the theory still plays a role in this inductive process; not to force data into existing theories, but to theoretically sensitise the researcher in the analysis and conceptualisation of emergent findings. Theory also plays a role in guiding the researcher in choosing the subject of study. Dey (1993) forcefully illustrates the ‘difference between an open mind and empty head’ (p. 63) to highlight that knowledge of extant theories will prevent the researcher from engaging in futile research endeavours. The emergence of sophisticated data analytics techniques opens opportunities for inductive theorising from large datasets. In such cases, it is not just a matter of reporting patterns or associations found in large datasets. As the authors of a recent MIS Quarterly editorial on computationally intensive theory construction argue, ‘Computationally surfaced patterns are not themselves theories … Nor are new patterns that lead to new insight necessarily theoretical contributions’ (Miranda et al., 2022, p. vi). Although we do not expect that every research article will result in the creation of a new theory, we do expect a carefully articulated theoretical contribution toward understanding or explaining the phenomena under investigation. Merely hinting at such theoretical contributions is not enough. Overall, articles following this approach should provide the building blocks of theoretical understanding, such as novel concepts, measurement of concepts, relationships, propositional logic, taxonomies, typologies, metaphors or novel and provocative questions for future research (Hassan et al., 2022). The theoretical understanding should surface the idiosyncrasies and distinct aspects of the context. 
Unlike the deductive approach that follows a priori logic and the inductive approach that follows a particularising logic, abductive inference follows a posteriori logic in which a cause is inferred from its effect. Abductive inference involves a continuing back-and-forth movement between observations and theory (Díaz Andrade, 2023); the abductive researcher is constantly recalibrating their theoretical frames of reference (Sætre & Van de Ven, 2021) to find the best explanation that ‘fits the surprising facts’ (Reichertz, 2007, p. 221, emphasis in the original). A typical instance of an abductive approach may involve an unusual empirical observation, one that does not fit into accepted theoretical frameworks, prompting a study, subsequent data collection and analysis. Ngwenyama and Nielsen's (2014) study offers an exemplary case. An unusual observation prompted their study: the successful implementation of a software process improvement initiative under conditions that made success a highly unlikely outcome. By comparing the data collected over 23 months, the authors developed a theoretical explanation of why purely technical-rational approaches alone do not explain the success of technology implementation projects. An abductive approach requires a solid understanding of relevant theoretical frames of reference and an inquisitive mind to look at the (puzzling) IS phenomenon with fresh eyes. Such a mindset fosters novel theoretical understandings while resolving the tension between the discovery of the unusual observation and the methodological justification for its analysis. Design research studies the construction principles of technological artefacts. The scientific logic of design research is thus in the tradition of engineering research. While causal theory states ‘a causes b’, engineering research says ‘if you want to achieve b, it is useful to try a’ (a = construction principles, b = effects) (Kornwachs, 2012, p. 103).
This implies three critical parts of good design research: (a) a relevant problem statement (making it worthwhile to achieve b), (b) a helpful solution statement (describing the construction principles), and (c) the demonstration of usefulness, typically through a sound evaluation (Venable et al., 2016). To qualify as research, design research needs to demonstrate novelty in at least one of these three parts, and it needs to abstract the insights from ‘instance problems’ and ‘instance solutions’ to generic problems and solutions (Lee et al., 2015). Most design research is framed as being ‘problem-driven’, that is, researchers first describe a problem and then propose an artefact that solves this problem. However, in reality, most design research follows the hype of computer science and business interests. Researchers ask: ‘What can I do with technology X?’, X being, for example, machine learning, blockchain or cloud computing. Authors can reformulate such research as problem-driven (as this typically leads to a more interesting story) or make the solution-driven nature of their research explicit (Briggs et al., 2019). IS design research develops sociotechnical solutions, that is, statements on technical artefacts, their use, and their context of use. In the tradition of engineering research, design research can contain a novel technical artefact. It can also focus on the use and context of use of technical artefacts, resulting in process models and other frameworks. Sociotechnical solutions mature from rough proof-of-concept prototypes to proof-of-value systems and proof-of-use systems (Nunamaker et al., 2015). Behavioural and other types of theory can and should be used in design research. For example, a good problem statement revisits behavioural research and makes a theoretical argument about the underlying causes of the surface problems. Suppose B can directly be matched to a design feature of an artefact.
In that case, theories can also be used directly to inform good design (Briggs, 2006) and advance behavioural theorising. In more complex settings, B can only be implemented as an assemblage of features based on a set of construction principles. Computer science and prior IS design research should be used in a subsection of related work to inform the designer about the state of the art of building such a solution. However, it is also possible to reframe behavioural research into design research. This is particularly easy if the study focuses on use and context of use: if an established behavioural theory states ‘A causes B’, it is trivial to propose ‘Use A to achieve B’. Such design research is not only lacking in novelty but also frequently contains construction principles that are too simple or too abstract to help create a solution. For conceptual articles focused on developing theories or theoretical insight, the role of the IS artefact and the associated sociotechnical aspects should be clearly identified. Like inductive empirical articles, they should develop novel concepts, design-related artefacts or principles, measurement of concepts, relationships, propositional logic, taxonomies, typologies and metaphors for a specific phenomenon of interest. We suggest that the theoretical logic that is developed be cohesive around a set of connected concepts and visionary in its ability to inspire future studies. For articles focusing on literature reviews, conceptual frameworks and research agendas, theory or theoretical insight can typically serve as an organising framework. For research opinion articles, theory or theoretical insights are not necessary, but they can form the informing logic for argumentation development to stimulate new theoretical development.
For research methods articles, there is no explicit requirement for a theoretical contribution, though one might be made according to the guidelines above depending on the epistemological assumptions of the author and the empirical evidence that supports the methods of development. Our motivation for writing this editorial is to share with the scholarly community our articulation of the different roles that theory plays in different approaches to IS research. The IS scholarly community is intellectually vibrant and diverse. It stands to reason that different approaches to research engage differently with theory. While the ISJ welcomes submissions from different approaches (whether deductive, inductive, abductive, design or conceptual), we expect prospective authors to explicitly engage with theory as appropriate to the approach, so that fellow IS scholars can better appreciate different types of novel theoretical insights. What we do not want to see is a groundswell that legitimises theory-impoverished contributions. In this issue of the ISJ, we present eight papers. In the first paper, Pandey and Zheng (2023) argue that existing research on technology affordance often overlooks the influence of social structures on human-technology interactions. They draw upon Giddens' concept of social positioning, which refers to the ways in which individuals' social identities and roles shape their experiences, to examine the adoption of mHealth devices by community health workers (CHWs) in India. The case study shows that mHealth technology can have differential socialised affordances that are contingent on the pluralistic social positionings of CHWs in their respective structural complexes. Socialised affordance becomes the junction where technology meets the structural properties.
The social positioning lens also magnifies the delicate interconnections between social actors and social institutions and links the broader macro-structural conditions with the micro-level enactment of technology affordances through human actors at the ground level. The study generates theoretical implications for research on technological affordances by integrating the broader social arrangements and power relations in the analysis of digital practice and digital work. In the second paper, Melville et al. (2023) are motivated by the rapid emergence of new machine capabilities such as ChatGPT, in what many are referring to as the fourth industrial revolution (4IR). Their scoping literature review reveals that the 4IR literature has a narrow framing of such capabilities as technologies that advance business objectives. In response, their application of sociotechnical theory expands this framing by developing four sets of affordances, or affordance assemblages, that describe the core action possibilities of machines that emulate human capabilities. The four assemblages are related to human cognition (expansive decision-making and creativity automation) and human communication (relationship with humans and intermachine teaming). Two in-depth examples in the context of human-machine co-working and AI safety regulations illustrate how action possibilities leveraging 4IR machine capabilities are co-created with humans, may cause physical and mental damage to humans, and may benefit humans and organisations, sometimes simultaneously. Shifting to a sociotechnical lexicon of 4IR affordance assemblages may generate new research questions that value individual humanness while advancing societal and organisational objectives. In the third paper, Alam and Sun (2023) explore how system-use practices influence participants' sustained participation, which is key to crowdsourcing success. Participants are frequently demotivated by technical difficulties and the incorrect use of crowdsourcing systems.
They develop a process model of sustained motivation to demonstrate the role of system-use practices in transforming participants' motivation from initiation to progression to sustention through the lens of technology-in-practice. Using an in-depth case study of a large-scale ongoing crowdsourcing project, their findings suggest that crowdsourcing participants' motivation is shaped by an evolving combination of three basic components (i.e., contextual condition, outcome, and action intensity) and mediated by two types of system-use practice (i.e., passive and active). Further, passive-use practices facilitate sustaining motivation from initiation to progression, whereas active-use practices have a key role in sustention. Their findings also offer actionable insights into improving the viability of crowdsourcing systems in retaining and motivating continuous and increased contributions from participants. In the fourth paper, Mady et al. (2023) present a threat-construal model to examine how information security knowledge depth, breadth and finesse can enable employees to successfully respond to dynamic emerging security threats in agile and creative ways. Using two online experiments with (1) clever animated video manipulations and (2) threats tailored to each respondent's personal experiences, they tested how users' construals of security messages are influenced by the differential portrayal of the psychological distance across all four of its dimensions. The findings reinforce recent research demonstrating how personally relevant security messages can be more persuasive. (You may click the links in Appendix E to see their animations.) In the fifth paper, Pillet et al. (2023) take a stance on scale adaptation practices (modifying a psychometric scale to make it suitable for a given research project) in IS research.
After gathering evidence from the literature, they challenge some of the fallacious beliefs that pertain to the purposeful alteration of item wording and make the case for more explicit and transparent scale adaptation standards. Their contribution is two-fold: first, they offer an operational definition of the concept of cognitive validity, inviting us to examine specific features of item wording that could bias or distort the response process; second, they introduce a new method to assess the extent to which a given scale meets cognitive validity requirements. This work is important to us at a time when the organisation and management research communities are starting to question their measurement practices, calling for a shift of emphasis to the front end of the measurement process. In the sixth paper, Ens et al. (2023) examine how digital platforms, which are novel organisational forms, use technology to facilitate the dynamic interaction between diverse actors. Research on platforms has so far struggled to capture the dynamic character of control on platforms and instead often relied on static depictions of platform control. In a hybrid ethnographic study of the social commerce platform Poshmark, the authors demonstrate how control on digital platforms changes due to the aggregate effects arising from the operator and participants interacting with each other through the digital features deployed on the platform. This study makes two important contributions. First, by tracking changes in the means and sources of control over time, this work lays the foundation for a systematic study of the dynamics of control on digital platforms. Second, the authors underline the strength of hybrid ethnography's ability to generate nuanced insights into novel phenomena in a digital world. In the seventh paper, Struijk et al. (2023) explore information quality (IQ) challenges and opportunities during digital transformation (DT). 
While digital technologies increase the volume, velocity and variety of data that organisations can collect and analyse, IQ issues may arise when these data are not governed appropriately. Pre-digital organisations may be particularly susceptible to such challenges because of their limited experience with digital technologies and data governance. The authors adopt a theory-infused interventionist research approach and draw upon organisational information processing theory to develop and implement an IQ strategy at a multinational military organisation engaged in DT. Their findings stress the importance of IQ in the digital era by showcasing how it can affect the balance between information processing requirements and capacity. In doing so, they further delineate how pre-digital organisations can navigate DT by strategically addressing IQ. In the eighth paper, Shi et al. (2023) find that technostressors play a dual role in work–family conflicts. Based on the transactional perspective of stress and the challenge-hindrance stressor framework, the authors developed a research model explaining how chronic challenge and hindrance technostressors affected employees' job and family satisfaction through work–family conflict. The model was tested using a three-wave time-lagged longitudinal survey with 268 employees. The results show that challenge and hindrance technostressors had different effects on time- and strain-based work–family conflict and further induced negative effects on both job and family satisfaction. This research contributes to the literature by demonstrating the dual nature and various effects of technostressors at the interface of work and the home. It also provides guidance for practitioners and suggests various promising future research directions.
