
First of all, many thanks to the AICPA, the Management Accounting Section, the selection committee, and its chair, Shane Dikolli. This award was a big surprise to me, and I am honored to be in the company of the distinguished award winners of previous years. I am also grateful to the conference organizers for moving this session to a later time slot than originally planned, so that it was not so very long before the breakfast hour of rational utility maximizers in many U.S. time zones. The session still began very early, however—5 a.m. for U.S. West Coast colleagues—hence the title of this talk. It is inspired by a passage in Alice in Wonderland, in which Alice is talking to the White Queen, who remarks, “Why, sometimes I've believed as many as six impossible things before breakfast.”

I'm going to ask you to believe six impossible things about management accounting when you've barely started on your coffee, so let me explain why. This last year (2020) has been a difficult one, and the next few years could be difficult as well, so it might be valuable to take a look at how we have done impossible-looking things before, as a way of encouraging ourselves to believe that we can handle some impossible-looking tasks that lie ahead.

The program of this conference is the first of my six impossible things that we have achieved. The first time I came to this conference, more than 25 years ago, there was a grand total of six or eight papers on the program, which means that the program is now ten or twelve times as large as it once was. Although the attendees at that long-ago event hoped the conference would survive and even grow, I did not hear anyone then predicting growth on anything like the scale we have actually seen.

In retrospect we might say that growth is not so surprising: after all, most American Accounting Association sections have increased their activities over the years, and research publications in all areas have grown. After all, look at how much bigger The Accounting Review is than it was 25 years ago.
But in fact some other sections that started small have stayed small. Many section conferences are not ten or twelve times bigger now than they were 25 years ago. The Accounting Review is not ten or twelve times bigger.

So how did this remarkable growth happen? I think that, like most worthwhile events in academia, it took contributions from people doing two different kinds of activities, which we might call institution-building and content provision. On the institution-building side, people had to make the conferences happen, keep the journal coming out, keep the section from going broke, and so on—and older folks like me will remember some, shall we say, very lively times on all these fronts.

I want to take a minute now to thank some of the great institution-builders in the section—by no means all, but some that particularly mattered to me. I have fond memories of the late Bill Ferrara, who was the first president of the section and the first editor of JMAR. Not only was he someone who stood up and got things done when things needed doing, he was also a tremendous encourager of other people—which was probably why he could get things done. He made a lot of effort to get to know younger people in the field and give them a hand.

Two other people who were tremendous encouragers of others, indispensable builders of the section and the field, and life-preservers for me personally were my senior colleagues at Michigan State, Sue Haka and Mike Shields. I probably cannot convey to you how clueless I was when I started at Michigan State. I don't think they let you out of PhD programs these days with that degree of cluelessness; I think they keep you locked in the basement until you can show you have some common sense. Without Sue taking me firmly by the scruff of the neck and pointing me in the right direction, saying, “Go to this conference,” “Write this paper,” “Try this journal,” “Be on this committee and not that one,” I do not know what would have happened to me.
(I didn't always go in the direction she pointed me, but I couldn't have managed without the guidance.) Some years later, Mike joined us, to the great benefit of our group at MSU. Mike was an important bridge-builder between people and institutions, a tremendously hard worker, an enthusiast for the world of management accounting, and a great support for me personally. I continue to be amazed and impressed by the amount of time both Sue and Mike put into—often stressful—university business and AAA business. Like Bill, they were institution-builders who were endlessly generous to younger colleagues.

Institution-building is necessary but not sufficient in academia. It does not get us very far if we are not also undertaking the kind of intellectual activity that provides content for the conferences and the journals and eventually the classroom. When I look at the papers that are being presented at this conference, I remember that when I was a PhD student, I heard that a lot of this research would be impossible to do, at least at a level of quality that would support successful careers for many people. But here it is.

So my next three impossible things all have to do with the creation of a credible body of empirical management accounting research. Thirty years ago, a number of distinguished theoretical studies in management accounting had already been published, but empirical research was still limited. In 1989, when planning for the first section research conference was getting underway and the first volume of our section journal appeared, JMAR published three empirical studies; in its most recent year, 2020, it published 25 such studies. This kind of growth was not confined to the section journal. In 1989, The Accounting Review published two empirical studies in management accounting, and in 2020 it published 18.
(The Accounting Review published about twice as many total research articles in 2020 as in 1989, so the overall increase in publications does not explain the increase in empirical management accounting studies.)

Many intelligent people did not expect this kind of increase to occur. In the 1980s, when I told my PhD advisor, Bob Libby, that I wanted to do some kind of empirical research on management accounting, he said that was not going to work, and I should think again. (As you can see, I do not have a good history of following the advice of my excellent mentors.)

I am very grateful to Bob for all sorts of reasons—he is one of the smartest and most effective people I have ever met, and you always gain something just by being around people like this—but in this matter of management accounting research I also have two very specific reasons for being grateful to him. First, although what I wanted to do was not his field and he did not think it was a particularly good idea, he let me do it anyway and was very supportive. Second, he gave some sensible reasons why empirical management accounting research was unlikely to succeed, and it was helpful—not only then, but also later—to think about these reasons and think about ways of getting around the roadblocks they represented.

Bob said, “Look at the research that's out there now in high-quality journals. There's a match between accounting sub-field and method. Financial accounting research uses archival data, auditing research uses experiments, and management accounting research uses analytical modeling. This is not an accident.” At the time, this was a plausible summary of the situation; many people believed it was somewhere between difficult and impossible to do much quality research any other way.
So let's look at these impossibilities and how we got past them.

Many people believed, 30 years ago, that it was not feasible to get credible archival data for any substantial body of management accounting studies, and they had good reasons for thinking so. First, archival management accounting data was internal data, which firms often would not allow to become public. Second—and this was one of Bob's concerns—internal data, unlike external financial reporting, was relatively unconstrained by regulation and therefore varied greatly across firms. In consequence, it would not be possible to pool it across firms for large-sample hypothesis testing, or to generalize from it satisfactorily, in the way that financial-statement data could be pooled. These seemed like very compelling arguments at the time.

We get past many problems, not by solving them in the terms originally stated, but by redefining the problems out of existence. In the 1980s it seemed impossible to develop a large body of credible archival research in management accounting because we were thinking about the problem in the wrong way. If we defined archival data—as people often did then—as something that looked just like Compustat but contained internal firm data rather than external reporting, then it was indeed impossible to get such data. But this definition was an unnecessary obstacle in a couple of ways.

First, researchers came to make more use of the fact that external financial accounting numbers are used in organizations for a variety of internal decisions that are traditionally the domain of management accounting research, such as performance evaluation and reward systems. Hence clever researchers could use these publicly available data to address classic management accounting questions.

Second, and more importantly, researchers—not only in management accounting but in other areas of accounting—began to think differently about issues of large samples and generalizability.
Early capital markets research aimed at making statements about the universe of firms publicly traded in the U.S., and perhaps in other moderately efficient capital markets. But as this research matured, it became increasingly clear that trying to stay at this very high level of generality limited what we could learn. Even in the financial accounting world, there proved to be too much diversity in the data at this level. As Bernard and Stober (1989, 648) noted in a study of a basic issue in financial accounting, the value-relevance of earnings components: “It is possible that the links between detailed earnings components and valuation are so highly contextual that no parsimonious model could ever capture more than a small portion of the story.” Accounting information was likely to have different meanings and uses in different industries and different economic-historical settings, and thus we might need to dial down our ambitions for very broad-scale generalizations. It gradually became clear that—as our colleagues in the social sciences might have told us long ago, and as Max Weber argued as early as the 1920s—much of what we want to learn is particular, and the search for widely valid “laws” of human behavior is not always a promising project.

By the later 1990s, with this more nuanced approach to generalizability issues, it had become possible (for example) for a premier journal to publish an archival management-accounting study in which two of the three major parts of the analysis relied on data from a single firm each (Ittner and Larcker 1998). Moreover, it had become possible to publish such a paper with virtually no explicit defense of the single-firm samples and no anxious acknowledgments of possible lack of generalizability.
The data analyses in the study were, in effect, their own defense: it was evident that we were learning something of value from these analyses about the magnitude and timing and shape of the relation of customer satisfaction to financial performance, and about the sensitivity of these relations to industry and to measurement and modeling choices. It was also evident from the study itself that some of these relations could have been more difficult to see clearly in a larger and more diverse sample.

Of course we do want to generalize, in the sense of learning something in one context that may apply to others. We can in fact learn like this, because no organization or period is unique in all respects. But I think that if we are honest with ourselves, we must acknowledge that we are often uncertain about the contexts to which our learning from existing studies might apply. After all, even with the large-sample studies of firms in the U.S. securities market that are common in finance and financial-accounting research, it is far from clear what the population is to which researchers might generalize. Are we trying to generalize to securities markets in other countries? To firms not listed on the markets? To U.S. markets in historical periods before or after the period from which the data were taken? None of the above? Similarly with management accounting studies: we can hardly make a fetish of generalizability without having a very clear concept of the population to which we wish to generalize, and to the best of my knowledge that clear concept has not been forthcoming.

I can remember a time when the question, “How can you generalize from this sample?” was treated as a devastating critique of all archival management accounting studies and made it difficult to publish such research.
A more mature view of these issues has added significantly to our understanding of accounting by letting us learn from data provided by single organizations or small numbers of organizations.

Experimental research in auditing flourished in the 1970s and 1980s, in part because the audit firms were generous in providing “real auditors” as participants. It seemed impossible to use “real management accountants” or managers as participants, not only because their firms were unlikely to volunteer many of them for such tasks, but also, as Bob Libby reminded me, because their experiences and understandings would be so diverse—they would understand and respond to any experimental task so differently—that the noise in experimental data would probably overwhelm the information. The expansion of executive MBA programs at many universities over the last decades, as well as the intensive cultivation of alumni relations at a variety of business education programs, increased access to experienced managers, who might be willing to volunteer for an experiment out of goodwill toward their alma mater. But this did not solve the potential problem of diversity in experience that led to diverse responses to the same material.

In fact, people's understanding and behavior do differ greatly across settings in some respects; but in other respects, they do not differ so much. On the one hand, I did a couple of pilot experiments that never saw the light of day because, when participants with significant managerial experience did the task, they rocketed off in all directions. They came from different firms that understood and performed these particular tasks differently, and they tended to recast the experimental materials in their own minds to conform to their own experiences.

On the other hand, a couple of my experiments that did see the light of day were replicated in a wide variety of settings with a variety of participants (Luft 1994; Luft and Libby 1997).
The results were robust to many changes in the characteristics of participants and settings and task details—though not robust to a few others. There was clearly something we could learn from an experiment with one set of participants that would carry over to experiments with other types of participants and to non-experimental environments. For example, Van der Stede, A. Wu, and S. Wu (2020) provide archival evidence of employees' unwillingness to accept penalty contracts compared to economically similar bonus contracts, consistent with the experimental evidence in Luft (1994), and Anderson and Simester (2010) provide field evidence of economically significant reactions to “unfair” pricing, consistent with the arguments underlying Luft and Libby (1997). However, I think it has remained fairly difficult to predict ex ante exactly what will carry over from one set of participants to another and what will not.

The “Limitations and future research” sections of experimental studies sometimes suggest that the experiments should be replicated with participants who differ with respect to some possibly relevant characteristics such as training, culture, experience, or gender. I think, however, that replication with different participant groups is a brute-force approach that will not get us very far very fast in the absence of better theory development. We need persuasive causal explanations of why particular individual characteristics would make a difference to the production or use of the accounting information we are studying. “More experienced people would behave differently because they are more experienced” does not qualify as a persuasive causal explanation. Specific kinds of experience or other characteristics might make a difference to what people want (utility functions, in a very broad sense), how they conceptualize their environment, their tasks, or the information they have, and what prior knowledge they can bring to bear.
Effective theories would describe and explain these processes clearly.

One factor that I think assisted in overcoming the initial problems related to the choice of experimental participants in management accounting research was a turn, in the 1990s and subsequently, away from experiments based on individuals' information-processing capabilities (e.g., their decision style—hierarchic versus integrative [Driver and Mock 1975]—or their ability to learn Bayesian versus more conservative decision models [Uecker 1978]) and toward experiments based on preferences (e.g., gain versus loss asymmetry [Luft 1994], fairness norms [Luft and Libby 1997; Kachelmeier and Towry 2002], or self-aggrandizement in motivated reasoning [Bloomfield and Luft 2006; Tayler 2010]). Although information-processing factors are presumably as important as preferences, there were certain advantages to investigating preference variables at the time. Behavioral economics researchers had already done considerable work on modeling alternative preferences, which assisted in conceptualization and measurement. They had also undertaken extensive empirical research on these preferences, which enabled management accounting researchers to present evidence that the preferences which drove their experimental results also drove behavior by actors in a variety of economically significant settings outside the laboratory (e.g., Camerer [2001] for prospect theory, Cornelißen, Himmler, and Koenig [2011] for fairness, and Meier [2007] for a variety of “pro-social” preferences). Variables employed earlier, like decision style, had not yet been so well defined or so well documented in field settings. Over time, information-processing variables have come to be more clearly modeled in psychology (and sometimes behavioral economics) research, and field evidence about them has increased.
However, a great variety of these variables have been proposed in psychology, and there is much uncertainty about when and where we might expect to see each of these, either in the laboratory or in natural environments. Hence we still face difficulties in judging which individuals will behave similarly or differently in particular circumstances.

Although behavioral economics has provided usable models of individual preferences, it has often not provided theoretical models of the key decisions made by producers and users of management accounting information. Psychology theories, of course, have not done so either: psychologists do not create theoretical models of inventory management or control system design. Economic theory does so; but another of the apparent impossibilities of a generation ago was that many people believed the dominant economic theory was one that was virtually impossible to use as a basis for empirical tests, especially with archival data.

A variety of good theories can help us to understand management accounting, and I have always argued for making use of this variety. But in research on control systems, agency theory has been a leading force for a generation or more, and as long as it seemed untestable with “real-world” data—as it once did—we had a big roadblock in the way of satisfactory studies of control systems. Empirical data do not do us a lot of good without theory that helps us understand what we're seeing. Having data without theory is like having a heap of Legos without any notion in your mind of what you might build with them.

To give you an idea of researchers' reservations about testing agency theory a few decades ago, let me cite some concerns that Bob Magee raised as late as 2001 about the difficulty of concluding anything whatever from comparisons of agency-model predictions with empirical data.
If the model's predictions match observed behavior, he said, then we may worry that the model has been rigged so that the math comes out predicting exactly the behavior that we already know exists (Magee 2001, 94). But this does not give us much assurance that the model in fact provides the right explanation for this behavior; and if certain model-parameter values were to change in the future in the empirical world, the model might make very inaccurate predictions. On the other hand, Magee said, if observed behavior does not match the predictions of the model, a daunting array of alternative explanations for the theory-versus-data discrepancy becomes available, and it can be difficult to be sure which of these explanations might actually account for what we see.

Twenty years after Magee's observations, his concerns still have some validity; but here again we have been able to make progress by thinking differently about the problem—in this case, by thinking differently about how we might want to bring model and data together.

Some decades ago, researchers often talked about “testing the model,” as if it were like testing a machine: checking every part to make sure it matches the blueprint, finding settings that exactly match the assumptions of the model, trying to measure or manipulate all the unobservables, and so on. This was hardly possible with archival data (Antle and Smith [1986] provided a careful and vivid illustration of the difficulties of drawing clear-cut conclusions from a small-sample archival study inspired by an agency model).
Laboratory data could come closer (for a heroic but not altogether successful effort of this kind, see Baiman and Lewis [1989]), but even with some success in laboratory tests, it remained unsatisfactory that there was so little connection between “real-world” observations and the agency models that played such a large role in management accounting research beginning in the 1970s.

In recent decades, many successful archival studies have moved away from attempts to provide rigorous and complete tests of a particular principal-agent model. We have made more progress, I think, by not trying to match the model exactly from beginning to end, but instead documenting interesting patterns of behavior in organizations and asking whether, or in what respects, principal-agent models can cast some light on the reasons these patterns occur. As Dikolli et al. (2013, 132) observe about such studies: “If the evidence is consistent with the predictions of the model, this does not confirm the model, but rather merely suggests that the main economic forces in the model may be important in explaining the observed results.”

In successful studies of this kind, the focus is not on making some use of the world to help us understand more about a particular model, but on making some use of models to help us understand more about the world. Successful studies often make use of insights that are common to multiple principal-agent models with somewhat different assumptions (thus making it more likely that these insights might also apply to the researchers' setting, which does not exactly match the assumptions of the models). They may also cite practitioner literature and studies with other theoretical bases that provide comparable or somewhat varying explanations of observed patterns of behavior.
Successful studies provide relevant, often novel information about the organizational setting, the economic environment, the construction and use of multiple accounting measures, and how these factors relate to each other, thus giving readers a good sense of what the research setting is like and how it works. (See Bouwens and van Lent [2007]; Grabner and Moers [2013]; and Matsumura and Shin [2006], among many others, as examples of these strategies.)

Our European and, more recently, Asian colleagues have been particularly active in doing this kind of work, and this degree of internationalization is something else that really was not expected a generation or more ago.

Some years ago I was a member of a large international team that was brought in to evaluate the business school (economics faculty) at a European university. One of the criteria on which we were supposed to evaluate departments was the international impact of the department's research. I was startled when one of the team leaders said that of course this criterion didn't apply to accounting, because accounting was a very nationally bounded subject and it wasn't possible to have international relevance or impact in accounting comparable to what we might see in, for example, marketing or information systems.

There was a time when this was fairly true: the American Accounting Association was very American indeed, and comparable associations in other countries were also very focused within their own borders. It is easy to forget how natural and inevitable this seemed to many people a generation or two ago.

It is also easy to forget how much work went into changing this “natural and inevitable” condition. I have had a chance recently to observe another academic association trying to internationalize itself, and I am reminded that this is not a quick or easy process.
The opportunities for misunderstanding and impatience and consequent failure are real and substantial, and we are all indebted to the people who worked, and continue to work, to give our section such a broad international perspective. The late Anthony Hopwood was a key figure in this process, to whom we all owe a great deal; and here again Mike Shields was a major contributor for a long time as well.

I can still remember the section conference at which, suddenly, a whole group of colleagues from The Netherlands appeared: it seemed quite a novelty at the time. In that year, they were rather quiet; the next year, they were back and participating more actively; and the year after that, they were presenting interesting papers, as they have continued to do. The international breadth of the section has given us a diversity of perspectives on common research themes, and I think that the section has benefited greatly from this.

As I mentioned in the beginning, 2020 has been a difficult year for universities, and the next few years will bring challenges as well, in the form of budget pressures and calls for rethinking the instruction models and business models of our institutions. What will become of management accounting in these circumstances, and what it will contribute—well, that's up to you. I have been retired for a number of years now, and I believe you understand the current challenges better than I do. Let me just suggest that—as with other seemingly impossible tasks in the past—the solutions may lie in reframing the problems rather than trying to solve them in the forms in which they first appear to us.

I hope you have all now been able to believe five impossible things before breakfast, and I hope that you now have confidence that you can do the sixth one after breakfast, so that our field comes out of these challenging times stronger than ever.
