Abstract

If someone could take only one course in business, what would it be? Throughout much of modern history, the answer would likely have been accounting, and managerial accounting in particular, because only a handful of financiers invest in other people's businesses, while all merchants must manage their own organizations. To paraphrase Galileo Galilei: the world of business cannot be read until we have learnt the language and become familiar with the characters in which it is written. It is written in reports, and the letters are debits, credits, and other performance measures, without which it is humanly impossible to comprehend whether something has performed well or poorly.

But managerial accounting is losing its position as an essential course in business education. As Merchant (2013) describes the situation:

Why such a rapid downward spiral? For one thing, accounting has lost the advantage of its 500-year head start. Accounting was systematized so effectively by Renaissance merchants that most of its current mechanics and principles are summarized well enough for introductory business students by Fra Luca Pacioli's Summa de Arithmetica of 1494. As other business disciplines were systematized over the last 50 years, they quite reasonably have come to command a more prominent position in a complete business curriculum.

But Merchant's (2013) description suggests that our field also suffers from problems of our own making. We have carved out no clear distinction between our domain and the rest of the business curriculum, too readily viewing management accounting as a haphazard collection of unrelated topics that would be equally at home in marketing, operations, or management. And we have accepted a view, pushed by our far more numerous colleagues in financial accounting, that managerial accounting is a niche subfield whose small following reflects its limited scope and importance.

My goal in this paper is to clarify why managerial accounting remains distinct from (and complementary to) other business disciplines, why it includes financial accounting as a subfield, rather than the other way around, and why it remains the one course every business student should take. I begin by defining Managerial Reporting (which I believe is a more appropriate name, for reasons that will soon become obvious) and discussing its relation to the rest of accounting and to other business disciplines. I then describe some key lessons that should be taught in an introductory managerial reporting course. I close by discussing the research implications of my perspective.

I define managerial reporting as follows: designing, deploying, and defending reporting systems to help organizations improve. The definition includes many terms worth elaborating on to clarify how general they are.

This definition of management reporting is broad enough to include all topics commonly considered to be part of an accounting curriculum. Accounting is the subfield of managerial reporting that focuses on financial reports, most of which rely on account balances created with double-entry bookkeeping, supplemented with explanatory disclosures. The subfields of information systems, internal controls, and auditing are primarily concerned with designing and defending reporting systems.
Managerial and cost accounting courses focus on designing and deploying reporting systems, with deployment often framed as “decision-making.” The subfields of financial reporting and taxation are primarily concerned with how those operating within the systems devised by standard setters and legislators (such as the Financial Accounting Standards Board [FASB] and the Internal Revenue Service [IRS]) will employ those systems to fulfill the designers' intent (compliance), and exploit them to look more successful or to lower their tax liability (earnings management).

At the same time, this definition of management reporting is narrow enough to draw a clear distinction from other business disciplines. The boundary is baked into the definition of improvement: progress toward an objective. Once a student is provided with an objective, management reporting will help them design, deploy, and defend a reporting system that will help them achieve it. But the objectives come from our companion disciplines. For example, marketing courses should help students develop appropriate objectives for identifying products that will be of value to their customers, making sure customers understand that value, and creating business arrangements that will encourage customers to pay for that value. Management reporting courses should help students spell out that strategy specifically enough that they can design reports that will convey the most useful information about whether the strategy is working, and deploy those reports to inform and motivate employees.

As I say in my recent eBook on managerial reporting, What Counts and What Gets Counted (Bloomfield 2015, 10):

The complementary nature of managerial reporting allows us to avoid the zero-sum mentality described by Merchant (2013), which leads business schools to insist that topics be taught either in managerial reporting courses or in another discipline, but not both. Students of marketing must learn Cost-Volume-Profit (CVP) analysis because it guides their objectives; they should only launch offerings if volume will be high enough for the resulting contribution margin to exceed upfront investment and other fixed costs. Students of managerial reporting must also learn CVP, but for a different reason: to design a reporting system that will encourage employees to pursue the objectives they learn in marketing, and to interpret the financial measures that rely heavily on the distinction between fixed and variable costs. While marketing and managerial reporting courses both incorporate CVP, the former use the analysis to define objectives, while the latter use it to report more effectively on them.
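To make the CVP arithmetic concrete, here is a minimal sketch of the launch test described above. The prices, costs, and volumes are hypothetical numbers chosen for illustration, not figures from any particular case.

# Hypothetical cost-volume-profit (CVP) check: launch only if expected
# volume generates enough contribution margin to cover fixed costs.
# All figures below are illustrative assumptions.

price_per_unit = 50.0          # selling price
variable_cost_per_unit = 30.0  # cost that scales with volume
fixed_costs = 100_000.0        # upfront investment and other fixed costs

contribution_margin = price_per_unit - variable_cost_per_unit
breakeven_volume = fixed_costs / contribution_margin

def profit(volume: float) -> float:
    """Operating profit under the standard linear CVP model."""
    return contribution_margin * volume - fixed_costs

print(f"Break-even volume: {breakeven_volume:,.0f} units")
print(f"Profit at 4,000 units: {profit(4_000):,.0f}")  # below break-even: a loss
print(f"Profit at 6,000 units: {profit(6_000):,.0f}")  # above break-even: a gain

The same few lines serve both audiences: marketing uses the break-even volume to decide whether to launch, while managerial reporting uses the fixed/variable split embedded in the contribution margin to design the reports that track whether the launch is working.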
I have drawn the scope of managerial reporting so broadly that it encompasses the entire accounting curriculum. What, then, is the focus of a single introductory course that would reasonably be titled Managerial Reporting? I see two primary goals: first, to teach students the most important lessons in reporting system design, deployment, and defense; and second, to apply those lessons within organizations.

These goals serve as a useful complement to financial accounting, which is by far the most common introductory accounting course. Introductory courses in financial accounting teach students how to interpret financial statements that are prepared in accordance with Generally Accepted Accounting Principles (GAAP) or another set of standards (like International Financial Reporting Standards [IFRS]); they also introduce students to a variety of institutions in modern financial systems, such as equity markets, informational and financial intermediaries, private banking, and so on. Managerial reporting courses can build on this introductory knowledge by emphasizing that GAAP is but one set of rules for reporting financial performance, and that financial performance is but one aspect of business performance. Armed with their understanding of this narrow slice of reporting, students are ready to explore the many ways financial performance can be reported, and the science of reporting on any aspect of performance. By applying this science within the organization, managerial reporting courses also introduce students to a variety of institutions within modern organizations, such as organizational hierarchies, responsibility centers, and systems for performance evaluation and compensation.

Following are some key lessons that I would (and do) include in such a course.

The most important lesson in managerial reporting is that we must distinguish between what counts and what gets counted. To an investor, increasing owners' wealth through business operations counts; net income is what gets counted. To a product manager, customer satisfaction counts; the score on a two-question survey is what gets counted. To a teacher, learning is what counts; exam scores are what get counted.

The distinction between what counts and what gets counted is as old as Plato's “Allegory of the Cave,” which argued that our perceptions show us merely the shadows of underlying reality. Social scientists have refined this distinction through the predictive validity framework, best known to many accounting faculty through its visual representation in “Libby Boxes” (Libby 1981).

Figure 1 shows how Libby Boxes can be used to represent what counts and what gets counted. The top boxes are the underlying constructs that theory posits have a cause-and-effect relationship: hard work causes more learning. This is what counts. The bottom boxes are the proxy measures for each construct. We measure hard work by time spent studying, and we measure the amount learned by exam score. This is what gets counted. Because other factors are likely to affect both learning and exam scores, Figure 2 includes a fifth box to capture variables that we have omitted from our simple model, even though they may influence what gets counted as an effect (grades). Omitted variables might influence the amount actually learned (e.g., more intelligent people learn more), or might simply influence measurement error (e.g., stress biases test scores downward, so grades are lower even though learning is unchanged).
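The gap between constructs and proxies can be simulated directly. The sketch below is a hypothetical illustration of the Figure 2 structure, with invented coefficients and distributions: hard work drives learning (the constructs), study time and exam score proxy for them, and intelligence and stress play the role of omitted variables.

# Hypothetical simulation of the predictive validity framework ("Libby
# Boxes"): constructs on top (hard work -> learning), proxies on the
# bottom (study time, exam score), plus omitted variables.
# All coefficients and distributions are illustrative assumptions.
import random

random.seed(0)

def simulate_student():
    hard_work = random.uniform(0, 10)      # construct: cause (what counts)
    intelligence = random.gauss(0, 1)      # omitted variable: affects learning itself
    stress = max(0.0, random.gauss(0, 1))  # omitted variable: biases the proxy only

    learning = 5 * hard_work + 8 * intelligence              # construct: effect
    study_time = hard_work + random.gauss(0, 1)               # proxy for hard work
    exam_score = learning - 10 * stress + random.gauss(0, 5)  # proxy (what gets counted)
    return study_time, exam_score, learning

def corr(xs, ys):
    """Pearson correlation, written out to keep the sketch dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

study, score, learn = zip(*(simulate_student() for _ in range(1000)))
print("corr(study time, exam score):", round(corr(study, score), 2))
print("corr(study time, learning):  ", round(corr(study, learn), 2))

Running the sketch shows the proxy-to-proxy association (study time versus exam score) is reliably weaker than the proxy-to-construct relation it is meant to reveal: the omitted variables and measurement error sit between what counts and what gets counted.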
The Libby Box framework, a staple of doctoral education in accounting, is an exceptionally useful tool for anyone who wishes to understand managerial reporting. Libby Boxes might seem more appropriate for doctoral students than aspiring practitioners. But my recent E.M.B.A. students, who average about 15 years of work experience, routinely tell me that this is one of the most important lessons they have learned in their program. Why?

One reason is that they have first-hand experience with the problems that arise when people think that what gets counted is what counts (a bias that Choi, Hecht, and Tayler [2012, 2013] call “surrogation”). Many have been paid or denied bonuses on the basis of a measure that bears little relation to what it purports to capture. Some have mistakenly launched or canceled projects based on reported margins that grossly misstate the true performance, or invested in firms that report high income even though they are truly declining in value. All are grateful for a clear way to explain these problems to their colleagues.

A second reason executives find Libby Boxes so valuable is that conducting research is part of their job. They hypothesize causal relationships, manipulate actions, and measure results to find out whether their strategy is working, and adjust their behavior accordingly. Academic researchers have the privilege of opting out of projects that do not promise good data and clean research designs, while managers are often forced to take action even when results are muddy. A clear understanding of constructs and proxies is even more valuable when a good research sensibility must substitute for rigorous methods.

The second lesson that must be taught in managerial reporting is that no reporting system is perfect. Some imperfections spring from the measurement error that lies between what counts and what gets counted, as discussed above. Other imperfections arise from the fact that performance is a high-dimensional construct, which must be reduced to just a handful of measures if we are to avoid overloading managers with information. As most of us learn in primary school, you cannot represent the three-dimensional earth with a two-dimensional map without stretching or tearing it somewhere. Still other imperfections arise from the competing tensions that guide deployment.

A third class of imperfections arises from the difficulty of defending reporting systems. Donald T. Campbell, who provided the structure underlying Libby Boxes, also identified the challenge of keeping performance measures useful. The following sentence is now commonly referred to as “Campbell's Law”: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

Campbell's Law states the problem, but implies only one solution: do not use measures for social decision-making. This advice is surely too extreme, since few organizations embrace it. An introductory course in managerial reporting must provide students with a better understanding of the ways in which measures are distorted, and arm them with more flexible and effective ways of defending those measures, while still deploying them to influence decisions.

My approach has been to reformulate Campbell's Law to fit the predictive validity framework described above, and to define measure management: improving measured performance relative to the underlying performance construct the measure is supposed to capture. People can manage measures by distorting their operational choices or by distorting how the outcomes of those choices are reported. For example, assume that a doctor is evaluated on the basis of serious post-surgical complications. The doctor who improves measured performance by improving sterilization and other techniques is engaging in performance management, because she is actually improving the performance that the measure is intended to capture. But she might also improve the performance measure by declining to operate on patients at high risk of complication due to age or health.
In this case, she is distorting her operational decisions to make it look like she is a better surgeon, even though her actual performance has not improved at all. Alternatively, she might discourage her hospital from reporting post-surgical complications as “severe,” in which case she is distorting reporting.

By defining measure management and its two forms (operational distortion and reporting distortion), we help students identify natural ways to make reporting systems less imperfect. First, we can reduce the inherent measurement error in the proxy; if the proxy measure captures the underlying performance construct with great fidelity, then even operational and reporting distortion will not allow the proxy to deviate far from the construct. For example, the proxy measure of surgical performance could be expanded to incorporate measures of whether the doctor followed best practices in sterilization and surgical technique, how long an incision remained open, exactly what complications arose, and how likely they were given the patient's medical history.

Second, we can limit the doctor's motivation to improve measured performance. While this could entail simply not using the measure to reward good performance, it could also mean not letting the doctor know the precise proxy being used, or when performance is being measured. Such techniques are applied to customer service through the use of “secret shoppers,” or by choosing not to commit to a particular reward or punishment based on predetermined measures. For example, universities limit the motivation of researchers to manage the measures of their research performance by relying on the opinions of anonymous peers. If researchers knew that the number of top-tier publications was all that mattered, then they would have strong incentives to address simple problems and “cut the bologna too thinly” by making every paper the minimum publishable unit and spreading one contribution over many marginal papers. Such behaviors are much less effective when researchers do not know exactly how peers will evaluate their record as a whole, and when those peers are likely to draw their own subjective views of the true quantity and quality of research output.

Finally, we can limit the doctor's discretion over operational and reporting decisions. Someone else can decide which patients are operated on, and how complications are reported.
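The contrast among performance management, operational distortion, and reporting distortion can be put in numbers. The sketch below is a toy model of the surgeon example; the skill levels, case-mix risks, and reclassification rate are invented for illustration.

# Hypothetical sketch contrasting performance management with the two
# forms of measure management. All rates below are invented numbers.
# "Actual performance" is the surgeon's skill applied to the full
# patient population; "measured performance" is what the report shows.

def complication_rate(skill: float, case_mix_risk: float) -> float:
    # Toy model: riskier case mixes and lower skill mean more complications.
    return case_mix_risk * (1.0 - skill)

skill, improved_skill = 0.80, 0.85
full_mix_risk, easy_mix_risk = 0.30, 0.15  # declining risky patients halves case risk

baseline = complication_rate(skill, full_mix_risk)

# 1. Performance management: better sterilization raises real skill,
#    so actual and measured rates both fall.
perf_mgmt = complication_rate(improved_skill, full_mix_risk)

# 2. Operational distortion: decline high-risk patients.
#    The measured rate falls, but skill (actual performance) is unchanged.
op_distortion = complication_rate(skill, easy_mix_risk)

# 3. Reporting distortion: reclassify half of the "severe" complications.
#    The underlying rate is unchanged; only the report improves.
reporting_distortion = baseline * 0.5

print(f"baseline measured rate:        {baseline:.3f}")
print(f"after performance management:  {perf_mgmt:.3f}  (real improvement)")
print(f"after operational distortion:  {op_distortion:.3f}  (same skill)")
print(f"after reporting distortion:    {reporting_distortion:.3f}  (same care)")

All three interventions improve the number on the report; only the first improves the construct the number is supposed to capture. The three remedies above work by attacking different lines of this sketch: richer proxies shrink the gap between the rate and skill, hidden measurement weakens the payoff to cases 2 and 3, and limited discretion removes the surgeon's ability to choose the case mix or the classification.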
Because no reporting system is perfect, organizations must decide which flaws are least problematic. This point is already well ingrained in managerial reporting courses, often headed with the bromide “different costs for different purposes.” We teach our students to distinguish process shops (which do the same thing over and over) from job shops (which customize what they do for each customer), and teach them which costing method will best suit each situation.

However, the discussions above show that this lesson is far more general. We can apply it to the methods of addressing measure management described above. Organizations can choose to suffer from measure management as the lesser of evils, or suffer instead by developing better proxy measures of performance (with the data collection, processing, reporting, and analysis that entails); concealing how and when they measure performance (with the reduced ability to communicate performance goals that entails); or limiting discretion (with the increased bureaucracy and limited autonomy and agility that entails). The choice depends on the needs and capabilities of the organization.

Similar lessons are found in incentive compensation, as described by the standard models in which a principal has imperfect information about a risk-averse agent's actions. The principal can motivate the agent by paying for measured performance, but because the performance measure is an imperfect measure of the agent's actual actions, the principal must also raise pay to compensate the agent for the risk that her pay will vary due to factors she cannot control. Thus, the principal must choose how much to suffer from three problems: workers who are unmotivated because they receive little incentive compensation, workers who are paid higher wages to compensate them for the risk of uncontrollable measures, and accountants who are paid to reduce measurement error.
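The trade-off among these three problems can be written down compactly in the standard linear contract, exponential utility, normal noise (“LEN”) setting. The notation below is a common textbook formulation, not one taken from this paper: pay is linear in a noisy measure of effort, the agent has risk aversion \rho and quadratic effort cost, and \sigma^{2} is the measurement error that accountants are hired to reduce.

m = a + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^{2}), \qquad w = \alpha + \beta m

CE_{agent} = \alpha + \beta a - \tfrac{1}{2} k a^{2} - \tfrac{1}{2} \rho \beta^{2} \sigma^{2} \quad \Rightarrow \quad a^{*} = \beta / k

\beta^{*} = \frac{1}{1 + \rho k \sigma^{2}}

Each of the three problems appears directly in these expressions: a small bonus rate \beta leaves the agent unmotivated (effort is a^{*} = \beta/k); the term \tfrac{1}{2}\rho\beta^{2}\sigma^{2} is the risk premium that must be paid for variation the agent cannot control; and any spending that lowers \sigma^{2} raises the optimal \beta^{*}, which is precisely what paying accountants to reduce measurement error buys.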
In all of these cases, the right choice of reporting system depends on the needs and capabilities of the organization and the circumstances it faces. This lesson, therefore, brings us back to the strong linkage between accounting and other business disciplines, as well as the clear distinction between them: good reporting system design requires a clear understanding of organizations' needs, which comes from the other disciplines, but the nature of the imperfections and the possible responses lie clearly within our domain.

Double-entry bookkeeping is often viewed as too old-fashioned, technical, and boring to teach most students. But this essential foundation of accounting needs to be taught somewhere, and it is increasingly rare to see it taught in introductory financial accounting courses. I find this change understandable, as financial accounting teachers have their hands full getting students to understand other material, and many find they do not need to delve into debits and credits to achieve their primary goal: to help students interpret the reports generated by a single system of financial reporting (GAAP).

Double-entry mechanics serve a more direct role in managerial reporting, which must help students choose the method of financial reporting that is best matched to their organization's needs. In particular, the reason we need different costs for different purposes is frequently driven by the specifics of double-entry bookkeeping.
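As one illustration (a minimal sketch, with hypothetical accounts and amounts): a cost figure such as the balance in Work in Process is nothing more than the accumulation of journal entries, so a costing choice, like how much overhead to absorb into a job, is implemented as an entry and changes what the “cost” is.

# Minimal double-entry sketch (hypothetical accounts and amounts).
# Each entry debits one account and credits another for the same
# amount, so the books always balance; costing choices are posted as
# entries like these, which is why "different costs for different
# purposes" is entangled with bookkeeping mechanics.
from collections import defaultdict

ledger = defaultdict(float)  # positive = debit balance, negative = credit balance

def post(debit_account: str, credit_account: str, amount: float) -> None:
    ledger[debit_account] += amount
    ledger[credit_account] -= amount

post("Raw Materials", "Cash", 1_000.0)              # buy materials
post("Work in Process", "Raw Materials", 800.0)     # issue materials to a job
post("Work in Process", "Overhead Applied", 400.0)  # absorb overhead at a chosen rate

assert abs(sum(ledger.values())) < 1e-9  # debits always equal credits
for account, balance in ledger.items():
    print(f"{account:18s} {balance:10,.2f}")

Change the assumed overhead absorption rate in the third entry and the reported cost of the job changes with it, even though nothing in the underlying operations has moved.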
The vision of managerial accounting presented above indicates several directions for accounting research. First, and most broadly, we must redouble our efforts to validate the claims we present to our students as fact. Even though I rely heavily on published research in What Counts, in almost every essay I find myself making claims about phenomena that have not been rigorously documented, and about “best practices” that have not been persuasively shown to be optimal. Are standard costing systems more common in process shops than in job shops? Do they actually lead to better decisions? Does the fraud triangle actually predict fraud? Do absorption costing systems actually lead to death spirals, or are managers savvy enough to anticipate mistakes that we teach college freshmen to avoid? Even the claims that are based on extensive economic and psychological research still leave many open questions in need of more rigorous exploration before they constitute reliable knowledge about managerial reporting, because it is not always obvious how economic and psychological predictions will play out in complex reporting institutions. Labro (2015) provides a thorough discussion of similar issues in the realm of cost accounting, and there is clearly much to be done.

Second, and more specific to my arguments above, many research opportunities may lie in thinking of managers as researchers. Research is hard enough for full-time scholars, who make rather consistent mistakes despite their training, resources, and ability to select projects likely to result in clean inferences. Such mistakes should be even more likely among practicing managers who lack our advantages. Some of these mistakes are already well studied, such as the biases that populate any text on behavioral decision theory: motivated reasoning, availability biases, escalation biases, and overconfidence. Three classes of errors, however, are not only rarely discussed, but also bear close relations to Libby Boxes (and, therefore, to the design, deployment, and defense of reporting systems); they suggest particularly promising directions for future work.

To close this section on research, I reinforce its close relationship to practice by pointing out that essentialism is a tempting, but dangerous, belief for accountants, and for standard setters in particular. The basic elements of double-entry bookkeeping lack essential natures, as is evident from the difficulty of defining terms like “asset,” “liability,” and “equity.” The FASB has working definitions, but as I learn every year I attend their Financial Reporting Issues Conference, there are always items that cannot be classified without creating inconsistencies. For example, the 2014 Conference was titled “Improving the Conceptual Basis for Setting Standards: Where to Start?” The cases listed on the conference website show how the definitions can fail to guide classification even in very simple situations. Consider the definition of a liability in Statement of Financial Accounting Concepts (SFAC) 6 (FASB 1985): “probable future sacrifices of economic benefits arising from present obligations of a particular entity to transfer assets or provide services to other entities in the future as a result of past transactions or events.”

If a firm purchases goods from a supplier and promises to pay in stock, does it have a liability? Most participants were uncomfortable answering “yes,” because they do not see equity as an asset of the firm (and a liability requires transferring an asset or providing services, and issuing shares hardly seems a service), but they were also uncomfortable answering “no,” because the firm is clearly obligating itself at a cost to its current shareholders.

One conference attendee remarked that we should not be using definitions if we cannot agree on how to apply them to every situation. But this position was strongly rejected by Ludwig Wittgenstein (2009), who believed it a fool's errand to construct rules by which words must be used. Instead, “the meaning of a word is its use in the language” (Wittgenstein 2009, 43), and the meaning of “liability” must change as firms come up with new commercial arrangements (like promising equity to suppliers). To believe otherwise is to cling to an essentialist view akin to the psychological bias of functional fixation: using a report in the same way even though its meaning has changed (Ashton 1976). Accounting researchers can see the world more clearly by shedding the illusion of essentialism and exploring the problems that arise when practitioners do not.

Managerial reporting remains an essential part of the business curriculum and research literature. We can clarify its value to our nonaccounting colleagues by emphasizing the distinction between reporting and other fields, while clearly delineating our realm of expertise. But other commonly made distinctions generate more heat than light. Many studies classified as managerial reporting have powerful implications for external financial reporting, limiting the value of distinguishing between managerial and financial reporting, other than to say that the latter is a subfield of the former. And the most important tools in managerial reporting apply with as much force to the conduct of academic research as they do to the workplace.

Supplemental material (Figures 1 and 2): http://dx.doi.org/10.2308/jmar-51061.s01
