Expertise Ill-Defined: A Preliminary Exploration of Its Variability in Definition and Use in Research

Abstract

While “expertise” is frequently used as a variable in human factors research, the criteria for defining this construct often lack clarity and consistency. This article briefly reviews common definitions of expertise and how it has been operationalized in research, highlighting the need for more nuanced categorization of expertise. We posit that expertise is multifaceted and propose a dichotomy that distinguishes “system expertise” from “task expertise,” with recency and frequency of task performance playing crucial roles alongside traditional metrics.
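The proposed dichotomy lends itself to a structured operationalization. Below is a minimal sketch, in Python, of how a researcher might record the two facets separately alongside recency and frequency; the field names and the 90-day recency threshold are illustrative assumptions, not constructs defined in the article.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExpertiseProfile:
    """Illustrative record separating the two proposed facets of expertise."""
    system_hours: float    # system expertise: cumulative hours on this specific system
    task_years: float      # task expertise: years performing the task on any system
    last_performed: date   # recency: when the task was last carried out
    times_past_year: int   # frequency: task performances in the past year

    def is_current(self, max_gap_days: int = 90) -> bool:
        """Assumed recency criterion: practice within the last max_gap_days days."""
        return (date.today() - self.last_performed).days <= max_gap_days
```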

Similar Papers
  • Research Article
  • Cited by 1
  • 10.1177/154193128502900508
Is Human Factors Ready for the Automobile?
  • Oct 1, 1985
  • Proceedings of the Human Factors Society Annual Meeting
  • Lyman M Forbes

This paper examines the role of human factors in the design of automobiles. A prime objective of our human factors profession is to improve the design of machines, thereby benefiting users in terms of comfort, convenience, operating speeds, accuracy and safety. Although the purpose of an automotive human factors program may be to achieve all of these objectives by improving vehicle design, the mechanisms for doing so probably cannot be discovered by focusing research attention on the vehicle element of the driver/vehicle/road system. In fact, the nonvehicle parts of this system are probably by far the most productive topics for future human factors research. The abilities of drivers, their limitations, and the tasks imposed upon them by the traffic environment should indicate how vehicles can be designed to best serve the drivers' needs. After twenty years of automotive study, the human factors research community is surprisingly unprepared to participate in vehicle design projects. The vehicle has too often ended up the subject of human factors research and researchers have been faced with the job of finding ways to improve the vehicle or a vehicle component without knowing enough about the intended user or the job the user must perform. The research community has only rudimentary and often incomplete background information about drivers and their traffic environments. The meager data base which is available suggests that traditional empirical approaches for evaluating machine design may be too cumbersome and time consuming to keep pace with other aspects of automotive technological evolution. The tradition of developing alternative versions of hardware and subjecting the alternatives to human performance tests may not be a viable methodology in the future. A look at the total automotive system shows why. Drivers in the United States accumulate about 1.6 trillion miles of travel each year. During the year, a typical driver makes over 60,000 discrete control operations not counting steering wheel movements. The immensity of the automotive system means that very small driver error rates in control usage quickly accumulate into large numbers of error events nationwide. The best information available suggests that the U.S. driving public uses their turn signals 854 billion times a year. This amounts to a nationwide rate of 27,000 times per second. If the generic human error rate in using the turn signal can be assumed to be one error per 1000 operations, then turn signal errors are being made at the rate of 27 per second nationwide. Human factors research has tended to avoid error rate as a principal measure of performance in research programs. The reason becomes apparent when the number of tests which must be conducted to detect changes in rare events such as turn signal errors is computed. If two turn signal designs are to be compared and the researcher wants to be able to detect with 95 percent certainty (at the 5% level of significance) that the error rate has been cut in half by one of the two designs, then a large experiment is required. At a base human error rate of 1 per thousand, about 130,000 observations must be conducted to reliably detect the desired reduction in errors. If the base human error rate is only 1 in 100, then only 13,000 observations will be needed. Unfortunately, information on the frequency of driver control usage is sketchy, and data on driver error rates when using controls under the natural loading of the driving task is all but nonexistent.
Other measures of human performance, such as speed of operation and accident involvement rates, have limited application in automotive design for reasons that are discussed in this paper. Some of the data bases which have been accumulated for human factors evaluations by Ford Motor Company are described in this paper. It is concluded that, if the human factors profession is to keep pace with automotive technological evolution, more research effort is going to have to be devoted to the study of drivers and driving-environment factors. For the sake of research efficiency, human factors principles and systems models which can be reliably generalized across vehicle designs must be developed. Several systems models that are under development at Ford are briefly described.
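The sample-size figures in the abstract above follow from the standard normal-approximation formula for comparing two proportions. Here is a minimal sketch reproducing them, assuming a one-sided two-proportion z-test at the 5% significance level with 95% power (an assumption chosen because it matches Forbes's reported totals); the function name and structure are ours:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_n(p1: float, p2: float, alpha: float = 0.05, power: float = 0.95) -> float:
    """Approximate per-group sample size for a one-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # critical value for one-sided alpha
    z_b = NormalDist().inv_cdf(power)      # z value for the desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

for base_rate in (0.001, 0.01):
    n = two_proportion_n(base_rate, base_rate / 2)
    print(f"base rate {base_rate}: ~{2 * n:,.0f} total observations")
# prints roughly 130,000 and 13,000 total observations, as in the abstract

# The abstract's turn-signal arithmetic checks out the same way:
print(f"{854e9 / (365 * 24 * 3600):,.0f} turn-signal uses per second")  # ~27,000
```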

  • Research Article
  • Cited by 6
  • 10.1177/107118137902300102
The Influence of Government on Human Factors Research and Development
  • Oct 1, 1979
  • Proceedings of the Human Factors Society Annual Meeting
  • David Meister

In order to examine the interrelationships among participants in the Human Factors (HF) Research and Development (R&D) process, questionnaires soliciting information about research practices were sent to contractors, government laboratory managers and HF practitioners in industry. Although a majority of respondents appear to be satisfied with the way in which HF research is conducted, a sizeable minority have serious reservations about that process.

  • Research Article
  • 10.1518/001872097778543868
COMMENTARY Toward a Valid View of Human Factors Research: Response to Vicente (1997)
  • Jun 1, 1997
  • Human Factors: The Journal of the Human Factors and Ergonomics Society
  • David G Payne + 1 more

INTRODUCTION Vicente (1997) presents several interesting ideas regarding human factors research and the relationship between basic and applied research. Many of the issues regarding basic and applied work have been discussed recently in textbooks (e.g., Payne & Conrad, 1997), chapters (e.g., Payne, Conrad, & Hager, 1997), and articles (e.g., Koriat & Goldsmith, 1994), and we feel that both these sources and Vicente's commentary provide important reminders of the importance of both types of research. Although we agree with some of Vicente's views, we differ on at least two important points. First, we question the utility and accuracy of Vicente's four-type categorization scheme for human factors research. Second, we disagree with his characterization of the work of Payne, Lang, and Blackwell (1995) and the alleged contradiction between their work and that of Hansen (1995). The first point concerns Vicente's prescriptions regarding human factors research. Vicente characterizes human factors research as including four types of research. We find this characterization problematic in two ways. First, the criteria used to classify research into these four types are vague, insofar as no operational definitions are given. The definitions Vicente provides are all relative; for instance, a Type 1 experiment is a highly controlled laboratory experiment, and a Type 2 experiment is a less controlled but more complex experiment (p. 324). Such definitions make it difficult to categorize any single study objectively. For example, Vicente cites the Gould et al. (1987) study of reading from paper versus CRT as Type 1 research. However, if one were to compare the work of Gould et al. with a tightly controlled experiment examining eye movements during reading, the work of Gould et al. might be considered Type 2 research. Second, we take issue with the picture Vicente paints concerning the relative strengths and weaknesses of each type of research. For example, he argues that Type 2 research is more likely to generalize than Type 1 research because the former is more representative of operational settings. Vicente states that this assertion is a statistical fact (p. 326). We agree that the more closely an experimental setting emulates a specific real-world setting, the more likely the results are to apply to that specific setting. However, the extent to which research findings can or should be generalized across settings depends on the extent to which critical factors controlling behavior are common across the original setting and the setting to which one generalizes. If a basic laboratory study identifies factors that influence performance, then these factors will allow one to make predictions about the real world. Vicente misses the point that knowing the extent to which a Type 2 experiment is representative requires knowing which factors determine representativeness and the setting to which one wishes to generalize. It is not a statistical fact. For applied researchers it is absolutely essential that the results of studies generalize beyond the original research setting. If studies lack generalizability, then with each new operational setting one is forced to conduct research specific to that setting. Such an approach is expensive and inefficient. Our second major point concerns the characterization of the research by Hansen (1995) and Payne et al. (1995) that Vicente uses to motivate his arguments. Vicente quotes statements from these two articles that, when taken at face value and out of context, appear to be at odds. Vicente asserts that there is a glaring contradiction between the assertions made in these two papers and that in fact one of the two papers is incorrect (p. 324).
As authors of one of the papers in question, we think it important to set the record straight. In our opinion, the glaring contradiction between Hansen and Payne et al. simply does not exist. …

  • Research Article
  • 10.1177/21695067231192531
Rethinking Human Factors Methodologies: User Needs & Applied Domain Characteristics
  • Sep 1, 2023
  • Proceedings of the Human Factors and Ergonomics Society Annual Meeting
  • Ruixuan Li + 1 more

Technologies and systems have become more complex with the advancement of modern digitalization. Human factors practitioners and researchers face challenges in designing products for everyday activities and complex domains. Learning one human factors methodology at a time is the most common approach, and finding complementary methodologies is sometimes difficult. In this paper, we summarize achievements needed in human factors research in three categories: motivation-related needs, task-related needs, and applied domain assumptions and characteristics. Some common methodologies are discussed, and we briefly introduce how to implement them in general resilient and cyber-resilient systems.

  • Conference Article
  • 10.1145/800049.801759
Human relations, scientific management, and human factors research
  • Jan 1, 1982
  • Philip Kraft + 1 more

Human Factors research is concerned primarily with minimizing unpredictable behavior in computer-based systems. Much Human Factors research stresses simplification of computer-based work into discrete, standard, and measurable sub-tasks. The performance of these elemental work-fragments can then be compared against “expert” performance times. In addition to increased worker output, simplified and standardized jobs allow managers to control work more completely. Similarly, standardized jobs usually allow the use of less-skilled labor. This aspect of Human Factors research is an outgrowth of Scientific Management (“time and motion” studies) and, ironically, the management theories of Charles Babbage, the 19th-century inventor of the computer. Scientific Management and Human Factors research share a number of important assumptions. For the most part, these assumptions have not been subjected to careful scrutiny. In time, they may prove the source of significant problems for both systems designers and users.

  • Research Article
  • 10.1177/154193121005400406
Multidisciplinary Perspectives on Simulations and Games in Human Factors Research
  • Sep 1, 2010
  • Proceedings of the Human Factors and Ergonomics Society Annual Meeting
  • Mark S Pfaff + 5 more

Much has already been said about what simulations and games can provide that other research methodologies do not. But the complexity and richness of the results they afford in human factors research is matched by the complexity and cost of their conception, design, implementation, and validation. Though this may seem a daunting challenge to those considering such platforms for their own research, this panel aims to air the promises and pitfalls of simulations and games by sharing historical exemplars, lessons learned, and current issues in their use for human factors research. The panelists represent decades of experience in military, medical, and civilian research domains and have worked through abundant successes and failures in this area. Key issues of discussion will include cases which stand out as exemplary instances of using simulations and games in human factors research, particularly those that produced results that would have been unattainable by other methods, the challenges and constraints of participant pools (e.g., naïve subjects, access to domain experts, and suitable compromises), development of viable and engaging simulations (e.g., the problem of software written by grad students, for grad students), collection of accurate and meaningful data, and the generalizability of such game and simulation platforms as well as the adaptability of off-the-shelf solutions.

  • Research Article
  • Cited by 82
  • 10.1197/jamia.m1229
Human Factors Research in Anesthesia Patient Safety: Techniques to Elucidate Factors Affecting Clinical Task Performance and Decision Making
  • Nov 1, 2002
  • Journal of the American Medical Informatics Association
  • M B Weinger

Patient safety has become a major public concern. Human factors research in other high-risk fields has demonstrated how rigorous study of factors that affect job performance can lead to improved outcome and reduced errors after evidence-based redesign of tasks or systems. These techniques have increasingly been applied to the anesthesia work environment. This paper describes data obtained recently using task analysis and workload assessment during actual patient care and the use of cognitive task analysis to study clinical decision making. A novel concept of “non-routine events” is introduced and pilot data are presented. The results support the assertion that human factors research can make important contributions to patient safety. Information technologies play a key role in these efforts.

  • Research Article
  • 10.1177/154193121005401201
Human Factors Contributions toward Medication Safety
  • Sep 1, 2010
  • Proceedings of the Human Factors and Ergonomics Society Annual Meeting
  • Ben-Tzion Karsh + 5 more

Preventable patient harm due to errors in medication ordering, transcribing, dispensing and administration is a significant problem as discussed in the Institute of Medicine's 2007 report “Preventing Medication Errors”. Additionally, the report states that there are “enormous gaps in the knowledge base with regard to medication errors” and that the current methods available to solve this problem are inadequate (IOM, 2007, p2). Consequently, human factors research can contribute to the solution for this national problem by addressing the complexity in current medication systems and by designing user-centered solutions that support the real complex cognitive work of the clinicians. Panelists in this session, who have been funded by the federal government, private industry, and fellowships, will briefly share their human factors research on medication systems and then discuss how human factors researchers and practitioners can contribute to medication safety goals.

  • Research Article
  • Cited by 2
  • 10.1089/dia.2015.1513
Diabetes technology and the human factor.
  • Feb 1, 2015
  • Diabetes technology & therapeutics
  • Alon Liberman + 2 more

The impressive progress achieved in recent years in diabetes technologies has made diabetes technological devices such as continuous subcutaneous insulin infusion (CSII) and continuous glucose monitoring (CGM) a significant part of diabetes treatment. Many studies conducted in recent years have emphasized the advantages of using these technologies. The concept of the “human factor” in diabetes technologies as discussed in this chapter has several different aspects. First, it can refer to the way patients are satisfied with the use of the device and whether it is perceived as convenient or inconvenient. For example, is the device perceived as “user friendly” (easy to learn and to operate, comfortable, does not cause many hassles)? Second, there is the issue of effectiveness of the technology as it relates to patients' day-to-day diabetes management. For example, whether there is an improvement in glycemic control when one diabetes treatment regimen is compared with another (e.g., CSII vs. multiple daily injections (MDI)). Those two fundamental aspects may have different meanings for different groups. For example, different age groups (toddlers, children, adolescents, young adults, adults, and older people) can see different advantages and disadvantages in technological devices. The feasibility and utility of technological devices also need to fit the environments in which they will be used, such as school, the work place, and/or home. Specific subgroups such as diabetic youth with eating disorders can have unique interactions with diabetes technologies. In addition, diabetes technologies can be used as a measurement device, providing richer and more accurate data about patients' self-care that can contribute to our understanding of concepts such as adherence and satisfaction, and they can provide measurement tools to assess how glycemic control can affect cognition and intelligence. The present chapter will review articles published in the last year that have studied some of these issues.

  • Research Article
  • Cited by 37
  • 10.1518/001872005775570970
Bibliometric Analysis of Human Factors (1970-2000): A Quantitative Description of Scientific Impact
  • Dec 1, 2005
  • Human Factors: The Journal of the Human Factors and Ergonomics Society
  • John D Lee + 2 more

Bibliometric analyses use the citation history of scientific articles as data to measure scientific impact. This paper describes a bibliometric analysis of the 1682 papers and 2413 authors published in Human Factors from 1970 to 2000. The results show that Human Factors has substantial relative scientific influence, as measured by impact, immediacy, and half-life, exceeding the influence of comparable journals. Like other scientific disciplines, human factors research is a highly stratified activity. Most authors have published only one paper, and many papers are cited infrequently, if ever. A small number of authors account for a disproportionately large number of the papers published and citations received. However, the degree of stratification is not as extreme as in many other disciplines, possibly reflecting the diversity of the human factors discipline. A consistent trend of more authors per paper parallels a similar trend in other fields and may reflect the increasingly interdisciplinary nature of human factors research and a trend toward addressing human-technology interaction in more complex systems. Ten of the most influential papers from each of the last 3 decades illustrate trends in human factors research. Actual or potential applications of this research include considerations for the publication and distribution policy of Human Factors.
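Of the three indicators named above, half-life is the least self-explanatory. The sketch below shows one conventional way to compute a cited half-life from a citation-age distribution; the function and the toy data are illustrative assumptions, not figures from the paper:

```python
def cited_half_life(citations_by_age: list[int]) -> float:
    """Age (in years) by which half of all citations have accumulated,
    with linear interpolation inside the crossing year."""
    half = sum(citations_by_age) / 2
    running = 0
    for age, count in enumerate(citations_by_age):
        if running + count >= half:
            return age + (half - running) / count
        running += count
    return float(len(citations_by_age))

# Toy distribution: citations_by_age[i] = citations to articles i years old
print(round(cited_half_life([10, 30, 40, 35, 25, 20, 15, 10, 5]), 1))  # 3.4
```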

  • Research Article
  • 10.1002/fsat.3301_5.x
Cultural revolution
  • Mar 1, 2019
  • Food Science and Technology

  • Research Article
  • Cited by 2
  • 10.1177/154193129003401414
Smart Vehicles: New Directions for Human Factors Safety Research
  • Oct 1, 1990
  • Proceedings of the Human Factors Society Annual Meeting
  • Michael Perel + 5 more

Advances in technology are being incorporated in motor vehicles at an increasing pace. These technologies have applications that may improve driver safety, comfort, and convenience. Among the “Smart Vehicle” applications that have been proposed are navigation systems, near obstacle detection systems, drive-by-wire, and active suspensions. In order for these systems to be effective, they need to be designed in consonance with driver needs, capabilities, and limitations. One concern is that unless human factors issues are addressed, new technologies not only may fall short of their potential for improving safety, but also may confuse or overload the driver and reduce safety. Because of this concern, a major focus of the crash avoidance research program at the National Highway Traffic Safety Administration (NHTSA) is on human factors issues associated with new vehicle technologies. The panel will feature presentations of NHTSA perspectives and research programs followed by discussants from the private sector describing their views on human factors research needs for improving vehicle safety in the '90s. The introductory paper will present background on current and near future research directions at NHTSA. The next presentation will describe the status of the NHTSA program to develop an advanced research simulator. The final presentation will discuss potential applications of the simulator to human factors research. The discussants will provide reactions to the research envisioned by NHTSA as well as their own perspectives on research from the private sector point of view. Two of the discussants are from the motor vehicle manufacturing industry and one has an academic/private consulting background. It is hoped that this panel will provide a broad perspective on the challenges of vehicle safety research and will stimulate new interest by human factors researchers to become involved in this important field.

  • Conference Article
  • Cited by 9
  • 10.1145/2897586.2897588
Useful statistical methods for human factors research in software engineering
  • May 14, 2016
  • Lucas Gren + 1 more

In this paper we describe the usefulness of statistical validation techniques for human factors survey research. We need to investigate a diversity of validity aspects when creating metrics in human factors research, and we argue that the statistical tests used in other fields to get support for reliability and construct validity in surveys should also be applied to human factors research in software engineering more often. We also show briefly how such methods can be applied (Test-Retest, Cronbach's α, and Exploratory Factor Analysis).
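As a concrete illustration of one technique the authors name, here is a minimal sketch of Cronbach's α computed from its standard formula, α = k/(k−1) · (1 − Σσᵢ²/σₜ²); the survey scores below are fabricated for the example:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of respondent totals
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Fabricated 5-item Likert responses from 4 respondents
scores = np.array([[4, 5, 4, 4, 5],
                   [2, 3, 2, 3, 2],
                   [5, 5, 4, 5, 4],
                   [3, 3, 3, 2, 3]])
print(round(cronbach_alpha(scores), 2))  # ~0.96 for this toy data
```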

  • Research Article
  • Cited by 1
  • 10.1177/154193129203601309
Using Transportation Accident Databases in Human Factors Research
  • Oct 1, 1992
  • Proceedings of the Human Factors Society Annual Meeting
  • David L Mayer + 1 more

Accident databases commonly contain factual information about the time and date of each accident, vehicle characteristics, number of persons killed and injured, and other kinds of factual data. These attributes of the environment and equipment are usually directly represented in databases. In contrast, detailed analyses of accident causes, including human factors information, are frequently not represented because they are much more difficult to obtain and code. This paper explores the suitability of transportation accident databases for use in human factors research. Given the goal of reducing the number and severity of transportation accidents, it is useful to use accident data as a tool to understand the common causes of accidents. Problems arise, however, because existing databases were typically not created explicitly for research purposes, and coding systems and file structures often omit or obscure useful information. Improved coding schemes and file structures that promote the use of databases for human factors research are discussed. Accident investigation methodologies that can improve the quality of human factors information in databases are also considered. Finally, problems associated with the use of existing databases are noted.

  • Research Article
  • Cited by 1
  • 10.1177/154193120504901015
Making ‘Human Factors’ Truly Human: Cultural Considerations in Human Factors Research and Practice
  • Sep 1, 2005
  • Proceedings of the Human Factors and Ergonomics Society Annual Meeting
  • Katherine Lippa + 1 more

Traditionally, human factors research has been conducted in Western nations to answer the questions of Western practitioners. This approach was appropriate in the past and still works well in many situations. However, as the world of work is becoming more international it is important to consider how national differences affect human factors applications. We review recent issues of the Human Factors journal to see how cultural differences are being addressed in research. Five domains where important cultural differences may influence research findings are reviewed. These areas are physical design, visual displays, symbolic communication, information technology and managing complex processes. We present recommendations for incorporating greater cultural variation into ergonomics and human factors work.

More from: Ergonomics in Design: The Quarterly of Human Factors Applications
  • Research Article
  • 10.1177/10648046251395017
In the news
  • Nov 2, 2025
  • Ergonomics in Design: The Quarterly of Human Factors Applications

  • Research Article
  • 10.1177/10648046251384012
Effectiveness of Eye-Tracking Metrics in Human-Centric Design of Human-Machine Interface: Cases on Process Control Operations
  • Oct 20, 2025
  • Ergonomics in Design: The Quarterly of Human Factors Applications
  • Asher Ahmed Malik + 5 more

  • Research Article
  • 10.1177/10648046251381883
Implementing a Fatigue Risk Management Program in Chilean Transport: A Multifactorial and Participatory Case Study
  • Sep 26, 2025
  • Ergonomics in Design: The Quarterly of Human Factors Applications
  • Héctor Ignacio Castellucci + 3 more

  • Research Article
  • 10.1177/10648046251377959
Expertise Ill-Defined: A Preliminary Exploration of Its Variability in Definition and Use in Research
  • Sep 25, 2025
  • Ergonomics in Design: The Quarterly of Human Factors Applications
  • Hyun-Gee Jei + 3 more

  • Research Article
  • 10.1177/10648046251374943
The Design and Evaluation of Passive Shoulder Exoskeleton in Reducing Physical Demands
  • Sep 7, 2025
  • Ergonomics in Design: The Quarterly of Human Factors Applications
  • Nithisate Petju + 3 more

  • Research Article
  • 10.1177/10648046251366008
User Expertise and Manual Materials Handling Risk Assessment: A Study of Chile’s 2018 Guide
  • Aug 13, 2025
  • Ergonomics in Design: The Quarterly of Human Factors Applications
  • Héctor Ignacio Castellucci + 4 more

  • Research Article
  • 10.1177/10648046251367187
Prioritizing Tractor-Driver Safety: An AHP-Based Analysis of Key Factors in Northwestern India’s Agriculture
  • Aug 12, 2025
  • Ergonomics in Design: The Quarterly of Human Factors Applications
  • Chander Prakash + 4 more

  • Research Article
  • 10.1177/10648046251361605
Does Expertise Matter? A Study of Chile’s Ergonomic Risk Assessment Tool
  • Aug 5, 2025
  • Ergonomics in Design: The Quarterly of Human Factors Applications
  • Héctor Ignacio Castellucci + 5 more

  • Research Article
  • 10.1177/10648046251356305
What Can You See Over the Bonnet: A Detailed Assessment of Forward Visibility in Cars
  • Jul 17, 2025
  • Ergonomics in Design: The Quarterly of Human Factors Applications
  • Sriram Rajakumaran + 2 more

  • Research Article
  • 10.1177/10648046251357542
In the news
  • Jul 11, 2025
  • Ergonomics in Design: The Quarterly of Human Factors Applications
