Expertise Ill-Defined: A Preliminary Exploration of Its Variability in Definition and Use in Research
While “expertise” is frequently used as a variable in human factors research, the criteria for defining this construct often lack clarity and consistency. This article briefly reviews common definitions of expertise and how it has been operationalized in research, highlighting the need for more nuanced categorization of expertise. We posit that expertise is multifaceted and propose a dichotomy that distinguishes “system expertise” from “task expertise,” with recency and frequency of task performance playing crucial roles alongside traditional metrics.
79
- 10.1177/001872089203400402
- Aug 1, 1992
- Human Factors: The Journal of the Human Factors and Ergonomics Society
1050
- 10.1017/cbo9780511816796.038
- Jun 26, 2006
499
- 10.1109/tsmc.1985.6313353
- Mar 1, 1985
- IEEE Transactions on Systems, Man, and Cybernetics
80
- 10.1037/rev0000161
- Jan 1, 2020
- Psychological Review
34
- 10.1037/10394-011
- Jan 1, 2001
33
- 10.1093/pq/pqy044
- Oct 10, 2018
- The Philosophical Quarterly
528
- 10.1016/j.apergo.2006.04.011
- Jun 6, 2006
- Applied Ergonomics
2931
- 10.1109/tsmc.1983.6313160
- May 1, 1983
- IEEE Transactions on Systems, Man, and Cybernetics
1249
- 10.1201/b12457
- Apr 1, 1999
99
- 10.2307/2026984
- Mar 1, 1991
- The Journal of Philosophy
- Research Article
1
- 10.1177/154193128502900508
- Oct 1, 1985
- Proceedings of the Human Factors Society Annual Meeting
This paper examines the role of human factors in the design of automobiles. A prime objective of our human factors profession is to improve the design of machines, thereby benefiting users in terms of comfort, convenience, operating speeds, accuracy and safety. Although the purpose of an automotive human factors program may be to achieve all of these objectives by improving vehicle design, the mechanisms for doing so probably cannot be discovered by focusing research attention on the vehicle element of the driver/vehicle/road system. In fact, the nonvehicle parts of this system are probably by far the most productive topics for future human factors research. The abilities of drivers, their limitations, and the tasks imposed upon them by the traffic environment should indicate how vehicles can be designed to best serve the drivers' needs. After twenty years of automotive study, the human factors research community is surprisingly unprepared to participate in vehicle design projects. The vehicle has too often ended up as the subject of human factors research, and researchers have been faced with the job of finding ways to improve the vehicle or a vehicle component without knowing enough about the intended user or the job the user must perform. The research community has only rudimentary and often incomplete background information about drivers and their traffic environments. The meager data base which is available suggests that traditional empirical approaches for evaluating machine design may be too cumbersome and time consuming to keep pace with other aspects of automotive technological evolution. The tradition of developing alternative versions of hardware and subjecting the alternatives to human performance tests may not be a viable methodology in the future. A look at the total automotive system shows why. Drivers in the United States accumulate about 1.6 trillion miles of travel each year. 
During the year, a typical driver makes over 60,000 discrete control operations, not counting steering wheel movements. The immensity of the automotive system means that very small driver error rates in control usage quickly accumulate into large numbers of error events nationwide. The best information available suggests that the U.S. driving public uses their turn signals 854 billion times a year. This amounts to a nationwide rate of 27,000 times per second. If the generic human error rate in using the turn signal can be assumed to be one error per 1000 operations, then turn signal errors are being made at the rate of 27 per second nationwide. Human factors research has tended to avoid error rate as a principal measure of performance in research programs. The reason becomes apparent when the number of tests which must be conducted to detect changes in rare events such as turn signal errors is computed. If two turn signal designs are to be compared and the researcher wants to be able to detect with 95 percent certainty (at the 5% level of significance) that the error rate has been cut in half by one of the two designs, then a large experiment is required. At a base human error rate of 1 per thousand, about 130,000 observations must be conducted to reliably detect the desired reduction in errors. If the base human error rate is only 1 in 100, then only 13,000 observations will be needed. Unfortunately, information on the frequency of driver control usage is sketchy, and data on driver error rates when using controls under the natural loading of the driving task are all but nonexistent. Other measures of human performance, such as speed of operation and accident involvement rates, have limited application in automotive design for reasons that are discussed in this paper. Some of the data bases which have been accumulated for human factors evaluations by Ford Motor Company are described in this paper. 
It is concluded that, if the human factors profession is to keep pace with automotive technological evolution, more research effort is going to have to be devoted to the study of drivers and driving-environment factors. For the sake of research efficiency, human factors principles and systems models which can be reliably generalized across vehicle designs must be developed. Several systems models that are under development at Ford are briefly described.
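The abstract's sample-size arithmetic can be reproduced with a standard two-proportion power calculation. Below is a minimal sketch, assuming a one-sided z-test with the normal approximation (an assumption consistent with the quoted "95 percent certainty at the 5% level of significance" and the ~130,000-observation figure); `n_per_group` is a hypothetical helper name, not the paper's method.

```python
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.95):
    """Observations per design needed to detect a drop in error
    rate from p1 to p2 with a one-sided two-proportion z-test
    (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided significance level
    z_b = NormalDist().inv_cdf(power)      # desired power ("certainty")
    pbar = (p1 + p2) / 2
    numerator = (z_a * (2 * pbar * (1 - pbar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Base error rate 1/1000 halved to 1/2000: roughly 65,000 observations
# per design, i.e. about 130,000 total, matching the abstract's figure.
```

Under these assumptions, raising the base rate to 1/100 drops the total to roughly 13,000 observations, again matching the abstract.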
- Research Article
6
- 10.1177/107118137902300102
- Oct 1, 1979
- Proceedings of the Human Factors Society Annual Meeting
In order to examine the interrelationships among participants in the Human Factors (HF) Research and Development (R&D) process, questionnaires soliciting information about research practices were sent to contractors, government laboratory managers and HF practitioners in industry. Although a majority of respondents appear to be satisfied with the way in which HF research is conducted, a sizeable minority have serious reservations about that process.
- Research Article
- 10.1518/001872097778543868
- Jun 1, 1997
- Human Factors: The Journal of the Human Factors and Ergonomics Society
INTRODUCTION Vicente (1997) presents several interesting ideas regarding human factors research and the relationship between basic and applied research. Many of the issues regarding basic and applied work have been discussed recently in textbooks (e.g., Payne & Conrad, 1997), chapters (e.g., Payne, Conrad, & Hager, 1997), and articles (e.g., Koriat & Goldsmith, 1994), and we feel that both these sources and Vicente's commentary provide important reminders of the importance of both kinds of research. Although we agree with some of Vicente's views, we differ on at least two important points. First, we question the utility and accuracy of Vicente's four-type categorization scheme for human factors research. Second, we disagree with his characterization of the work of Payne, Lang, and Blackwell (1995) and the alleged contradiction between their work and that of Hansen (1995). The first point concerns Vicente's prescriptions regarding human factors research. Vicente characterizes human factors research as including four types of research. We find this characterization problematic in two ways. First, the criteria used to classify research into these four types are vague, insofar as no operational definitions are given. The definitions Vicente provides are all relative; for instance, a Type 1 experiment is a highly controlled laboratory experiment, and a Type 2 experiment is a less controlled but more complex experiment (p. 324). Such definitions make it difficult to categorize any single study objectively. For example, Vicente cites the Gould et al. (1987) study of reading from paper versus CRT as Type 1 research. However, if one were to compare the work of Gould et al. with a tightly controlled experiment examining eye movements in reading, the work of Gould et al. might be considered Type 2 research. Second, we take issue with the picture Vicente paints concerning the relative strengths and weaknesses of each type of research. 
For example, he argues that Type 2 research is more likely to generalize than Type 1 research because the former is more representative of operational settings. Vicente states that this assertion is a statistical fact (p. 326). We agree that the more closely an experimental setting emulates a specific real-world setting, the more likely the results are to apply to that specific setting. However, the extent to which research findings can or should be generalized across settings depends on the extent to which critical factors controlling behavior are common across the original setting and the setting to which one generalizes. If a basic laboratory study identifies factors that influence performance, then these factors will allow one to make predictions about the real world. Vicente misses the point that knowing the extent to which a Type 2 experiment is representative requires knowing which factors determine representativeness and the setting to which one wishes to generalize. It is not a statistical fact. For applied researchers it is absolutely essential that the results of studies generalize beyond the original research setting. If studies lack generalizability, then with each new operational setting one is forced to conduct research specific to that setting. Such an approach is expensive and inefficient. Our second major point concerns the characterization of the research by Hansen (1995) and Payne et al. (1995) that Vicente uses to motivate his arguments. Vicente quotes statements from these two articles that, when taken at face value and out of context, appear to be at odds. Vicente asserts that there is a contradiction between the assertions made in these two papers and that in fact one of the two papers is incorrect (p. 324). As authors of one of the papers in question, we think it important to set the record straight. In our opinion, the glaring contradiction between Hansen and Payne et al. simply does not exist. …
- Research Article
- 10.1177/21695067231192531
- Sep 1, 2023
- Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Technologies and systems have become more complex with the advancement of modern digitalization. Human factors practitioners and researchers face challenges in designing products for everyday activities and complex domains. Learning one human factors methodology at a time is the most common approach, and finding complementary methodologies is sometimes difficult. In this paper, we summarize achievements needed in human factors research in three categories: motivation-related needs, task-related needs, and applied domain assumptions and characteristics. Some common methodologies are discussed, and we briefly introduce how to implement them in general resilient and cyber-resilient systems.
- Conference Article
- 10.1145/800049.801759
- Jan 1, 1982
Human Factors research is concerned primarily with minimizing unpredictable behavior in computer-based systems. Much Human Factors research stresses simplification of computer-based work into discrete, standard, and measurable sub-tasks. The performance of these elemental work-fragments can then be compared against “expert” performance times. In addition to increased worker output, simplified and standardized jobs allow managers to control work more completely. Similarly, standardized jobs usually allow the use of less-skilled labor. This aspect of Human Factors research is an outgrowth of Scientific Management (“time and motion” studies) and, ironically, the management theories of Charles Babbage, the 19th-century inventor of the computer. Scientific Management and Human Factors research share a number of important assumptions. For the most part, these assumptions have not been subjected to careful scrutiny. In time, they may prove the source of significant problems for both systems designers and users.
- Research Article
- 10.1177/154193121005400406
- Sep 1, 2010
- Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Much has already been said about what simulations and games can provide that other research methodologies do not. But the complexity and richness of the results they afford in human factors research is matched by the complexity and cost of their conception, design, implementation, and validation. Though this may seem a daunting challenge to those considering such platforms for their own research, this panel aims to air the promises and pitfalls of simulations and games by sharing historical exemplars, lessons learned, and current issues in their use for human factors research. The panelists represent decades of experience in military, medical, and civilian research domains and have worked through abundant successes and failures in this area. Key issues of discussion will include cases which stand out as exemplary instances of using simulations and games in human factors research, particularly those that produced results that would have been unattainable by other methods, the challenges and constraints of participant pools (e.g., naïve subjects, access to domain experts, and suitable compromises), development of viable and engaging simulations (e.g., the problem of software written by grad students, for grad students), collection of accurate and meaningful data, and the generalizability of such game and simulation platforms as well as the adaptability of off-the-shelf solutions.
- Research Article
82
- 10.1197/jamia.m1229
- Nov 1, 2002
- Journal of the American Medical Informatics Association
Patient safety has become a major public concern. Human factors research in other high-risk fields has demonstrated how rigorous study of factors that affect job performance can lead to improved outcome and reduced errors after evidence-based redesign of tasks or systems. These techniques have increasingly been applied to the anesthesia work environment. This paper describes data obtained recently using task analysis and workload assessment during actual patient care and the use of cognitive task analysis to study clinical decision making. A novel concept of “non-routine events” is introduced and pilot data are presented. The results support the assertion that human factors research can make important contributions to patient safety. Information technologies play a key role in these efforts.
- Research Article
- 10.1177/154193121005401201
- Sep 1, 2010
- Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Preventable patient harm due to errors in medication ordering, transcribing, dispensing and administration is a significant problem as discussed in the Institute of Medicine's 2007 report “Preventing Medication Errors”. Additionally, the report states that there are “enormous gaps in the knowledge base with regard to medication errors” and that the current methods available to solve this problem are inadequate (IOM, 2007, p2). Consequently, human factors research can contribute to the solution for this national problem by addressing the complexity in current medication systems and by designing user-centered solutions that support the real complex cognitive work of the clinicians. Panelists in this session, who have been funded by the federal government, private industry, and fellowships, will briefly share their human factors research on medication systems and then discuss how human factors researchers and practitioners can contribute to medication safety goals.
- Research Article
2
- 10.1089/dia.2015.1513
- Feb 1, 2015
- Diabetes technology & therapeutics
The impressive progress achieved in recent years in diabetes technologies has made diabetes technological devices such as continuous subcutaneous insulin infusion (CSII) and continuous glucose monitoring (CGM) a significant part of diabetes treatment. Many studies conducted in recent years emphasized the advantages of using these technologies. The concept of the “human factor” in diabetes technologies as discussed in this chapter has several different aspects. First, it can refer to the way patients are satisfied with the use of the device and whether it is perceived as convenient or inconvenient. For example, is the device perceived as “user friendly” (easy to learn and to operate, comfortable, does not cause many hassles)? Second, there is the issue of the effectiveness of the technology as it relates to patients' day-to-day diabetes management; for example, whether there is an improvement in glycemic control when one diabetes treatment regimen is compared to another (i.e., CSII vs. multiple daily injections (MDI)). Those two fundamental aspects may have different meanings for different groups. For example, different age groups (toddlers, children, adolescents, young adults, adults, and older people) can see different advantages and disadvantages in technological devices. The feasibility and utility of technological devices also need to fit the environments in which they will be used, such as school, the work place, and/or home. Specific subgroups such as diabetic youth with eating disorders can have unique interactions with diabetes technologies. In addition, diabetes technologies can be used as a measurement device, providing richer and more accurate data about patients' self-care that can contribute to our understanding of concepts such as adherence and satisfaction, and they can provide measurement tools to assess how glycemic control can affect cognition and intelligence. The present chapter will review articles published in the last year that have studied some of these issues.
- Research Article
37
- 10.1518/001872005775570970
- Dec 1, 2005
- Human Factors: The Journal of the Human Factors and Ergonomics Society
Bibliometric analyses use the citation history of scientific articles as data to measure scientific impact. This paper describes a bibliometric analysis of the 1682 papers and 2413 authors published in Human Factors from 1970 to 2000. The results show that Human Factors has substantial relative scientific influence, as measured by impact, immediacy, and half-life, exceeding the influence of comparable journals. Like other scientific disciplines, human factors research is a highly stratified activity. Most authors have published only one paper, and many papers are cited infrequently, if ever. A small number of authors account for a disproportionately large number of the papers published and citations received. However, the degree of stratification is not as extreme as in many other disciplines, possibly reflecting the diversity of the human factors discipline. A consistent trend of more authors per paper parallels a similar trend in other fields and may reflect the increasingly interdisciplinary nature of human factors research and a trend toward addressing human-technology interaction in more complex systems. Ten of the most influential papers from each of the last 3 decades illustrate trends in human factors research. Actual or potential applications of this research include considerations for the publication and distribution policy of Human Factors.
- Research Article
- 10.1002/fsat.3301_5.x
- Mar 1, 2019
- Food Science and Technology
Cultural revolution
- Research Article
2
- 10.1177/154193129003401414
- Oct 1, 1990
- Proceedings of the Human Factors Society Annual Meeting
Advances in technology are being incorporated in motor vehicles at an increasing pace. These technologies have applications that may improve driver safety, comfort, and convenience. Among the “Smart Vehicle” applications that have been proposed are navigation systems, near obstacle detection systems, drive-by-wire, and active suspensions. In order for these systems to be effective, they need to be designed in consonance with driver needs, capabilities, and limitations. One concern is that unless human factors issues are addressed, new technologies not only may fall short of their potential for improving safety, but also may confuse or overload the driver and reduce safety. Because of this concern, a major focus of the crash avoidance research program at the National Highway Traffic Safety Administration (NHTSA) is on human factors issues associated with new vehicle technologies. The panel will feature presentations of NHTSA perspectives and research programs followed by discussants from the private sector describing their views on human factors research needs for improving vehicle safety in the '90s. The introductory paper will present background on current and near future research directions at NHTSA. The next presentation will describe the status of the NHTSA program to develop an advanced research simulator. The final presentation will discuss potential applications of the simulator to human factors research. The discussants will provide reactions to the research envisioned by NHTSA as well as their own perspectives on research from the private sector point of view. Two of the discussants are from the motor vehicle manufacturing industry and one has an academic/private consulting background. It is hoped that this panel will provide a broad perspective on the challenges of vehicle safety research and will stimulate new interest by human factors researchers to become involved in this important field.
- Conference Article
9
- 10.1145/2897586.2897588
- May 14, 2016
In this paper we describe the usefulness of statistical validation techniques for human factors survey research. We need to investigate a diversity of validity aspects when creating metrics in human factors research, and we argue that the statistical tests used in other fields to support reliability and construct validity in surveys should also be applied to human factors research in software engineering more often. We also show briefly how such methods can be applied (Test-Retest, Cronbach's α, and Exploratory Factor Analysis).
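One of the three reliability checks the abstract names, Cronbach's α, can be illustrated with a short sketch. This is a generic textbook formula, not the authors' code; it assumes survey responses arranged as a respondents-by-items matrix, and `cronbach_alpha` is a hypothetical helper name.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Perfectly correlated items yield alpha = 1.0; values above ~0.7 are
# conventionally taken as acceptable internal consistency.
```

A common design choice is to report α per subscale rather than over the whole survey, since the statistic assumes the items measure a single construct.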
- Research Article
1
- 10.1177/154193129203601309
- Oct 1, 1992
- Proceedings of the Human Factors Society Annual Meeting
Accident databases commonly contain factual information about the time and date of each accident, vehicle characteristics, number of persons killed and injured, and other kinds of factual data. These attributes of the environment and equipment are usually directly represented in databases. In contrast, detailed analyses of accident causes, including human factors information, are frequently not represented because they are much more difficult to obtain and code. This paper explores the suitability of transportation accident databases for use in human factors research. Given the goal of reducing the number and severity of transportation accidents, accident data are a useful tool for understanding the common causes of accidents. Problems arise, however, because existing databases were typically not created explicitly for research purposes, and coding systems and file structures often omit or obscure useful information. Improved coding schemes and file structures that promote the use of databases for human factors research are discussed. Accident investigation methodologies that can improve the quality of human factors information in databases are also considered. Finally, problems associated with the use of existing databases are noted.
- Research Article
1
- 10.1177/154193120504901015
- Sep 1, 2005
- Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Traditionally, human factors research has been conducted in Western nations to answer the questions of Western practitioners. This approach was appropriate in the past and still works well in many situations. However, as the world of work is becoming more international, it is important to consider how national differences affect human factors applications. We review recent issues of the Human Factors journal to see how cultural differences are being addressed in research. Five domains where important cultural differences may influence research findings are reviewed: physical design, visual displays, symbolic communication, information technology, and managing complex processes. We present recommendations for incorporating greater cultural variation into Ergonomics and Human Factors work.