An implementation of argument-based validation for assessing college major preferences with a hybrid of Likert-rating and forced-choice formats

Abstract

Choosing a college major is a significant career decision. The College Major Preference Assessment (CMPA) supports this process by using three rounds of Likert-scale ratings to eliminate majors that are not preferred, followed by four rounds of forced-choice questions that narrow an individual’s top three choices from a list of 50 options. This study used argument-based validation to evaluate whether the CMPA’s design effectively serves its purpose. Researchers examined the assessment’s claims, inferences, warrants, assumptions, supporting evidence, and rebuttals. Data collected for Psychology and Education majors were analyzed using latent trait models, revealing psychometric qualities that match the goals of each round of assessment. These findings were confirmed in a separate, independent group. Additionally, the study demonstrates that argument-based validation can be applied flexibly to assessments with mixed formats.

Similar Papers
  • Research Article
  • Citations: 156
  • 10.1111/j.1745-3984.1977.tb00030.x
LATENT TRAIT MODELS AND THEIR USE IN THE ANALYSIS OF EDUCATIONAL TEST DATA
  • Jun 1, 1977
  • Journal of Educational Measurement
  • Ronald K Hambleton + 1 more

A theory of latent traits supposes that in testing situations, examinee performance on a test can be predicted (or explained) by defining characteristics of examinees, referred to as traits, estimating scores for examinees on these traits, and using the scores to predict or explain test performance (Lord and Novick, 1968). Since the traits are not directly measurable and therefore unobservable, they are often referred to as latent traits or abilities. A latent trait model specifies a relationship between examinee test performance and the traits or abilities assumed to underlie performance on the test. The relationship between the observable and the unobservable quantities is described by a mathematical function. For this reason, latent trait models are mathematical models. Also, latent trait models are based on assumptions about the test data. When selecting a particular latent trait model to apply to one's test data, it is necessary to consider whether the test data satisfy the assumptions of the model. If they do not, different test models should be considered. Alternately, some psychometricians (for example, Wright, 1968) have recommended that test developers design their tests so as to satisfy the assumptions of the particular latent trait model they are interested in using. Recent work by Lord (1968, 1974a), Lord and Novick (1968), Wright (1968), Wright and Panchapakesan (1969), Samejima (1969, 1972), Bock and Wood (1971), and Whitely and Dawis (1974) has been helpful in introducing educational measurement specialists to the topic of latent trait models. Also, the work of these and other individuals has contributed substantially to the current interest among test practitioners in applying the models to a wide variety of educational and psychological testing problems.
Latent trait models are now being used to explain examinee test performance as well as to provide a framework for solving test design problems and other important testing questions that have, to date, gone unresolved (Lord, 1977; Wright, 1977a, 1977b). Why has the use of latent trait models in practical testing situations been low? There are at least five reasons. For one, the topic of latent trait theory represents a complex branch of the field of test theory. The advanced mathematical skills required to study many of the papers published on the topic have probably discouraged many potential…
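The "mathematical function" relating the latent trait to observed performance that Hambleton describes can be sketched with the two-parameter logistic (2PL) model, one of the standard latent trait models; the item parameters below are invented for illustration, not taken from the paper.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response
    given latent trait theta, item discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item with discrimination 1.2 and difficulty 0.0:
# the probability of success rises monotonically with the trait.
for theta in (-2.0, 0.0, 2.0):
    print(f"theta={theta:+.1f}  P(correct)={p_correct(theta, 1.2, 0.0):.3f}")
```

When the trait level equals the item's difficulty, the model gives a probability of exactly 0.5, which is one way the "unobservable quantity" is anchored to observable responses.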

  • Research Article
  • Citations: 78
  • 10.1111/j.1745-9125.1990.tb01325.x
A LATENT TRAIT APPROACH TO UNIFYING CRIMINAL CAREERS
  • May 1, 1990
  • Criminology
  • David C Rowe + 2 more

We propose a latent trait model that simultaneously accounts for both participation in crime and the frequency of crimes, phenomena that the criminal career model attributes to different causal processes. The criminal career model is predicated on a categorical distinction between active offenders and nonoffenders, but the latent trait model assumes a continuous distribution of propensity to offend. Our specific statistical model relates a relatively stable and general latent propensity to engage in crime to the frequency of criminal behavior. The latent trait model successfully fit both the proportion of offenders (participation) and frequency of offending for several samples and several measures of offending. The model fit both samples of whites and nonwhites and both males and females. This shows that separate causal processes are not necessary to account for group differences in frequency and in participation, which disproves the major evidence in favor of the criminal career model. Finally, the latent trait model yielded evidence that disparate sex differences in rates of participation for different categories of offenses are consistent with a single difference on a latent trait. This demonstrates the latent trait model's potential for parsimoniously unifying knowledge about criminal careers.

  • Front Matter
  • Citations: 44
  • 10.1016/0160-2896(80)90010-0
Latent trait models in the study of intelligence
  • Apr 1, 1980
  • Intelligence
  • Susan E Whitely

  • Research Article
  • Citations: 68
  • 10.1177/014662169401800304
Estimation of Reliability Coefficients Using the Test Information Function and Its Modifications
  • Sep 1, 1994
  • Applied Psychological Measurement
  • Fumiko Samejima

The reliability coefficient and the standard error of measurement in classical test theory are not properties of a specific test, but are attributed to both a specific test and a specific trait distribution. In latent trait models, or item response theory, the test information function (TIF) provides more precise local measures of accuracy in trait estimation than are available from the reliability coefficient. The reliability coefficient is still widely used, however, and is popular because of its simplicity. Thus, it is worthwhile to relate it to the TIF. In this paper, the reliability coefficient is predicted from the TIF, or two modified TIF formulas, and a specific trait distribution. Examples demonstrate the variability of the reliability coefficient across different trait distributions, and the results are compared with empirical reliability coefficients. Practical suggestions are given as to how to make better use of the reliability coefficient. Index terms: adaptive testing, bias, classical test theory, item information function, latent trait models, maximum likelihood estimation, reliability coefficient, standard error of measurement, test information function, trait estimation.

  • Research Article
  • Citations: 15
  • 10.1027/1015-5759/a000609
The Multidimensional Forced-Choice Format as an Alternative for Rating Scales
  • Jul 1, 2020
  • European Journal of Psychological Assessment
  • Eunike Wetzel + 2 more

When constructing a questionnaire to assess a psychological construct, one important decision researchers have to make is how to collect responses from test takers; that is, which response format to implement. We argued in a previous editorial published in the European Journal of Psychological Assessment (EJPA) that this decision deserves more attention and should be an explicit step in the test construction process (Wetzel & Greiff, 2018). The reason for this is that it can be a consequential decision that influences the validity of conclusions we draw about test takers' trait levels or about relations between constructs and criteria (Brown & Maydeu-Olivares, 2013; Wetzel & Frick, 2020). In this editorial, which can be considered a follow-up to the first one, we will take a closer look at two response formats¹: rating scales (RS), the current default in most questionnaires, and the multidimensional forced-choice (MFC) format, an alternative that is currently the focus of a considerable body of research. We will first define the two formats and point out some of their advantages and disadvantages. Then, we will provide a summary and evaluation of research comparing RS and MFC. Third, we will draw some preliminary conclusions on the feasibility of applying MFC as an alternative to RS. Fourth, we will point out some open research questions. We will end with some recommendations and implications for readers and authors of EJPA. In this editorial, the overall goal is to give researchers and test users an overview of the current state of the research on RS versus MFC and to provide guidance on the feasibility of applying MFC in research on psychological assessment. ¹ The multidimensional forced-choice format is both an item and a response format. For simplicity in the comparison with rating scales, we refer to it as a response format.

  • Research Article
  • 10.1080/15366367.2024.2365082
A Latent Trait Approach to the Measurement of Physical Fitness
  • Jul 31, 2024
  • Measurement: Interdisciplinary Research and Perspectives
  • Gerhard Tutz

A latent trait model for the measurement of physical fitness is proposed. It links the performance in competitions or the laboratory to a latent trait. In contrast to usual approaches that simply consider the sum of scores obtained from quite differently scaled tasks like time to run 100 m and jumping distance to obtain a measure of fitness, it distinguishes between the trait to be measured and the performance in the tradition of latent trait models. The latent trait model that is used is able to account for continuous observations, which distinguishes it from the classical latent trait models for binary or categorical data typically used in the measurement of mental abilities. Tools for the investigation of the contribution of single tasks and the link between task performances with respect to the latent trait are proposed and illustrated exemplarily by using decathlon data.

  • Research Article
  • Citations: 1
  • 10.1186/s12905-024-03465-6
Measuring domestic violence against Egyptian women and its consequent cost using a latent variable model
  • Dec 2, 2024
  • BMC Women's Health
  • Mai Sherif Hafez + 2 more

Background: Domestic violence is a threatening worldwide problem. Its consequences for women can be dramatic, as it negatively affects women’s quality of life, reflected in their general wellbeing including physical, mental, emotional and sexual health, in addition to the economic cost. Both domestic violence and its cost are multidimensional constructs that cannot be directly measured. Methodology: In this study, a latent trait model is used by applying item response theory to measure both domestic violence and its consequent cost via thirty-five observed variables. Accordingly, the study fills a gap in the literature since it is the first attempt to examine the relationship between domestic violence and its consequent cost in Egypt using latent variable modelling rather than simple descriptive statistics. Each construct is considered a multidimensional latent variable. The overall latent trait model also estimates the relationship between domestic violence and its consequent cost. The effect of a number of socioeconomic covariates on domestic violence is examined within the model. The proposed model is fitted to data from the 2015 Egypt Economic Cost of Gender-Based Violence Survey (ECGBVS) using Mplus software. Results: The study shows that psychological violence is as important as physical violence in measuring domestic violence. The cost resulting from domestic violence relies in its measurement on both the reduced quality of life and the monetary cost endured by the violated woman and children. For socioeconomic covariates, it is shown that domestic violence is affected by the woman’s and husband’s age, educational level, and the husband’s occupational status. Conclusion: Domestic violence is measured by summarizing four forms of violence: physical, psychological, sexual and economic violence, in a single continuous latent variable measuring “Domestic Violence”. Similarly, cost is measured by summarizing three forms of consequent cost of violence: economic cost, cost to children and cost to women’s quality of life, in another single continuous latent variable, “Cost”. Each of these dimensions is measured by a number of aspects, reflecting the multidimensional nature of the variables. The fitted latent trait model confirmed the positive relationship between Domestic Violence and its consequent multidimensional cost.

  • Research Article
  • Citations: 2
  • 10.1016/0883-0355(89)90006-2
Latent trait models as an information-processing approach to testing
  • Jan 1, 1989
  • International Journal of Educational Research
  • Susan E Embretson

  • Research Article
  • Citations: 142
  • 10.1207/s15327906mbr1901_3
An Empirical Study of Various Indices for Determining Unidimensionality.
  • Jan 1, 1984
  • Multivariate Behavioral Research
  • John Hattie

Following a review of indices proposed to assess the unidimensionality of a set of items, Hattie (Note 1) identified 87 indices. The purpose of the present paper is to describe a simulation that determines the adequacy of various indices as decision criteria for assessing unidimensionality. A three-parameter, multivariate, logistic latent-trait model was used to generate item responses. Levels of difficulty, guessing, and discrimination, as well as the number of factors underlying the data, were varied. Many of the indices evaluated were highly intercorrelated. Some resulted in estimates outside their theoretical bounds, and most were particularly sensitive to the intercorrelations between factors. Indices based on answer patterns, reliability, component analysis, linear and nonlinear factor analysis, and on the one-parameter latent trait model were ineffective. Using the sum of absolute residuals from the two-parameter latent trait model, indices were obtained that were able to discriminate between cases with one latent trait and cases with more than one latent trait.
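The data-generation step Hattie describes, simulating item responses from a three-parameter logistic (3PL) model, can be sketched as follows. This is a minimal illustration of the technique, not the paper's actual simulation design, and the item parameters are invented.

```python
import math
import random

def p_3pl(theta, a, b, c):
    """3PL model: the guessing parameter c raises the lower asymptote,
    so even low-trait examinees answer correctly with probability >= c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def simulate(n_persons, items, seed=0):
    """Generate a 0/1 response matrix from a unidimensional 3PL model."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_persons):
        theta = rng.gauss(0.0, 1.0)  # latent trait drawn from N(0, 1)
        data.append([1 if rng.random() < p_3pl(theta, a, b, c) else 0
                     for a, b, c in items])
    return data

# Hypothetical items: (discrimination, difficulty, guessing) triples.
items = [(1.0, 0.0, 0.2), (1.5, -0.5, 0.25), (0.8, 0.5, 0.2)]
responses = simulate(1000, items)
print(len(responses), len(responses[0]))  # persons x items
```

Varying the difficulty, guessing, and discrimination values, and the number of underlying traits, is what lets a simulation like Hattie's test how each unidimensionality index behaves.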

  • Research Article
  • Citations: 13
  • 10.1186/s12884-016-1190-7
Combining adverse pregnancy and perinatal outcomes for women exposed to antiepileptic drugs during pregnancy, using a latent trait model
  • Jan 6, 2017
  • BMC Pregnancy and Childbirth
  • Xuerong Wen + 8 more

Background: Application of latent variable models in medical research is becoming increasingly popular. A latent trait model is developed to combine rare birth defect outcomes in an index of infant morbidity. Methods: This study employed four statewide, retrospective 10-year data sources (1999 to 2009). The study cohort consisted of all female Florida Medicaid enrollees who delivered a live singleton infant during the study period. Drug exposure was defined as any exposure to antiepileptic drugs (AEDs) during pregnancy. Mothers with no AED exposure served as the AED-unexposed group for comparison. Four adverse outcomes, birth defect (BD), abnormal condition of the newborn (ACNB), low birth weight (LBW), and pregnancy and obstetrical complication (PCOC), were examined and combined using a latent trait model to generate an overall severity index. Unidimensionality, local independence, internal homogeneity, and construct validity were evaluated for the combined outcome. Results: The study cohort consisted of 3183 mother-infant pairs in the total AED group, 226 in the valproate-only subgroup, and 43,956 in the AED-unexposed group. Compared to the AED-unexposed group, the rate of BD was higher in both the total AED group (12.8% vs. 10.5%, P < .0001) and the valproate-only subgroup (19.6% vs. 10.5%, P < .0001). The combined outcome was significantly correlated with the length of hospital stay during delivery in both the total AED group (Rho = 0.24, P < .0001) and the valproate-only subgroup (Rho = 0.16, P = .01). The mean score for the combined outcome in the total AED group was significantly higher than in the AED-unexposed group (2.04 ± 0.02 vs. 1.88 ± 0.01, P < .0001), whereas that of the valproate-only subgroup was not. Conclusions: Latent trait modeling can be an effective tool for combining adverse pregnancy and perinatal outcomes to assess prenatal exposure to AEDs, but evaluation of the selected components is essential to ensure the validity of the combined outcome.

  • Research Article
  • 10.1038/s41598-024-80145-3
On Bayesian estimation of a latent trait model defined by a rank-based likelihood
  • Nov 22, 2024
  • Scientific Reports
  • Daniel Biftu Bekalo + 2 more

Maximum likelihood estimation (frequentist) and Bayesian estimation are two common parameter estimation methods. However, maximum likelihood estimation faces limitations, including the effect of outliers, computational complexity, and issues with ordinal categorical data, leading to biased estimates and inaccurate coverage probabilities. To address these limitations, this study employed a latent trait model with a Bayesian marginal likelihood of rank-based estimation for parameter estimation. The simulation results demonstrated favorable performance of the proposed method. Trace plots of all parameters showed good distribution and quick convergence, with the potential scale reduction factor not exceeding 1, indicating no convergence issues. Furthermore, the posterior predictive check showed the simulated data closely resembled the observed data, indicating the method effectively captures within-region variation through a latent trait parameter. Performance metrics like mean absolute error, root mean square error, and 95% confidence interval coverage probability revealed the estimates from the proposed Bayesian method surpassed those from classical approaches. In conclusion, a latent trait model with Bayesian marginal likelihood and rank-based estimation is considered a superior parameter estimation technique compared to classical methods, particularly for dealing with ordinal categorical data.

  • Research Article
  • Citations: 24
  • 10.2174/1874350101609010168
The Logic of Latent Variable Analysis as Validity Evidence in Psychological Measurement
  • Dec 30, 2016
  • The Open Psychology Journal
  • Purya Baghaei + 1 more

Background: Validity is the most important characteristic of tests, and there is general consensus among social science researchers that the trustworthiness of any substantive research depends on the validity of the instruments employed to gather the data. Objective: It is common practice among psychologists and educationalists to provide validity evidence for their instruments by fitting a latent trait model such as exploratory or confirmatory factor analysis or the Rasch model. However, there has been little discussion of the rationale behind model fitting and its use as validity evidence. The purpose of this paper is to answer the question: why does the fit of data to a latent trait model count as validity evidence for a test? Method: To answer this question, latent trait theory and the validity concept as delineated by Borsboom and his colleagues in a number of publications between 2003 and 2013 are reviewed. Results: Validating psychological tests by employing latent trait models rests on the assumption of conditional independence. If this assumption holds, it means that there is a ‘common cause’ underlying the co-variation among the test items, which hopefully is our intended construct. Conclusion: Providing validity evidence by fitting latent trait models is logistically easy and straightforward. However, it is of paramount importance that researchers appreciate what they do and imply about their measures when they demonstrate that their data fit a model. This helps them avoid unforeseen pitfalls and draw logical conclusions.

  • Research Article
  • Citations: 47
  • 10.1007/bf02294762
Continuous and Discrete Latent Structure Models for Item Response Data
  • Sep 1, 1990
  • Psychometrika
  • Edward H Haertel

Relations are examined between latent trait and latent class models for item response data. Conditions are given for the two-latent class and two-parameter normal ogive models to agree, and relations between their item parameters are presented. Generalizations are then made to continuous models with more than one latent trait and discrete models with more than two latent classes, and methods are presented for relating latent class models to factor models for dichotomized variables. Results are illustrated using data from the Law School Admission Test, previously analyzed by several authors.

  • Research Article
  • Citations: 113
  • 10.1111/j.2044-8317.1996.tb01091.x
A latent trait and a latent class model for mixed observed variables
  • Nov 1, 1996
  • British Journal of Mathematical and Statistical Psychology
  • Irini Moustaki

Latent variable models are widely used in social sciences in which interest is centred on entities such as attitudes, beliefs or abilities for which there exist no direct measuring instruments. Latent modelling tries to extract these entities, here described as latent (unobserved) variables, from measurements on related manifest (observed) variables. Methodology already exists for fitting a latent variable model to manifest data that is either categorical (latent trait and latent class analysis) or continuous (factor analysis and latent profile analysis). In this paper a latent trait and a latent class model are presented for analysing the relationships among a set of mixed manifest variables using one or more latent variables. The set of manifest variables contains metric (continuous or discrete) and binary items. For the latent trait model the latent variables are assumed to follow a multivariate standard normal distribution. Our method gives maximum likelihood estimates of the model parameters and standard errors of the estimates by analysing the data as they are without using any underlying variables. The mixed latent trait and latent class models are fitted using an EM algorithm. To illustrate the use of the mixed model three data sets have been analysed. Two of the data sets contain five memory questions, the first on Thatcher's resignation and the second on the Hillsborough football disaster; these five questions were included in British Market Research Bureau International August 1993 face‐to‐face omnibus survey. The third data set is from the 1991 British Social Attitudes Survey; the questions which have been analysed are from the environment section.

  • Book Chapter
  • Citations: 3
  • 10.1007/978-94-011-0800-3_14
Roles of Fisher Type Information in Latent Trait Models
  • Jan 1, 1994
  • F. Samejima

In the present paper, recent developments in latent trait models, or item response theory, will be reviewed, and the roles of Fisher-type information will be discussed by introducing various information functions and the ways they are used in latent trait models. Weakly parallel tests and their usefulness will be discussed. It will be shown that the test information function can be used to link modern mental test theory with classical mental test theory, through the so-called reliability coefficient of a test and the standard error of measurement. It will be demonstrated that the square root of the test information function is useful in transforming the ability scale to provide a new scale with a constant amount of test information, or an equally discriminating ability scale, and the transformation will make the mathematics simpler in certain nonparametric methods of estimating the operating characteristics, among others. Nonparametric estimation of the operating characteristics of discrete item responses will be introduced, including the Bivariate P.D.F. Approach and the Conditional P.D.F. Approach; in particular, the Simple Sum and Differential Weight Procedures of the Conditional P.D.F. Approach will be discussed. A certain constancy in the amount of information provided by a single dichotomous item will be observed.
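The scale transformation based on the square root of the test information function can be written out as follows; this is a standard IRT result, sketched here for illustration rather than quoted from the chapter:

```latex
% Define tau as the integral of the square root of the
% test information function I(theta):
\[
  \tau(\theta) = \int_{-\infty}^{\theta} \sqrt{I(t)}\, dt,
  \qquad \frac{d\tau}{d\theta} = \sqrt{I(\theta)} .
\]
% Information on the tau scale is then constant:
\[
  I^{*}(\tau) = I(\theta)\left(\frac{d\theta}{d\tau}\right)^{2}
  = \frac{I(\theta)}{I(\theta)} = 1 ,
\]
% i.e., the transformed ability scale carries the same amount of
% information everywhere, which is why it is called an equally
% discriminating scale.
```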
