Affective Engagement in Information Visualization

Abstract

Evaluating the “success” of an information visualization (InfoVis) whose main purpose is communication or presentation is challenging. Among metrics that go beyond traditional analysis- and performance-oriented approaches, one construct that has received attention in recent years is “user engagement”. In this research, I propose Affective Engagement (AE), the emotional dimension of a user's engagement, as a metric for InfoVis evaluation. I developed and evaluated a self-report measurement tool named AEVis that quantifies a user's level of AE while using an InfoVis. Following a systematic process of evidence-centered design, each activity during instrument development contributed specific evidence to support the validity of interpretations of scores from the instrument. Development proceeded in four stages. In stage 1, I examined the role and characteristics of AE in evaluating information visualization through an exploratory qualitative study, from which 11 indicators of AE were proposed: Fluidity, Enthusiasm, Curiosity, Discovery, Clarity, Storytelling, Creativity, Entertainment, Untroubling, Captivation, and Pleasing. In stage 2, I developed an item bank comprising candidate items for assessing a user's level of AE and assembled the first version of the survey instrument based on feedback from the target population and domain experts. In stage 3, I conducted three field tests to guide instrument revisions, applying three analytical methods: Item Analysis, Factor Analysis (FA), and Item Response Theory (IRT). In stage 4, a follow-up field test study investigated the external relations between the constructs in AEVis and other existing instruments.
The results of the four stages support the validity and reliability of the developed instrument. In stage 1, the AE characteristics elicited from the observations support the theoretical background of the test content. In stage 2, the feedback and review from target users and domain experts provide validity evidence for the test content of the instrument in the context of InfoVis. In stage 3, results from Exploratory and Confirmatory FA, together with IRT methods, reveal evidence for the internal structure of the instrument. In stage 4, the correlations between the total scores and sub-scores of AEVis and those of other existing instruments provide external-relation evidence for score interpretations. Using this instrument, visualization researchers and designers can evaluate non-performance-related aspects of their work efficiently and without specific domain knowledge, and the utility and implications of AE can be investigated further. In the future, this research may provide a foundation for expanding the theoretical basis of engagement in the fields of human-computer interaction and information visualization.
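The stage-3 analyses include Item Response Theory. As a minimal, hypothetical sketch of the kind of model IRT fits (the discrimination and difficulty values are illustrative choices, not estimated AEVis parameters), a two-parameter logistic (2PL) item response function can be evaluated as:

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Probability of endorsing an item under the 2PL model:
    P(theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item: discrimination a = 1.5, difficulty b = 0.0
theta = np.array([-2.0, 0.0, 2.0])   # latent engagement levels
p = irt_2pl(theta, a=1.5, b=0.0)
# At theta == b the endorsement probability is exactly 0.5
```

The endorsement probability rises monotonically with the latent trait, which is what lets IRT place both items and respondents on a common scale.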

Similar Papers
  • Research Article
  • Cited by 80
  • 10.1027/2698-1866/a000034
Confirmatory Factor Analyses in Psychological Test Adaptation and Development
  • Feb 1, 2023
  • Psychological Test Adaptation and Development
  • Kay Brauer + 2 more

The importance of providing structural validity evidence for test scores derived from psychometric instruments is highlighted by several institutions; for example, the American Psychological Association (2014) demands that evidence for the validity of an instrument's internal structure and its underlying measurement model be provided before it is applied in psychological assessment. Knowledge about the latent structure of test data addresses the major question of what construct(s) the psychological test under investigation measures (Ziegler, 2014, 2020). The study of structural validity is typically addressed with factor analyses when the test scores reflect continuous latent traits. As most submissions to Psychological Test Adaptation and Development (PTAD) deal with the adaptation and further development of existing measures, authors typically test a measurement model that is based on theoretical considerations and prior findings on original versions (or adaptations) of the test under investigation. Our literature review of PTAD's publications showed that more than 90% of the articles contain at least one confirmatory factor analysis (CFA). As editors and reviewers of PTAD, we appreciate that authors are rigorous in providing evidence on the structural validity of their tests' data. However, since PTAD's inception in 2019, one comment has frequently been communicated to authors during the review process: the request to adjust the analytic approach in CFA from maximum likelihood (ML) estimation toward the mean- and variance-adjusted weighted least squares (WLSMV; Muthén et al., 1997) estimator, to account for the ordinal nature of the data that psychological instruments typically generate at the item level.
In this editorial, we discuss the rationale behind choosing the WLSMV estimator when analyzing test adaptations and developments that are based on ordinal categorical data and concisely illustrate the problems associated with using the ML estimator (potentially in combination with robust tests of model fit) for such data.
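A small simulation makes the editorial's point concrete: coarsening continuous latent responses into ordered Likert categories attenuates Pearson correlations, which is why treating ordinal item codes as continuous in ML estimation can be misleading, and why WLSMV works from polychoric correlations that target the latent correlation instead. The true correlation, thresholds, and sample size below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.6
n = 100_000

# Latent continuous responses with true correlation rho
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Coarsen into 5 ordered Likert categories, as survey items do
cuts = [-1.5, -0.5, 0.5, 1.5]
x_ord = np.digitize(x, cuts)
y_ord = np.digitize(y, cuts)

# Pearson correlation of the ordinal codes is attenuated relative
# to the latent correlation that a polychoric estimate recovers
r_ord = np.corrcoef(x_ord, y_ord)[0, 1]
```

With five categories the attenuation is modest; with fewer categories (or asymmetric thresholds) the downward bias in covariance-based input to ML estimation grows.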

  • Research Article
  • Cited by 35
  • 10.1080/26408066.2021.1906813
Item Response Theory and Confirmatory Factor Analysis: Complementary Approaches for Scale Development
  • Jul 16, 2021
  • Journal of Evidence-Based Social Work
  • Gerald J Bean + 1 more

Purpose: This article demonstrates the advantages of using both confirmatory factor analysis (CFA) and item response theory (IRT) in the development and evaluation of social work scales with dichotomous or polytomous items. Social work researchers have commonly employed CFA and internal consistency reliability tests to validate scales for use in research- and evidence-based practice; IRT has been underused. We report findings from CFA and IRT analyses of a short social isolation scale for elementary school students to demonstrate that scale development and validation can benefit from complementary use of the two methods. Results provided evidence that scores from the scale are statistically sound, and each method contributed valuable information. Method: Data collected from 626 third- through fifth-grade students using the social isolation scale from the Elementary School Success Profile (ESSP) were examined with both CFA and IRT. Results: Complementary CFA and IRT results provide detailed information about item and scale performance of the social isolation scale. Discussion: Evidence-based practice requires scales with known properties; knowledge of those properties is more complete when researchers use both CFA and IRT. Conclusion: Using IRT modeling in combination with CFA can help social work researchers ensure the quality of scales they recommend to practitioners and researchers.
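Part of the complementarity the authors describe is IRT's item-level precision analysis, which CFA does not provide. A sketch of the 2PL item information function, using illustrative parameters (not ESSP estimates):

```python
import numpy as np

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P),
    where P is the 2PL endorsement probability."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

# Hypothetical item with discrimination a = 1.2 and difficulty b = 0.5
theta = np.linspace(-4, 4, 801)
info = item_information(theta, a=1.2, b=0.5)

# Information peaks at theta == b, i.e. where the item measures
# respondents most precisely; the peak value is a^2 / 4
peak = theta[np.argmax(info)]
```

Plotting information curves item by item shows where on the latent trait a scale is precise, which is exactly the property a CFA fit index cannot reveal.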

  • Research Article
  • 10.56855/ijmme.v3i3.1505
Psychometric Validation of the Mathematics Attitude Questionnaire (MAQ): A Confirmatory Factor Analysis Approach
  • Oct 2, 2025
  • International Journal of Mathematics and Mathematics Education
  • Kazaik Benjamin Danlami

Purpose – Mathematics underperformance remains a global challenge, especially in low-resource and conflict-affected contexts where students often face affective barriers such as anxiety, low enjoyment, and self-doubt. Although the Mathematics Attitude Questionnaire (MAQ) has been widely used internationally, its structural validity has rarely been examined in sub-Saharan Africa. This study aimed to validate the MAQ among Nigerian senior secondary school students. Methodology – A cross-sectional quantitative design under a post-positivist paradigm was employed. Using multistage sampling, 204 students (mean age = 16.8 years; 55% male) from three educational zones in Kaduna State completed a culturally adapted 31-item MAQ. Exploratory Factor Analysis (EFA) was first conducted to identify the underlying structure, followed by Confirmatory Factor Analysis (CFA) in Mplus to evaluate model fit. Reliability was assessed using coefficient omega, while validity was examined through Average Variance Extracted (AVE) and Heterotrait-Monotrait ratio (HTMT). Findings – EFA supported a two-factor structure: Enjoyment of Mathematics and Perception of Incompetence. CFA indicated suboptimal model fit (CFI = .831; TLI = .808; RMSEA = .141; SRMR = .100), though factor loadings (.49–.80) were significant. Reliability was strong (ω = .933; .872), AVE exceeded .58, and HTMT (.67) supported discriminant validity. The results affirm the relevance of the two constructs but highlight the need for theoretical refinement and cultural adaptation. Novelty – This is the first empirical validation of the MAQ using CFA in Nigeria, addressing a critical methodological gap in sub-Saharan mathematics education research. Significance – The validated MAQ provides educators, curriculum developers, and policymakers with a reliable diagnostic tool to assess and strengthen students’ affective engagement, guiding interventions to enhance enjoyment, self-efficacy, and mathematics performance.
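The reliability and convergent-validity statistics reported here (coefficient omega, AVE) are simple functions of standardized loadings under a congeneric one-factor model. A sketch, using hypothetical loadings in the .49-.80 range the study reports (not the MAQ's actual estimates):

```python
import numpy as np

def omega_and_ave(loadings):
    """Coefficient omega and Average Variance Extracted (AVE) from
    standardized loadings of a congeneric one-factor model, where
    each error variance is 1 - loading^2."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam**2
    omega = lam.sum()**2 / (lam.sum()**2 + errors.sum())
    ave = np.mean(lam**2)
    return omega, ave

# Hypothetical loadings spanning the reported .49-.80 range
omega, ave = omega_and_ave([0.49, 0.62, 0.71, 0.78, 0.80])
```

Note that omega can be high while AVE sits below the common .50 benchmark, since omega rewards many moderately loading items while AVE averages the squared loadings directly.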

  • Research Article
  • Cited by 4
  • 10.1080/00952990.2021.2012185
Cocaine use disorder criteria in a clinical sample: an analysis using item response theory, factor and network analysis
  • Jan 19, 2022
  • The American Journal of Drug and Alcohol Abuse
  • M Sanchez-Garcia + 4 more

Background: The conceptualization of substance use disorders (SUDs) was modified in successive editions of the DSM. The dimensionality and inclusion/exclusion of several criteria were studied using various analytic approaches. Objective: The study aimed to deepen our knowledge of the interrelationships between the diagnostic criteria for cocaine use disorder (CUD), applying three different analytical techniques: factor analysis, Item Response Theory (IRT) models, and network analysis. Methods: 425 (85.4% male) outpatients were evaluated for CUD using the Substance Dependence Severity Scale. Confirmatory Factor Analysis, a 2-parameter logistic IRT model, and network analysis were applied to analyze the relationships between the diagnostic criteria. Results: The results show that the “legal problems” criterion is not congruent with the CUD measure in all three analyses. Network analysis also suggests the usefulness of the “craving” criterion. The “quit/control” criterion presents the best centrality indices and expected influence, showing strong relationships with the criteria of “craving,” “tolerance,” “neglect roles” and “activities given up.” Conclusions: Network analysis appears to be a useful and complementary technique to factor analysis and IRT for understanding CUD. The “quit/control” criterion emerges as a central criterion for understanding CUD.
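Strength centrality and expected influence, the indices cited for the “quit/control” criterion, are simple sums over a node's edge weights in the estimated network. The sketch below uses invented weights constructed to mirror the reported pattern; it is not the study's estimated network:

```python
import numpy as np

# Hypothetical association weights among four CUD criteria
# (symmetric, zero diagonal); labels and values are illustrative only
labels = ["quit/control", "craving", "tolerance", "neglect roles"]
w = np.array([
    [0.0,  0.45, 0.30, 0.35],
    [0.45, 0.0,  0.20, 0.10],
    [0.30, 0.20, 0.0,  0.15],
    [0.35, 0.10, 0.15, 0.0 ],
])

# Strength centrality: sum of absolute edge weights per node.
# Expected influence is the same sum without absolute values,
# so the two coincide when all edges are positive, as here.
strength = np.abs(w).sum(axis=0)
expected_influence = w.sum(axis=0)
most_central = labels[int(np.argmax(strength))]
```

In real applications the weight matrix comes from a regularized partial-correlation estimate, and negative edges make strength and expected influence diverge.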

  • Research Article
  • 10.11144/604
Psychometric Assessment of the Psychological Entitlement Scale Using Classical Test Theory and Item Response Theory
  • Dec 14, 2013
  • Pensamiento Psicológico
  • Débora Jeannete Mola + 3 more

Affiliation: Mola, Debora Jeanette. Universidad Nacional de Cordoba, Facultad de Psicologia, Laboratorio de Psicologia Cognitiva, Argentina; Consejo Nacional de Investigaciones Cientificas y Tecnicas, Argentina.

  • Abstract
  • 10.1136/annrheumdis-2012-eular.2437
THU0472-HPR Calibration of a multidimensional item bank to measure fatigue in rheumatoid arthritis patients
  • Jun 1, 2013
  • Annals of the Rheumatic Diseases
  • S Nikolaus + 4 more


  • Research Article
  • Cited by 13
  • 10.1007/s11136-014-0643-6
The assessment of publication pressure in medical science; validity and reliability of a Publication Pressure Questionnaire (PPQ)
  • Feb 13, 2014
  • Quality of Life Research
  • J K Tijdink + 4 more

The aim was to determine the content validity, structural validity, construct validity, and reliability of an internet-based questionnaire designed for assessment of publication pressure experienced by medical scientists. The Publication Pressure Questionnaire (PPQ) was designed to assess psychological pressure to publish scientific papers. Content validity was evaluated by collecting independent comments from external experts (n=7) on the construct, comprehensiveness, and relevance of the PPQ. Structural validity was assessed by factor analysis and item response theory (IRT) using the generalized partial credit model. Pearson's correlation coefficients were calculated to assess potential correlations with the emotional exhaustion and depersonalization subscales of the Maslach Burnout Inventory (MBI). Single-test reliability (Guttman's lambda-2) was obtained from the IRT analysis. Content validity was satisfactory. Confirmatory factor analysis did not support the presence of three initially assumed separate domains of publication pressure (i.e., personally experienced publication pressure, publication pressure in general, and pressure on the position of the scientist). After exclusion of the third domain (six items), we performed exploratory factor analysis and IRT. The goodness-of-fit statistics for the IRT model assuming a single dimension were satisfactory when four items were removed, resulting in the 14 items of the final PPQ. Correlations with the emotional exhaustion and depersonalization scales of the MBI were 0.34 and 0.31, respectively, supporting construct validity. Single-test-administration reliability (lambda-2) was 0.69 on the test scores and 0.90 on the expected a posteriori scores. The PPQ appears to be a valid and reliable instrument to measure publication pressure among medical scientists.
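Guttman's lambda-2, the single-administration reliability coefficient reported for the PPQ, can be computed directly from the item covariance matrix. A sketch with invented response data (not PPQ responses):

```python
import numpy as np

def guttman_lambda2(scores):
    """Guttman's lambda-2 reliability from an (examinees x items)
    score matrix: (sum of off-diagonal covariances + sqrt(n/(n-1) *
    sum of squared off-diagonal covariances)) / total-score variance."""
    x = np.asarray(scores, dtype=float)
    n_items = x.shape[1]
    c = np.cov(x, rowvar=False)           # item covariance matrix
    total_var = c.sum()                   # variance of the sum score
    off = c - np.diag(np.diag(c))         # off-diagonal covariances
    term = np.sqrt(n_items / (n_items - 1) * (off**2).sum())
    return (off.sum() + term) / total_var

# Hypothetical 5-point responses from six respondents to four items
scores = [[4, 5, 4, 4],
          [2, 2, 3, 2],
          [5, 4, 5, 5],
          [3, 3, 2, 3],
          [1, 2, 1, 2],
          [4, 4, 4, 3]]
lam2 = guttman_lambda2(scores)
```

Lambda-2 is always at least as large as Cronbach's alpha, which is why it is often preferred as a lower bound on reliability from a single test administration.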

  • Book Chapter
  • Cited by 2
  • 10.1007/978-3-030-43469-4_14
Factor Score Estimation from the Perspective of Item Response Theory
  • Jan 1, 2020
  • David Thissen + 1 more

The factor scores of confirmatory factor analysis (CFA) models and the latent variables of item response theory (IRT) models are similar statistical entities, so one would expect that their estimation or characterization would follow parallel tracks in CFA and IRT. However, historically they have not. Different procedures have been used to derive factor score estimates and latent variable estimates in IRT, and different computational procedures have been the result. In this chapter we approach factor score estimation for some simple CFA models from the perspective of IRT, with the kinds of graphics that are used to explain IRT estimates of proficiency, and the computational procedures that are used in test theory. We compare traditional “regression” and “Bartlett” factor score estimates with alternative computational approaches to likelihood-based factor score estimates, referring to the expected a posteriori and maximum likelihood estimates of IRT latent variables to clarify relations among the scores. This provides insights into the ways in which the data are combined into factor score estimates. The results provide an alternative method to compute factor scores in some simple models in the presence of observations that may be missing at random for some variables.
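The “regression” (Thurstone) and Bartlett estimators the chapter compares have simple closed forms for a one-factor model. A sketch with hypothetical standardized loadings; note the characteristic shrinkage of the regression score relative to the conditionally unbiased Bartlett score:

```python
import numpy as np

# Hypothetical one-factor model: standardized loadings and uniquenesses
lam = np.array([[0.8], [0.7], [0.6]])   # loadings (3 items, 1 factor)
psi = np.diag(1.0 - lam.ravel()**2)     # unique (error) variances
sigma = lam @ lam.T + psi               # model-implied covariance

x = np.array([1.0, 0.5, -0.2])          # one respondent, centered scores

# Thurstone "regression" score: lam' Sigma^-1 x
f_reg = lam.T @ np.linalg.solve(sigma, x)

# Bartlett (weighted least squares) score:
# (lam' Psi^-1 lam)^-1 lam' Psi^-1 x
w = lam.T @ np.linalg.inv(psi)
f_bart = np.linalg.solve(w @ lam, w @ x)

# Regression scores shrink toward the factor mean, so here
# |f_reg| < |f_bart|; this parallels EAP vs ML estimates in IRT
```

For one factor the two estimates differ only by the shrinkage factor c/(1+c), where c is the total item information lam' Psi^-1 lam, mirroring the EAP/ML relationship the chapter draws on.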

  • Research Article
  • Cited by 16
  • 10.1108/ijchm-02-2021-0208
Reference effects and customer engagement in a museum visit
  • Nov 17, 2021
  • International Journal of Contemporary Hospitality Management
  • Noel Yee Man Siu + 2 more

Purpose: By extending the expectancy-disconfirmation theory and integrating the elaboration likelihood model, this study aims to explore the reference effects (i.e., disconfirmation and self-identity) and customer engagement that affect the relationship between customer experience and satisfaction with a museum visit. The study is designed to test a dual-mediator mechanism involving disconfirmation and self-identity. The moderating role of cognitive, affective, or behavioral engagement is also examined, with the overall purpose of advancing the understanding of customer experience in cultural consumption such as museum visits. Design/methodology/approach: A self-administered field survey in two stages was carried out on visitors to the Hong Kong Museum of Art. A total of 465 valid response sets were used for analysis. Hypotheses were tested using confirmatory factor analysis, a three-step mediation test, structural equation modeling, and moderation regressions. Findings: Disconfirmation and self-identity are found to be dual mediators in the experience–satisfaction relationship. Cognitive engagement reduces the effect of knowledge experience on disconfirmation and self-identity but increases that of the entertainment experience on disconfirmation and self-identity. Affective engagement amplifies the effect of knowledge experience on self-identity but mitigates the importance of entertainment evaluations. Practical implications: The findings highlight the importance of both perceived knowledge and entertainment experiences in visitors' evaluation of a cultural experience. Managers are advised to craft promotional messages with psychological appeal that connects visitors with museum services. Appropriate engagement tactics for museums can be developed to avoid overloading visitors with information. Originality/value: Previous studies treat disconfirmation as the dominant reference effect in the formation of customer satisfaction. This study shows that both disconfirmation and self-identity act as dual reference effects linking the customer experience to satisfaction in the museum context, pioneering an account of how the influence of experience on reference effects varies with how cognitively and affectively engaged customers are in that context.

  • Research Article
  • Cited by 92
  • 10.2196/jmir.6749
A Psychometric Analysis of the Italian Version of the eHealth Literacy Scale Using Item Response and Classical Test Theory Methods
  • Apr 11, 2017
  • Journal of Medical Internet Research
  • Nicola Diviani + 2 more

Background: The eHealth Literacy Scale (eHEALS) is a tool to assess consumers' comfort and skills in using information technologies for health. Although evidence exists for the reliability and construct validity of the scale, there is less agreement on its structural validity. Objective: The aim of this study was to validate the Italian version of the eHealth Literacy Scale (I-eHEALS) in a community sample, with a focus on its structural validity, by applying psychometric techniques that account for item difficulty. Methods: Two Web-based surveys were conducted among a total of 296 people living in the Italian-speaking region of Switzerland (Ticino). After examining the latent variables underlying the observed variables of the Italian scale via principal component analysis (PCA), fit indices for two alternative models were calculated using confirmatory factor analysis (CFA). The scale structure was examined via parametric and nonparametric item response theory (IRT) analyses accounting for differences between items in the proportion of answers indicating high ability. Convergent validity was assessed by correlations with theoretically related constructs. Results: CFA showed a suboptimal model fit for both models. IRT analyses confirmed that all items measure a single dimension, as intended. The reliability and construct validity of the final scale were also confirmed. The contrasting results of factor analysis (FA) and IRT analyses highlight the importance of considering differences in item difficulty when examining health literacy scales. Conclusions: The findings support the reliability and validity of the translated scale and its use for assessing Italian-speaking consumers' eHealth literacy.

  • Research Article
  • Cited by 21
  • 10.1177/0886260520987812
Development and Validation of the Economic Coercion Scale 36 (ECS-36) in Rural Bangladesh.
  • Jan 22, 2021
  • Journal of interpersonal violence
  • Kathryn M Yount + 3 more

Assessing progress toward Sustainable Development Goal (SDG) 5, to achieve gender equality and to empower women, requires monitoring trends in intimate partner violence (IPV). Current measures of IPV may miss women's experiences of economic coercion, or interference with the acquisition, use, and maintenance of financial resources. This sequential, mixed-methods study developed and validated a scale for economic coercion in married women in rural Bangladesh, where women's expanding economic opportunities may elevate the risks of economic coercion and other IPV. Forty items capturing lifetime and prior-year economic coercion were adapted from formative qualitative research and prior scales and administered to a probability sample of 930 married women aged 16-49 years. An economic coercion scale (ECS) was validated using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) with primary data from random-split samples (N1 = 310; N2 = 620). Item response theory (IRT) methods gauged the measurement precision of items and scales over the range of the economic-coercion latent trait. Multiple-group factor analysis assessed measurement invariance of the economic-coercion construct. Two-thirds (62.26%) of women reported any lifetime economic coercion. EFA suggested a 36-item, two-factor model capturing barriers to acquiring and to using or maintaining economic resources. CFA, multiple-group factor analysis, and multidimensional IRT methods confirmed that this model provided a reasonable fit to the data. IRT analysis showed that each dimension provided the most precision over the higher range of the economic coercion trait. The Economic Coercion Scale 36 (ECS-36) should be validated elsewhere and over time. It may be added to violence-specific surveys and evaluations of violence-prevention and economic-empowerment programs that have a primary interest in measuring economic coercion.
Short-form versions of the ECS may be developed for multipurpose surveys and program monitoring.
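The random-split design (EFA on N1 = 310, CFA on the held-out N2 = 620) amounts to a simple permutation split of the sample. A sketch; the seed is an arbitrary choice:

```python
import numpy as np

# Split the N = 930 respondents into an EFA sample (N1 = 310) and a
# disjoint CFA sample (N2 = 620), as in the study's design
rng = np.random.default_rng(42)
idx = rng.permutation(930)
efa_idx, cfa_idx = idx[:310], idx[310:]
# EFA explores structure on one subsample; CFA then tests that
# structure on data it has never seen, guarding against overfitting
```

Fitting the CFA on a sample disjoint from the one that suggested the factor structure is what makes the confirmatory fit statistics an honest test rather than a restatement of the EFA.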

  • Research Article
  • Cited by 47
  • 10.1023/a:1026175112538
The feasibility of applying item response theory to measures of migraine impact: a re-analysis of three clinical studies.
  • Dec 1, 2003
  • Quality of Life Research
  • Jakob B Bjorner + 2 more

Item response theory (IRT) is a powerful framework for analyzing multi-item scales and is central to the implementation of computerized adaptive testing. The aims were to explain the use of IRT to examine measurement properties and to apply IRT to a questionnaire for measuring migraine impact, the Migraine Specific Questionnaire (MSQ). Data from three clinical studies that employed the MSQ version 1 were analyzed by confirmatory factor analysis for categorical data and by IRT modeling. Confirmatory factor analyses showed very high correlations between the factors hypothesized by the original test construction. Further, high item loadings on one common factor suggest that migraine impact may be adequately assessed by a single score. IRT analyses of the MSQ were feasible and provided several suggestions for improving the items and, in particular, the response choices. Of 15 items, 13 showed adequate fit to the IRT model. In general, IRT scores were strongly associated with the scores proposed by the original test developers and with the total item sum score. Analysis of response consistency showed that more than 90% of the patients answered consistently according to a unidimensional IRT model. For the remaining patients, scores on the dimension of emotional function were less strongly related to the overall IRT scores, which mainly reflected role limitations. Such response patterns can be detected easily using response consistency indices. Analysis of test precision across score levels revealed that the MSQ was most precise at one standard deviation worse than the mean impact level for migraine patients not in treatment. Thus, gains in test precision can be achieved by developing items aimed at less severe levels of migraine impact. IRT proved useful for analyzing the MSQ. The approach warrants further testing in a more comprehensive item pool for headache impact that would enable computerized adaptive testing.

  • Research Article
  • Cited by 330
  • 10.1007/s11136-010-9654-0
Measuring social health in the patient-reported outcomes measurement information system (PROMIS): item bank development and testing.
  • Apr 25, 2010
  • Quality of Life Research
  • Elizabeth A Hahn + 9 more

To develop a social health measurement framework, to test items in diverse populations and to develop item response theory (IRT) item banks. A literature review guided framework development of Social Function and Social Relationships sub-domains. Items were revised based on patient feedback, and Social Function items were field-tested. Analyses included exploratory factor analysis (EFA), confirmatory factor analysis (CFA), two-parameter IRT modeling and evaluation of differential item functioning (DIF). The analytic sample included 956 general population respondents who answered 56 Ability to Participate and 56 Satisfaction with Participation items. EFA and CFA identified three Ability to Participate sub-domains. However, because of positive and negative wording, and content redundancy, many items did not fit the IRT model, so item banks do not yet exist. EFA, CFA and IRT identified two preliminary Satisfaction item banks. One item exhibited trivial age DIF. After extensive item preparation and review, EFA-, CFA- and IRT-guided item banks help provide increased measurement precision and flexibility. Two Satisfaction short forms are available for use in research and clinical practice. This initial validation study resulted in revised item pools that are currently undergoing testing in new clinical samples and populations.

  • Research Article
  • 10.52472/jci.v7i2.492
Konstruksi Alat Ukur Impulsivitas Pada Narapidana
  • Dec 31, 2024
  • Journal of Correctional Issues
  • Muh Yusrifal Usman + 2 more

The construction of an impulsivity measurement scale is one method that can be used to assess the impulsive tendencies of prisoners in correctional institutions (LAPAS), since impulsivity contributes to the formation of various behavioral and psychological problems in prisoners during their prison term. This study aims to produce a new impulsivity scale based on the characteristics of prisoners' impulsive behavior. The sampling technique used was quota sampling; the research involved 227 respondents in study 1 and 375 respondents in study 2. The data analysis methods used were item response theory (IRT), exploratory factor analysis (EFA), and confirmatory factor analysis (CFA). The development of the impulsivity scale produced 21 items across six factors of impulsive behavior. The construct reliability of the six factors ranged from 0.74 to 0.84, and AVE ranged from 0.45 to 0.59. The goodness-of-fit results also show that the measurement model is acceptable.
Keywords: confirmatory factor analysis, exploratory factor analysis, impulsivity, item response theory, reliability

  • Single Book
  • Cited by 99
  • 10.4324/9781315869797
Latent Variable Modeling with R
  • Jun 26, 2015
  • W Holmes Finch + 1 more

This book demonstrates how to conduct latent variable modeling (LVM) in R by highlighting the features of each model, their specialized uses, examples, sample code and output, and an interpretation of the results. Each chapter features a detailed example including the analysis of the data using R, the relevant theory, the assumptions underlying the model, and other statistical details to help readers better understand the models and interpret the results. Every R command necessary for conducting the analyses is described along with the resulting output, which provides readers with a template to follow when they apply the methods to their own data. The basic information pertinent to each model, the newest developments in these areas, and the relevant R code to use them are reviewed. Each chapter also features an introduction, summary, and suggested readings. A glossary of the text's boldfaced key terms and key R commands serves as a helpful resource. The book is accompanied by a website with exercises, an answer key, and the in-text example data sets. Latent Variable Modeling with R:
  • Provides examples that use messy data, giving a more realistic picture of the situations readers will encounter with their own data.
  • Reviews a wide range of LVMs, including factor analysis, structural equation modeling, item response theory, and mixture models, and advanced topics such as fitting nonlinear structural equation models, nonparametric item response theory models, and mixture regression models.
  • Demonstrates how data simulation can help researchers better understand statistical methods and assist in selecting the necessary sample size prior to collecting data.
  • www.routledge.com/9780415832458 provides exercises that apply the models, along with annotated R output answer keys and the data corresponding to the in-text examples, so readers can replicate the results and check their work.
The book opens with basic instructions in how to use R to read data, download functions, and conduct basic analyses. From there, each chapter is dedicated to a different latent variable model, including exploratory and confirmatory factor analysis (CFA), structural equation modeling (SEM), multiple-groups CFA/SEM, least squares estimation, growth curve models, mixture models, item response theory (both dichotomous and polytomous items), differential item functioning (DIF), and correspondence analysis. The book concludes with a discussion of how data simulation can be used to better understand the workings of a statistical method and assist researchers in deciding on the necessary sample size prior to collecting data. A mixture of independently developed R code along with available libraries for simulating latent models in R is provided so readers can use these simulations to analyze data using the methods introduced in the previous chapters. Intended for use in graduate or advanced undergraduate courses in latent variable modeling, factor analysis, structural equation modeling, item response theory, measurement, or multivariate statistics taught in psychology, education, human development, and social and health sciences, the book will also appeal to researchers in these fields for its practical approach. It provides sufficient conceptual background to serve as a standalone text. Familiarity with basic statistical concepts is assumed, but basic knowledge of R is not.
