
Related Topics

  • Socially Desirable Responding
  • Social Desirability
  • Desirable Responding

Articles published on Careless Responding

145 Search results
  • Research Article
  • 10.1016/j.jbtep.2025.102057
On the multi-causal nature of jumping to conclusions in psychosis.
  • Dec 1, 2025
  • Journal of behavior therapy and experimental psychiatry
  • Steffen Moritz + 6 more


  • Research Article
  • 10.33423/jmdc.v19i3.7957
The Critical Role of Response Time in Detecting Careless Respondents: A Case Study on Data Contamination
  • Nov 15, 2025
  • Journal of Marketing Development and Competitiveness
  • Xiao Zhang + 2 more

This study examines the critical role of response time in identifying careless respondents (CRs) and their impact on the integrity of online survey data. Using a survey design with time filters and bogus items, this study demonstrates that data from inattentive participants introduces significant contamination to datasets. The results indicate that data from flagged CRs distort construct structures, inflate inter-construct correlations, and compromise construct validity. This case study highlights the severe effects of data contamination and emphasizes the importance of measuring response time as a key detection method, providing practical recommendations for its application to ensure data quality.
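
The screens this abstract describes (a per-item response-time filter plus bogus items) can be sketched in a few lines. This is an illustrative toy, not the authors' code: the 2-second floor, the expected bogus answer, and the function name are all assumptions.

```python
# Illustrative sketch only: combine a response-time filter with a bogus-item
# check, the two screens described in the abstract. The 2-second floor and
# the expected bogus answer are assumed values, not the study's parameters.

def flag_careless(total_seconds, n_items, bogus_answer,
                  min_seconds_per_item=2.0, expected_bogus=3):
    """Flag a respondent as a likely careless respondent (CR)."""
    too_fast = (total_seconds / n_items) < min_seconds_per_item
    failed_bogus = bogus_answer != expected_bogus
    return too_fast or failed_bogus

print(flag_careless(12, 10, bogus_answer=3))   # 1.2 s/item -> True (flagged)
print(flag_careless(60, 10, bogus_answer=3))   # 6 s/item, bogus OK -> False
```

Flagged cases would then be excluded or analyzed separately to gauge their effect on construct validity, as the study does.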

  • Research Article
  • 10.1891/jfcp-2024-0103
Measuring Consumers’ Financial Well-Being: Uncovering the Causes and Consequences of Careless and Insufficient Effort in Self-Report Responses
  • Nov 12, 2025
  • Journal of Financial Counseling and Planning
  • Okan Bulut + 3 more

This study investigates the prevalence and causes of careless and insufficient effort responding (C/IER) in a self-report instrument assessing financial well-being. Using an explanatory mixture item response modeling approach, we analyze data from 6,394 respondents who responded to the Consumer Financial Protection Bureau’s financial well-being and financial skills scales. We identify different patterns of C/IER and examine factors that contribute to its occurrence in the two scales. The results indicate that 14% of the responses were impacted by C/IER, with question characteristics (e.g., negatively worded or frequency-based questions) and respondent demographics (e.g., age, gender, and education level) exhibiting significant relationships with attentiveness. Respondents with higher financial knowledge were more attentive, while younger individuals and those experiencing financial shocks were more prone to careless responding. These findings underscore the importance of carefully designing self-report instruments to minimize response biases and ensure data quality. By addressing different question designs and respondent-level characteristics, future instruments can yield more reliable and valid insights into individuals’ financial well-being.

  • Research Article
  • 10.1177/00332941251390460
Careless Responding May Reduce Data Quality in Self-Report Questionnaire Research: Evidence From Adult Temperament Samples From Two Cultures.
  • Oct 16, 2025
  • Psychological reports
  • Tomas Lazdauskas + 1 more

With the increasing publication of self-report online studies, concerns are growing about the quality of the data collected through these methods. This study focused on response bias, a major threat to data quality, by analyzing data from a real-world study on adult temperament conducted in two different countries. The sample included 1,497 participants aged 18-80 years from the United States (n = 598) and Lithuania (n = 899). The primary objectives were to determine the prevalence of response bias and to evaluate its impact on psychometric outcomes. Indicators of biased responding included patterns suggestive of potentially careless responding (e.g., invariant and random response patterns) and those flagged by internal validity checks or clinical controls (e.g., social desirability and ratings-perception discrepancies). Results indicated that the inclusion of data reflecting potentially careless responding reduced internal consistency and distorted factor structure, whereas its exclusion improved these psychometric indicators. In contrast, with regard to clinical controls, removing flagged data resulted in a decline in psychometric quality. Additionally, higher rates of careless responding were observed in the sample subjected to forced answering. These findings highlight the importance of mitigating response bias in online self-report research and raise broader questions about the integrity of data in existing survey-based datasets. By jointly evaluating careless responding and clinical threats in real-world, cross-national samples, this study extends prior work and demonstrates the applied value of post-hoc screening for improving psychometric quality.

  • Research Article
  • 10.3758/s13428-025-02797-x
Bayesian factor mixture modeling with response time for detecting careless respondents.
  • Sep 15, 2025
  • Behavior research methods
  • Lijin Zhang + 2 more

Careless respondents inject noise into data, which can distort research findings and compromise model fit. To address this, factor mixture modeling (FMM) has been widely used to identify careless respondents. Traditionally, researchers have relied on reverse-worded questions in FMM to facilitate the detection of careless responding. With the rise of online data collection platforms, response time has appeal as a means for understanding careless behavior. We introduce a Bayesian FMM that leverages this rich source of information to identify careless respondents. By jointly modeling responses and response time, this approach effectively identifies careless individuals rushing through the questionnaire without providing responses that reflect the to-be-measured traits. Our simulation studies demonstrate that this model accurately estimates parameters and classifies respondents as either attentive or careless, while maintaining error rates within acceptable limits. Furthermore, integrating response time enhances model convergence and the precision of classification and estimation. Using mediation models as an example, we illustrate how social science researchers can use this FMM approach to address careless responding in substantive research. An empirical study further tests the applicability of the proposed model in real-world scenarios, comparing its conclusions with traditional methods. To support its use, we provide an R function to streamline implementation.

  • Research Article
  • 10.54103/2282-0930/29202
A Framework to Improve Data Quality and Manage Dropout in Web-Based Medical Surveys: Insights from an Ai Awareness Study among Italian Physicians
  • Sep 8, 2025
  • Epidemiology, Biostatistics, and Public Health
  • Vincenza Cofini + 9 more

Background: Ensuring data quality in self-reported online surveys remains a critical challenge in digital health research, particularly when targeting healthcare professionals [1,2]. Self-reported data are susceptible to multiple biases, including careless responding, social desirability bias, and dropout-related attrition, all of which may compromise the validity of findings [3,4]. In web-based surveys where researcher oversight is limited, structured quality control measures are essential to detect low-quality responses, minimise sampling bias, and enhance data reliability [5]. Previous studies have demonstrated that inadequate quality checks can lead to inflated error rates, reduced statistical power, and misleading conclusions [6].

Objective: This study presents a comprehensive methodological framework for optimising data quality in web-based medical surveys, applied to a national study on AI awareness among Italian physicians. Integrating pre-survey validation, real-time dashboards, response-time filtering, and post-hoc careless responding detection addresses key challenges in digital research, while providing a replicable model for future studies.

Methods: We conducted a national web-based survey using a validated instrument (doi:10.1101/2025.04.11.25325592) via the LimeSurvey platform. The survey incorporated two main sections: (1) a core module assessing knowledge, attitudes, and practices regarding AI in medicine; (2) clinical scenarios evaluating diagnostic agreement with AI-generated proposals. Multiple quality control strategies were implemented throughout the survey lifecycle. In terms of survey design and logic, the questionnaire employed an adaptive flow structure, whereby respondents were routed through clinical scenarios relevant to their medical speciality. To reduce the incidence of partial completions and missing data, key questions were marked as mandatory, and completion status was actively tracked. In the monitoring and recruitment phase, a real-time dashboard monitored participant distribution (gender/geographical areas/speciality); referral links were rotated to minimise snowball bias [7]. Time-based data quality checks excluded outliers (completion time <1st or >99th percentile) [8]. Completion time for the first section was analysed for all completers to assess correlations between response speed and quality indicators. Dropout patterns were analysed using Kaplan-Meier survival analysis and logistic regression to identify systematic attrition predictors. Data quality assessments were performed on the outlier-cleaned dataset (n=587). Response quality was assessed using complementary careless responding indicators applied specifically to opinion scale items (Likert 1-5). Two detection methods were used: low response variance analysis, identifying respondents with insufficient variability (SD < 0.5), and excessive same-response detection, flagging participants using identical responses for >75% of items. Internal consistency analysis (Cronbach's α) evaluated scale reliability across different quality levels.

Results: A total of 736 accesses were recorded on the survey platform. As an initial inclusion criterion, only participants who indicated current registration with the Italian Medical Council were considered eligible: 79 (10.7%) were excluded, yielding a sample of 657 eligible participants (89.3%). Among eligible respondents, 597 completed the first section, yielding a dropout rate of 9.1% (n=60). A Kaplan-Meier survival analysis using total survey time revealed that most dropouts occurred early, with critical points at 45% after demographic items, 51% after personal AI knowledge items, 71% after opinion items, and 100% before clinical scenarios. Logistic regression showed no significant predictors of completion (LR χ²(6)=3.46, p=0.7497; pseudo-R²=0.014; AUC=0.60, 95%CI: 0.50–0.70). Completion time showed no correlation with response quality (Spearman's ρ = -0.019, p = 0.645). Following outlier removal, data quality assessment among the 587 who completed the first section revealed two complementary patterns of careless responding: 8.52% (n=50) exhibited low response variance, while 32 (5.45%) demonstrated excessive same-response patterns. Cross-classification analysis showed 23 participants (3.92%) flagged by both indicators, with 71.88% of excessive same responders also showing low variance. Overall, 50 participants (10.05%, 95% CI: 7.9%-12.8%) exhibited careless responding detectable by at least one indicator. Internal consistency analysis showed robust scale reliability (Cronbach's α = 0.754) that remained stable across quality levels.

Conclusion: The integration of real-time monitoring, adaptive design, time-based validation, and systematic careless responding detection provides a robust methodological framework for web-based medical surveys, particularly for complex topics like AI adoption. Comprehensive data quality assessment revealed a 10.05% careless responding rate among completers, which aligns with the literature. The absence of correlation between completion time and response quality suggests that careless responding could reflect attentional rather than temporal factors. Our findings suggest that both phenomena likely reflect situational or contextual factors rather than systematic participant characteristics or survey design flaws. This supports the validity and generalizability of the final dataset while providing a replicable quality control framework for future web-based medical research.
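
The two post-hoc indicators named in the abstract, low response variance (SD < 0.5) and excessive identical responses (the same option on >75% of items), reduce to a short sketch. The cutoffs come from the abstract; everything else here (function name, example data) is an assumption, not the study's code.

```python
# Illustrative sketch of the abstract's two careless-responding indicators:
# low response variance (SD < 0.5) and excessive identical responses
# (the same option on >75% of Likert items). Cutoffs from the abstract.
from collections import Counter
from statistics import pstdev

def careless_flags(likert, sd_cutoff=0.5, same_frac_cutoff=0.75):
    low_variance = pstdev(likert) < sd_cutoff           # insufficient variability
    modal_count = Counter(likert).most_common(1)[0][1]  # count of most-used option
    excessive_same = modal_count / len(likert) > same_frac_cutoff
    return low_variance, excessive_same

print(careless_flags([3, 3, 3, 3, 3, 3, 3, 4]))  # straight-liner: (True, True)
print(careless_flags([1, 2, 3, 4, 5, 1, 2, 3]))  # varied responder: (False, False)
```

As the study's cross-classification shows, the two flags overlap but are not redundant, which is why both were applied.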

  • Research Article
  • 10.1080/08959285.2025.2552213
Correlates of Careless Responding: Trait and State Antecedents and Criteria
  • Sep 6, 2025
  • Human Performance
  • Nidhal Mazza + 1 more

ABSTRACT Careless responding may constitute a threat to research validity. To understand who careless responders are, previous research has primarily focused on the relationship between careless responding and personality, namely the Big Five traits. In addition to replicating this relationship, the present work aimed to contribute to the limited literature on task performance and counterproductive behaviors as criteria of careless responding and expand on its nomological network by examining cognitive ability and test-taking states (i.e., motivation and fatigue) as antecedents. Using multiple careless responding indices and undergraduate student samples, both Studies 1 (N = 150) and 2 (N = 150) replicated the relationship between agreeableness, conscientiousness, and openness and careless responding. Cognitive ability and test-taking motivation also emerged as significant, albeit inconsistent, predictors. Pertaining to the criteria, support was found for the negative relationship between careless responding and task performance (in Study 2) and for the positive association with counterproductive behaviors (in both studies). The results provide additional evidence consonant with the view that careless responding is nonrandom and encourage further explorations of correlates beyond personality traits. They also highlight concerns about the potential bias introduced by the common removal of careless responders from study samples.

  • Research Article
  • 1 citation
  • 10.1080/24732850.2025.2551646
The Cross-Cultural Ability of the Inventory of Problems-29-M (IOP-29-M) to Detect Feigned Symptom Presentations: A Replication of Akca et al. (2023) on a Romanian-Speaking Sample
  • Sep 4, 2025
  • Journal of Forensic Psychology Research and Practice
  • Iulia Crișan + 4 more

ABSTRACT This study replicated research on the ability of the Inventory of Problems-29 (IOP-29) to detect honest, feigning, and random responding, and investigated the accuracy of its memory module (IOP-M) in a Romanian-speaking sample. 127 participants, randomized into three groups, were assessed online with the IOP-29-M, in each responding condition. The standard cutoff of ≥.50 on the IOP-29 False Disorders Probability Scale accurately discriminated honest from feigned protocols. The Random Responding Scale showed promise in indicating careless responding. Combining the IOP-29 FDS and IOP-M improved classification accuracies. Results support the IOP-29-M’s cross-cultural validity and its utility in forensic and clinical settings.

  • Research Article
  • 10.1017/psy.2025.10041
A Beta Mixture Model for Careless Respondent Detection in Visual Analogue Scale Data
  • Sep 1, 2025
  • Psychometrika
  • Lijin Zhang + 3 more

Visual Analogue scales (VASs) are increasingly popular in psychological, social, and medical research. However, VASs can also be more demanding for respondents, potentially leading to quicker disengagement and a higher risk of careless responding. Existing mixture modeling approaches for careless response detection have so far only been available for Likert-type and unbounded continuous data but have not been tailored to VAS data. This study introduces and evaluates a model-based approach specifically designed to detect and account for careless respondents in VAS data. We integrate existing measurement models for VASs with mixture item response theory models for identifying and modeling careless responding. Simulation results show that the proposed model effectively detects careless responding and recovers key parameters. We illustrate the model’s potential for identifying and accounting for careless responding using real data from both VASs and Likert scales. First, we show how the model can be used to compare careless responding across different scale types, revealing a higher proportion of careless respondents in VAS compared to Likert scale data. Second, we demonstrate that item parameters from the proposed model exhibit improved psychometric properties compared to those from a model that ignores careless responding. These findings underscore the model’s potential to enhance data quality by identifying and addressing careless responding.

  • Research Article
  • 10.1080/15305058.2025.2554212
Identifying careless responding in personality assessment using item prediction approach
  • Aug 29, 2025
  • International Journal of Testing
  • Seyul Kwak + 2 more

Careless and inattentive responding pose a significant challenge to the validity of psychological tests. However, existing indicators often fail to fully utilize the available data and lack sensitivity to partial degradation. This study introduces a machine learning-based item prediction approach to assess the extent to which responses deviate from expected patterns by predicting each item’s response based on the responses to the remaining items. Data were collected from 941 participants who completed a 195-item scale measuring clinical symptoms and personality traits (Millon Clinical Multiaxial Inventory-IV). The dataset was divided into a training set, a test set, and a partially randomized set, and the effectiveness was examined using indicators such as the item prediction approach, Mahalanobis distance, and person-fit statistics. The results showed that the item prediction approach showed better performance in differentiating partially randomized datasets compared to other methods. Moreover, the item prediction approach showed enhanced performance within the ranges where the proportion of randomized data was relatively small, or the training dataset was sufficiently large. This study suggests that utilizing extensive item information leveraged through machine learning can effectively detect careless response patterns.
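
The item-prediction idea, predicting each item's response from the remaining items and scoring how far observed answers fall from those predictions, can be sketched with a deliberately simple stand-in model. The study trained machine-learning models on a 195-item inventory; this toy uses a 1-nearest-neighbour lookup on invented data, so the function names and data are assumptions.

```python
# Illustrative sketch of the item-prediction approach: predict item j for a
# respondent from their remaining items, using a reference ("training")
# sample. High mean prediction error suggests careless responding.
# The 1-nearest-neighbour "model" and the data are toy assumptions.

def predict_item(train, row, j):
    """Predict item j of `row` from the training respondent whose
    remaining items are closest to `row`'s remaining items."""
    def dist(candidate):
        return sum((candidate[k] - row[k]) ** 2
                   for k in range(len(row)) if k != j)
    return min(train, key=dist)[j]

def prediction_error(train, row):
    """Mean absolute deviation of observed responses from predictions."""
    n = len(row)
    return sum(abs(row[j] - predict_item(train, row, j)) for j in range(n)) / n

train = [[1, 1, 2, 1], [5, 5, 4, 5], [2, 1, 1, 2], [4, 5, 5, 4]]
consistent = [5, 5, 5, 4]   # resembles training patterns -> low error
random_like = [1, 5, 1, 5]  # deviates from expected patterns -> high error
print(prediction_error(train, consistent) < prediction_error(train, random_like))  # True
```

A partially random respondent would land between these extremes, which is the sensitivity to partial degradation the abstract emphasizes.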

  • Research Article
  • 10.5334/ijic.nacic24142
So You Want to Conduct an Online Survey? Strategies for Identifying and Eliminating Fraudulent Responses
  • Aug 19, 2025
  • International Journal of Integrated Care
  • Isabelle Caven

Background: We conducted an online survey of Canadian healthcare providers through REDCap to assess their level of recognition and support of caregivers under the age of 25. The survey was distributed through various channels (listservs, newsletters). Distribution through social media resulted in an immediate uptick of responses in fast succession, despite the use of institutional REDCap-administered CAPTCHA, raising the alarm about their validity. This project outlines an evidence-informed approach to filtering out fraudulent survey responses, aimed at researchers and anyone interested in conducting surveys where recruitment may involve social media. The presentation will provide practical strategies and tools to help researchers identify and eliminate fraudulent responses to improve the reliability of their findings.

Approach: We conducted a literature review to determine the best course of action after detecting potentially fraudulent responses. Our filter strategy involved several steps, including: 1) identifying postal codes that were incorrect for the listed province/territory or an improbable age for starting clinical practice; 2) clusters of survey responses completed within two minutes of each other; 3) a 'speed bump' question to detect inattentive or careless respondents; 4) inconsistent responses to closed-ended questions (i.e., respondents identified that they do not encounter young caregivers, but do support them in clinical practice); and 5) verification through manual review of open-ended responses for those with AI-like structure such as "noun: description," or similar/duplicate answers.

Results: The current number of completed survey responses is 656, with our algorithm identifying more than 283 (77.5%) of responses as fraudulent. A balanced approach between automated and manual processes was needed to deal with concerns of artificial intelligence-generated responses.
As a result, we significantly narrowed down the pool of survey responses; the remaining data were reliable and valid for analysis. This algorithm builds on the work of several recent articles from research teams similarly navigating a rapid rise in fraudulent responses. Our work identifies that survey respondents may be using AI to complete open-ended questions, raising alarms for those considering online survey tools.

Implication: The key learning from this project is the importance of an evolving strategy to filter out bots. A multi-faceted approach, combining automated filters and manual reviews, is essential for identifying and eliminating potentially fraudulent responses. Online survey research is an important avenue for reaching a wide audience of respondents; however, researchers and leaders interested in recruiting should consider incorporating these strategies into their questionnaires. Moving forward, we plan to publish the survey data, providing valuable insights into the recognition and support of young caregivers in Canada. Knowledge sharing and continuing collaboration with researchers across Canada will support the ongoing refinement of a bot detection strategy to maintain the integrity of research data. Researchers may also consider collaborating with their academic institutions to highlight necessary steps to prevent fraudsters from completing surveys hosted on institutional survey platforms (i.e., REDCap). Survey platforms hosted within institutions may be able to further verify respondents' validity, such as by implementing complex CAPTCHA features or tracking anonymized IP address duplicates.

  • Research Article
  • 10.1007/s10869-025-10055-2
The Effects of Careless Responding Warnings on the Construct Validity of Self-Report Measures
  • Aug 13, 2025
  • Journal of Business and Psychology
  • Mark A Roebke + 2 more


  • Research Article
  • 10.2196/70451
The Impact of Individual Factors on Careless Responding Across Different Mental Disorder Screenings: Cross-Sectional Study
  • Jul 31, 2025
  • Journal of Medical Internet Research
  • Huawei Kuang + 5 more

Background: Online questionnaires are widely used for large-scale screening. However, careless responding (CR) from participants can compromise the reliability of screening outcomes. Prior studies have focused on the effects of individual and environmental factors on CR, but the effect of questionnaire type remains underexplored.

Objective: This study investigates the individual factors influencing CR in online mental health screening and assesses how the effect of these factors varies across different psychological questionnaires.

Methods: This study analyzed data from 24,367 participants across 4 questionnaires (PHQ-9 [Patient Health Questionnaire-9], PSS [Perceived Stress Scale], ISI [Insomnia Severity Index], and GAD-7 [Generalized Anxiety Disorder-7 Scale]). CR was defined as the proportion of items completed in less than 2 seconds per item. We used a multiple linear regression model to examine the effect of individual factors (sex, age, education, smoking, and drinking) on CR across the 4 questionnaires. In addition, response times were visualized to identify patterns between careless and careful responders.

Results: Females demonstrated lower levels of CR than males when completing the PHQ-9 (β=−.172, 95% CI −0.104 to −0.089; P<.001), PSS (β=−.234, 95% CI −0.162 to −0.14; P<.001), ISI (β=−.207, 95% CI −0.13 to −0.114; P<.001), and GAD-7 (β=−.177, 95% CI −0.108 to −0.093; P<.001). Older participants demonstrated lower levels of CR on the PHQ-9 (β=−.036, 95% CI −0.007 to −0.003; P<.001), ISI (β=−.036, 95% CI −0.007 to −0.003; P<.001), and GAD-7 (β=−.053, 95% CI −0.009 to −0.005; P<.001), but age was unrelated to CR on the PSS. Interestingly, compared with participants with an associate-level education, those with higher education (bachelor's, master's, or doctoral degree) demonstrated higher levels of CR, especially those with a master's degree (PHQ-9: β=.098, 95% CI 0.136 to 0.188; P<.001 and GAD-7: β=.091, 95% CI 0.125 to 0.178; P<.001). Smokers exhibited varied patterns, with current smokers demonstrating lower levels of CR on the PHQ-9 (β=−.022, 95% CI −0.064 to −0.016; P=.001) and GAD-7 (β=−.014, 95% CI −0.051 to −0.002; P=.03), whereas occasional smokers demonstrated higher levels of CR on the PSS (β=.019, 95% CI 0.010 to 0.050; P=.003) than nonsmokers. Drinkers demonstrated lower levels of CR than nondrinkers, with the strongest effect among occasional drinkers on the PHQ-9 (β=−.163, 95% CI −0.103 to −0.087; P<.001). Analysis of response times revealed that participants tended to spend less time on the PHQ-9 and GAD-7 surveys, and CR on the PSS and ISI surveys was characterized by skipping questions.

Conclusions: The effect of individual factors on CR varies across questionnaire types. These findings offer valuable insights for questionnaire designers and administrators, highlighting the need for targeted intervention.
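
The study's CR measure, the proportion of items a respondent completed in under 2 seconds, reduces to a one-liner. This sketch is an assumption-laden illustration (the timings are invented), not the authors' code.

```python
# Illustrative sketch of the paper's careless-responding (CR) measure:
# the fraction of items completed in less than 2 seconds. Timings invented.

def cr_proportion(item_times, threshold=2.0):
    return sum(t < threshold for t in item_times) / len(item_times)

# A PHQ-9-length record: 3 of 9 items answered in under 2 s.
print(round(cr_proportion([1.1, 3.4, 0.8, 2.5, 4.0, 1.9, 2.2, 3.1, 2.8]), 3))  # 0.333
```

This per-respondent proportion is then the dependent variable in the study's regressions on sex, age, education, smoking, and drinking.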

  • Research Article
  • 10.1080/00223891.2025.2531187
Improving the Measurement of the Big Five via Alternative Formats for the BFI-2
  • Jul 17, 2025
  • Journal of Personality Assessment
  • Xijuan Zhang + 3 more

The Big Five Inventory-2 (BFI-2; Soto & John, 2017a) was developed to improve on the limitations of the original BFI by balancing the number of positively and negatively worded items and establishing a hierarchical structure for the Big Five traits. However, as the BFI-2 employs a Likert format with agree–disagree options, it suffers from common problems of the Likert format, including acquiescence bias and method effects due to the negatively worded items. In this research, we converted the BFI-2 into three alternative formats: Expanded, Item-Specific-Full, and Item-Specific-Light. These formats have tailored response options for each item and avoid the use of negatively worded items, thereby addressing the issues associated with the Likert format. Across two studies (N = 1,335 and N = 1,451), we randomly assigned Canadian undergraduate students to complete the BFI-2 in the original Likert format or one of the three alternative formats. Results showed that the Likert and alternative formats exhibit similar predictive validity. However, the alternative formats—particularly the Expanded format—showed better psychometric properties, including enhanced factor structure, increased reliability, and possibly reduced careless responding. We recommend that researchers consider adopting the BFI-2 in these alternative formats and adapting other Likert scales to these alternative formats.

  • Research Article
  • 10.1177/25152459251343043
Does Truth Pay? Investigating the Effectiveness of the Bayesian Truth Serum With an Interim Payment: A Registered Report
  • Jul 1, 2025
  • Advances in Methods and Practices in Psychological Science
  • Claire M Neville + 1 more

Self-report data are vital in psychological research, but biases such as careless responding and socially desirable responding can compromise their validity. Although various methods are employed to mitigate these biases, they have limitations. The Bayesian truth serum (BTS) offers a survey scoring method to incentivize truthfulness by leveraging correlations between personal and collective opinions and rewarding “surprisingly common” responses. In this study, we evaluated the effectiveness of the BTS in mitigating socially desirable responding to sensitive questions and tested whether an interim payment could enhance its efficacy by increasing trust. In a between-subjects experimental survey, 877 participants were randomly assigned to one of three conditions: BTS, BTS with interim payment, and regular incentive (RI). Contrary to the hypotheses, participants in the BTS conditions displayed lower agreement with socially undesirable statements compared with the RI condition. The interim payment did not significantly enhance the BTS’s effectiveness. Instead, response patterns diverged from the mechanism’s intended effects, raising concerns about its robustness. As the second registered report to challenge its efficacy, this study’s results cast serious doubt on the BTS as a reliable tool for mitigating socially desirable responding and improving the validity of self-report data in psychological research.

  • Research Article
  • 10.1177/10944281251334778
Using Markov Chains to Detect Careless Responding in Survey Research
  • Jun 24, 2025
  • Organizational Research Methods
  • Torsten Biemann + 3 more

Careless responses by survey participants threaten data quality and lead to misleading substantive conclusions that result in theory and practice derailments. Prior research developed valuable precautionary and post-hoc approaches to detect certain types of careless responding. However, existing approaches fail to detect certain repeated response patterns, such as diagonal-lining and alternating responses. Moreover, some existing approaches risk falsely flagging careful response patterns. To address these challenges, we developed a methodological advancement based on first-order Markov chains called Lazy Respondents (Laz.R) that relies on predicting careless responses based on prior responses. We analyzed two large datasets and conducted an experimental study to compare careless responding indices to Laz.R and provide evidence that its use improves validity. To facilitate the use of Laz.R, we describe a procedure for establishing sample-specific cutoff values for careless respondents using the “kneedle algorithm” and make an R Shiny application available to produce all calculations. We expect that using Laz.R in combination with other approaches will help mitigate the threat of careless responses and improve the accuracy of substantive conclusions in future research.
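
Laz.R itself ships as an R Shiny application; the core first-order Markov intuition, that repeated patterns such as diagonal-lining make each response highly predictable from the previous one, can be sketched as follows. This is a toy predictability score built on assumed data, not the published Laz.R index.

```python
# Toy illustration of the first-order Markov idea behind careless-response
# detection: score how predictable each response is from the previous one.
# Repeated patterns (e.g., diagonal-lining 1,2,3,4,5,1,2,...) make every
# transition deterministic, pushing the score toward 1.
from collections import Counter, defaultdict

def transition_predictability(responses):
    """Fraction of transitions matching the modal next-response per state."""
    trans = defaultdict(Counter)
    for prev, cur in zip(responses, responses[1:]):
        trans[prev][cur] += 1
    hits = sum(c.most_common(1)[0][1] for c in trans.values())
    return hits / (len(responses) - 1)

diagonal = [1, 2, 3, 4, 5] * 3  # diagonal-lining: fully predictable
varied = [2, 5, 1, 4, 2, 3, 5, 1, 3, 4]
print(transition_predictability(diagonal))  # 1.0
print(transition_predictability(varied))
```

A sample-specific cutoff on a score like this (the paper uses the "kneedle algorithm") would then separate flagged from unflagged respondents.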

  • Research Article
  • 10.1002/jocb.70034
Best Practices for Leveraging PISA CT Data to Understand Gender Differences in Creative Thinking
  • May 25, 2025
  • The Journal of Creative Behavior
  • Christa L Taylor

Access to data from the 2022 PISA creative thinking assessment (PISA CT) provides a unique opportunity to advance our understanding of gender differences in creativity. However, in addition to the general theoretical and methodological considerations discussed elsewhere in this special issue, there are several matters specific to gender differences in creativity that should be considered when interpreting the results of the PISA CT. First, the overall creative thinking index on the PISA CT may not provide much value to understanding gender differences in creativity, as differences may be domain and task specific. Second, gender differences in careless responding may bias results for gender differences in PISA CT scores, as effort is associated with enhanced creative performance. Third, the restricted age range of PISA participants may limit the generalizability of results, as the developmental dynamics of gender differences in creativity suggest that the size and direction of gender differences in creative performance may change during this stage of adolescence. Each of these issues, as well as potential solutions and directions for future research using PISA CT scores to examine gender differences in creativity, are discussed.

  • Research Article
  • Cite Count Icon 1
  • 10.1080/00273171.2025.2492016
Accounting for Measurement Invariance Violations in Careless Responding Detection in Intensive Longitudinal Data: Exploratory vs. Partially Constrained Latent Markov Factor Analysis
  • Apr 15, 2025
  • Multivariate Behavioral Research
  • Leonie V D E Vogelsmeier + 2 more

Intensive longitudinal data (ILD) collection methods like experience sampling methodology can place significant burdens on participants, potentially resulting in careless responding, such as random responding. Such behavior can undermine the validity of any inferences drawn from the data if not properly identified and addressed. Recently, a confirmatory mixture model (here referred to as fully constrained latent Markov factor analysis, LMFA) has been introduced as a promising solution to detect careless responding in ILD. However, this method relies on the key assumption of measurement invariance of the attentive responses, which is easily violated due to shifts in how participants interpret items. If the assumption is violated, the ability of the fully constrained LMFA to accurately identify careless responding is compromised. In this study, we evaluated two more flexible variants of LMFA—fully exploratory LMFA and partially constrained LMFA—to distinguish between careless and attentive responding in the presence of non-invariant attentive responses. Simulation results indicated that the fully exploratory LMFA model is an effective tool for reliably detecting and interpreting different types of careless responding while accounting for violations of measurement invariance. Conversely, the partially constrained model struggled to accurately detect careless responses. We end by discussing potential reasons for this.

  • Research Article
  • 10.1177/25152459251338041
On the Statistical Analysis of Studies With Attention Checks
  • Apr 1, 2025
  • Advances in Methods and Practices in Psychological Science
  • Maya B Mathur

Attention checks are often used to identify and exclude participants who may be responding carelessly. There has been little statistical guidance on the analysis of such studies and on when it is indeed valid to simply exclude inattentive participants. To address this, I first formalize attention checks as measures intended to identify participants whose responses are free of measurement error. Measurement error could arise not only because of careless responding but also if some participants fail to receive the experimental manipulation in its intended form because they did not attend to its contents. I discuss the statistical assumptions under which it is valid to simply exclude inattentive participants. In randomized experiments, this standard analysis may lead to bias if (a) the dependent variable affects attentiveness or (b) there are variables that affect both attentiveness and the dependent variable. The latter assumption is stringent and is likely to be violated in many studies. I suggest a straightforward modification to the standard approach, that is, controlling for variables that affect both attentiveness and the dependent variable. This covariate-adjusted approach requires considerably less stringent assumptions. In two worked examples, I reanalyze previously published experiments on (a) a documentary intended to reduce consumption of meat and animal products and (b) flag-priming effects on political conservatism.
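The modification the abstract proposes amounts to estimating the treatment effect among attention-check passers while adjusting for covariates believed to affect both attentiveness and the dependent variable. A minimal sketch under that reading, using ordinary least squares; the function and variable names are illustrative, and this is not the paper's own code:

```python
import numpy as np

def covariate_adjusted_effect(y, treat, covariates, attentive):
    """Treatment-effect estimate among attention-check passers,
    controlling for covariates thought to affect both attentiveness
    and the outcome (sketch of the covariate-adjusted approach).

    y:          (n,) outcome
    treat:      (n,) 0/1 treatment indicator
    covariates: (n,) or (n, p) covariate values
    attentive:  (n,) boolean pass/fail on the attention check
    """
    keep = np.asarray(attentive).astype(bool)
    X = np.column_stack([np.ones(keep.sum()),   # intercept
                         treat[keep],
                         covariates[keep]])
    beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
    return beta[1]   # coefficient on the treatment indicator
```

The design choice is that exclusion alone assumes attentiveness is unrelated to the outcome given treatment; adding the shared-cause covariates to the regression relaxes that assumption considerably.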

  • Research Article
  • 10.1177/07342829251328132
Prevalence and Psychometric Implications of Careless Responses in an Online Student Survey
  • Mar 19, 2025
  • Journal of Psychoeducational Assessment
  • Başak Erdem-Kara + 1 more

Surveys are widely used data collection tools in empirical studies of human behaviour. Self-reporting plays a central role in exploring psychological processes integral to human behaviour and learning, such as motivation and emotion. However, respondents themselves can be a source of measurement error in survey research. In this context, we investigated ‘careless responding’, in which respondents fail to read or adequately attend to the constructs measured by survey items, producing data that deviate from participants’ true levels of the constructs being measured. Specifically, 405 undergraduate students, predominantly female (81.7%) with an average age of 21, completed the HEXACO Personality Inventory online. The inventory included several control items: two instructed response items, two reverse response items, and a self-report item to determine the prevalence of careless responders. Results indicated that the reverse items flagged a higher percentage of participants as careless responders than the instructed response items did, although the instructed response items showed much better consistency. Only a small number of participants were flagged by the self-report item. Females exhibited greater carefulness, and bonus points had no significant impact on carelessness. Subsequent analyses included inter-factor correlations, reliability statistics, descriptive statistics, and exploratory structural equation modelling for the careful and careless responder groups, with results worsening for the careless groups. We conclude by discussing the implications in light of the survey methodology literature.
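Instructed response items of the kind used above ("please select 'disagree' for this item") permit a simple deterministic flagging rule: any respondent who misses any instructed item is flagged. A minimal sketch, where the column indices and expected answers are hypothetical and not the study's actual items:

```python
import numpy as np

def flag_careless(data, checks):
    """Flag respondents who miss any instructed response item.

    data:   (n_respondents, n_items) matrix of Likert responses
    checks: dict mapping item column -> instructed answer,
            e.g. {14: 2, 41: 5} (illustrative indices/answers)
    Returns a boolean array: True = flagged as careless.
    """
    data = np.asarray(data)
    failed = np.zeros(data.shape[0], dtype=bool)
    for col, expected in checks.items():
        # a respondent fails if any instructed item is answered wrongly
        failed |= data[:, col] != expected
    return failed
```

Flagging on a single miss is a strict rule; as the study's consistency results suggest, combining several instructed items (and other indices) before excluding anyone is the safer practice.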

  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • .
  • .
  • 1
  • 2
  • 3
  • 4
  • 5
