Assessing the Predictive Validity of the Function Acquisition Speed Test in the Context of Voting Behavior


Similar Papers
  • Research Article
  • 10.1080/02640414.2024.2449316
Validity and normative scores of finger flexor strength and endurance tests estimated from a large sample of female and male climbers
  • Jan 5, 2025
  • Journal of Sports Sciences
  • Patrik Berta + 5 more

Recent reviews have highlighted conflicting findings regarding the validity of finger flexor strength and endurance tests in sport climbers, often due to small sample sizes and low ecological validity of the tests used. To address these gaps, 185 male and 122 female climbers underwent maximal finger flexor strength, intermittent and continuous finger flexor endurance, and the finger hang tests in a sport-specific setting to determine the predictive and concurrent validity of these tests. The finger hang test showed the strongest relationship to climbing ability for both sexes (R ≈ 0.75). However, despite its widespread use as an endurance test, the finger hang was found to be primarily determined by finger strength, explaining 65% and 80% of the variance in males and females, respectively. Finger strength emerged as the dominant factor, explaining the majority of variance in climbing ability (males 68%; females 64%), followed by intermittent endurance (males 28%; females 34%). These findings emphasize finger strength as the primary predictor of climbing ability and highlight the importance of intermittent endurance testing for assessing climbing-specific endurance of the finger flexors. No significant differences were found between male and female climbers in finger flexor strength and endurance when normalized to body mass.

  • Research Article
  • 10.1519/ssc.0000000000000862
Scientific Assessment of Agility Performance in Competitive Sports: Evolution, Application, Reliability, and Validity
  • Aug 23, 2024
  • Strength & Conditioning Journal
  • Jiachi Ye + 4 more

This systematic review aimed to analyze the evolution, reliability, and validity of agility testing in athletes. The results indicated the necessity of prioritizing reactive agility (RA) as the primary focus in the scientific assessment of athletes' agility performance. The cutting and “stop and go” tests were the most widely used agility tests, utilizing light or human random signals as stimuli. Overall, the agility tests demonstrated high reliability, and poor agility performance could be a predictive indicator of higher sports injury rates. Convergent validity between the agility and change of direction speed (CODS) tests was moderate. Agility tests could also differentiate athletes across performance levels and age groups. Future practitioners should focus on customizing “gold standard” agility tests for specific sports, which includes evaluating the reliability and validity of these tests.

  • Research Article
  • Cited by 24
  • 10.1177/014662168500900206
Effects of Test Preparation on the Validity of a Graduate Admissions Test
  • Jun 1, 1985
  • Applied Psychological Measurement
  • Donald E Powers

Test score improvement has been the major concern in nearly all the extant studies of special preparation, or "coaching," for tests. Recently, however, logical analyses of the possible outcomes and implications of special test preparation (Anastasi, 1981; Cole, 1982; Messick, 1981) have suggested that the issue of test score effects is but one aspect of the controversy surrounding coaching; the impact of special preparation on test validity is an equally germane consideration. Although the assumption is sometimes made that coaching can serve only to dilute the construct validity and impair the predictive power of a test, some kinds of special preparation may, by reducing irrelevant sources of test difficulty, actually improve both construct validity and predictive validity. This study examined the relationships of both internal and external criteria to Graduate Record Examination (GRE) candidates' performance on several analytical ability item types, obtained under several test preparation conditions. The purpose was to assess the effects of these various preparations on test reliability and validity. The preparation conditions were those previously shown to be effective, in varying degrees, in improving examinee performance on two of three analytical item types (Powers & Swinton, 1982, 1984). The data for this study were those collected by Powers & Swinton (1982, 1984). The results suggest that GRE analytical ability scores may relate more strongly to academic performance after special test preparation than under more standard conditions and that they may relate less to measures of other cognitive abilities (verbal and quantitative scores). No consistent effects were detected on either the internal consistency or the convergent validity of the analytical measure.

  • Research Article
  • Cited by 1
  • 10.1016/j.jsr.2025.03.008
Development and validation of a video-based hazard prediction test for e-bike riders in China.
  • Jul 1, 2025
  • Journal of safety research
  • Long Sun + 2 more

  • Research Article
  • Cited by 58
  • 10.1016/j.apmr.2011.11.033
Rasch Validation and Predictive Validity of the Action Research Arm Test in Patients Receiving Stroke Rehabilitation
  • Mar 13, 2012
  • Archives of Physical Medicine and Rehabilitation
  • Hui-Fang Chen + 3 more

  • Research Article
  • Cited by 8
  • 10.1126/science.159.3817.851
Standardized ability tests and testing. Major issues and the validity of current criticisms of tests are discussed.
  • Feb 23, 1968
  • Science (New York, N.Y.)
  • D A Goslin

At the outset a distinction was made between criticisms directed at the validity of tests and criticisms not affected by the validity of the tests. It was noted further that all criticisms of tests must take into consideration the type of test and the use to which the test is put. Criticisms of the validity of tests involved the following issues: (i) tests may be unfair to certain groups and individuals, including the extremely gifted, the culturally disadvantaged, and those who lack experience in taking tests; (ii) tests are not perfect predictors of subsequent performance; (iii) tests may be used in overly rigid ways; (iv) tests may not measure inherent qualities of individuals; and (v) tests may contribute to their own predictive validity by serving as self-fulfilling prophecies. Criticisms that are more or less independent of test validity included the effects of tests on (i) thinking patterns of those tested frequently; (ii) school curricula; (iii) self-image, motivation, and aspirations; (iv) groups using tests as a criterion for selection or allocation, or both; and (v) privacy. Several concluding remarks are in order: 1) This paper has focused almost entirely on criticisms of tests. However, the positive value of standardized tests should not be ignored. Here we must keep in mind what possible alternative measures would be used if standardized tests were abandoned. 2) We must begin thinking about tests in a much broader perspective: one that includes consideration of the social effects of tests as well as their validity and reliability. 3) Finally, an effort should be made to develop rational and systematic policies on the use of tests with the culturally disadvantaged, the dissemination of test results, and the problem of invasion of privacy. Such policies can be formulated only if we are willing to take a long hard look at the role we want testing to play in the society.
Standardized tests currently are a cornerstone in the edifice of stratification in American society. It is up to the social scientist to conduct research that will enable policy makers in education, business and industry, and government to determine in a consistent and rational way the ultimate shape of this edifice.

  • Research Article
  • Cited by 11
  • 10.1002/j.2330-8516.1986.tb00201.x
GENERALIZATION OF GRE GENERAL TEST VALIDITY ACROSS DEPARTMENTS
  • Dec 1, 1986
  • ETS Research Report Series
  • Robert F Boldt

This study of the validity of the GRE General Test used data from predictive validity studies that were conducted by the GRE Validity Study Service (VSS) in 79 graduate departments. The performance criterion was first-year grades in graduate school. Observed validities were computed, and for each graduate department validities were also estimated for groups at two other stages of selection: applicants for admission to the department, and all GRE takers. Two validity generalization hypotheses were tested. One was that the General Test's validities were equal across studies; the other was that the General Test's validities had equal ratios across studies, that is, that the level of the validities might vary from institution to institution but the ratios would be constant. These hypotheses were applied for VSS groups, applicant groups, and all GRE takers, and implied validities (validities that would be observed if the hypotheses were true) were calculated. When the implied validities were compared to the observed validities, it was found that the assumption of equal validity did not account well for differences in the level of observed validity of the GRE General Test. The equal ratio hypothesis accounted for the observed validities rather well, possibly due to overcapitalization on chance, but departmental discipline was not significantly related to the degree of fit of observed to implied validities. At all levels of selection, the study yielded applicant validities that were predominantly positive. This lends support to the presumption that the General Test's validity is transportable, i.e., institutions that do not use the General Test can, if they adopt it, expect it to prove valid. In view of the scarcity of very low or negative validities, studies revealing such validities should be questioned.

  • Research Article
  • Cited by 24
  • 10.1136/oem.2008.042903
Criterion-related validity of functional capacity evaluation lifting tests on future work disability risk and return to work in the construction industry
  • May 24, 2009
  • Occupational and Environmental Medicine
  • V Gouttebarge + 5 more

Objectives: To assess the criterion-related validity of the five Ergo-Kit (EK) functional capacity evaluation (FCE) lifting tests in construction workers on sick leave due to musculoskeletal disorders (MSDs). Methods: Six weeks, 6 months...

  • Research Article
  • Cited by 24
  • 10.1111/j.1540-5885.2010.00784.x
The Moderating Roles of Prior Experience and Behavioral Importance in the Predictive Validity of New Product Concept Testing
  • Dec 16, 2010
  • Journal of Product Innovation Management
  • Muammer Ozer

Concept testing has long been recognized as an important new product development (NPD) activity. As one of the widely used concept testing techniques, the method of intentions surveys relies on the purchase intentions of the potential buyers of new products and helps firms assess the viabilities of their new products before making major financial and nonfinancial commitments to their development. Despite the importance of intentions-based new product concept testing and its widespread use by firms, the correspondence between initial behavioral intentions and subsequent purchase behaviors has been relatively low and heterogeneous, making it very difficult for firms to draw any useful conclusions from intentions surveys. Focusing on the predictive validity of intentions-based new product concept testing and addressing several calls for future research to identify specific conditions making it more effective, this paper tests the moderating roles of prior experience and behavioral importance in the predictive validity of intentions-based new product concept testing. It also tests whether people who state a positive intention and people who state a negative intention are equally accurate in their intentions. Finally, it tests the relative moderating roles of prior experience and behavioral importance in the intentions–behavior relationship. The results based on two longitudinal surveys first suggested that people's prior experience moderates the relationship between behavioral intentions and actual behaviors in a way that the relationship is stronger when prior experience is high as opposed to when it is low. Second, they showed that the level of importance that people attach to a behavior also moderates the relationship between behavioral intentions and actual behaviors such that the relationship is stronger when behavioral importance is high as opposed to when it is low. 
Third, they indicated that the behavioral intentions of people who state that they will not perform a behavior are more accurate than are those of people who state that they will perform it. Finally, the results suggested that the impact of behavioral importance is greater than that of prior experience. This study offers several implications. Most notably, the results can help firms better understand different factors affecting the predictive validity of intentions-based new product concept testing and hence make more accurate new product decisions.

  • Research Article
  • Cited by 6
  • 10.1186/s12877-021-02621-z
Reliability and validity of a quick test of cognitive speed (AQT) in screening for mild cognitive impairment and dementia
  • Dec 1, 2021
  • BMC Geriatrics
  • Pouya Farokhnezhad Afshar + 4 more

Background: Cognitive disorders are an important issue in old age. Many cognitive tests exist, but some variables (e.g., age and education) affect their results. This study aimed to evaluate the reliability and validity of A Quick Test of Cognitive Speed (AQT) in screening for mild cognitive impairment (MCI) and dementia. Methods: This is a psychometric properties study. 115 older adults participated and were divided into three groups (46 with MCI, 24 with dementia, and 45 controls) based on the diagnosis of two geriatric psychiatrists. Participants were assessed with the AQT and the Mini-Mental State Examination (MMSE). Data were analyzed using Pearson correlation, independent t-tests, and ROC curves in SPSS v.23. Results: There was no significant correlation between the AQT subscales and age, and no significant difference in the AQT subscales by sex or educational level. Test-retest correlations ranged from 0.84 to 0.97. Concurrent validity between the MMSE and AQT was significant, with correlations of −0.78 for Color, −0.71 for Form, and −0.72 for Color-Form. Based on sensitivity and specificity, the cut-off points for differentiating older patients with MCI from controls were 43.50 s for Color, 52 s for Form, and 89 s for Color-Form; for differentiating patients with dementia from those with MCI, they were 62.50 s for Color, 111 s for Form, and 197.50 s for Color-Form. Conclusion: The findings show that the AQT is a suitable tool for screening cognitive function in older adults.
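The cut-off points above were derived from sensitivity and specificity. As a minimal, hypothetical sketch of one common way such a screening cut-off is chosen in an ROC-style analysis (maximizing Youden's J = sensitivity + specificity − 1; the study's exact selection rule is not stated, and the naming times below are made up):

```python
def best_cutoff(patient_times, control_times):
    """Return (cutoff, sensitivity, specificity) maximizing Youden's J.
    Longer times indicate impairment, so a case is 'positive' if time >= cutoff."""
    candidates = sorted(set(patient_times) | set(control_times))
    best = None
    for c in candidates:
        sens = sum(t >= c for t in patient_times) / len(patient_times)
        spec = sum(t < c for t in control_times) / len(control_times)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, c, sens, spec)
    return best[1], best[2], best[3]

# Hypothetical data: MCI patients tend to be slower than controls.
mci = [48, 51, 55, 60, 64, 70]
controls = [35, 38, 40, 42, 44, 50]

cutoff, sens, spec = best_cutoff(mci, controls)
print(cutoff, sens, spec)  # 48 1.0 0.8333...
```

The same trade-off logic underlies the 71/72-second cut-off reported in the next abstract; a real analysis would sweep thresholds over the full ROC curve rather than a six-person sample.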

  • Research Article
  • Cited by 12
  • 10.1111/j.1479-8301.2011.00388.x
Reliability and validity of A Quick Test of Cognitive Speed for detecting early‐stage dementia in elderly Japanese
  • Jun 1, 2012
  • Psychogeriatrics
  • Fumi Takahashi + 4 more

The aim of this study was to evaluate the reliability and validity of A Quick Test of Cognitive Speed (AQT) for detecting early-stage dementia in the elderly Japanese population. A total of 280 clinical participants (180 with mild Alzheimer's disease, 43 with amnestic mild cognitive impairment, 32 with non-amnestic mild cognitive impairment and 25 control subjects) and 22 community-dwelling elderly individuals without dementia were recruited. The Clinical Dementia Rating, the Mini-Mental State Examination, and AQT were administered to all participants. The Neurobehavioral Cognitive Status Examination was also administered to clinical participants. The intraclass correlation coefficient for the test-retest reliability of colour-form naming time on AQT was 0.88 (95% CI, 0.74-0.95, P < 0.001). AQT colour-form naming time was significantly correlated with the Clinical Dementia Rating, the total score on the Mini-Mental State Examination, and the total score on the Neurobehavioral Cognitive Status Examination and most of its subscales. AQT colour-form naming time was significantly longer in elderly individuals with mild Alzheimer's disease, amnestic mild cognitive impairment, and non-amnestic mild cognitive impairment than in control subjects. The receiver operating characteristic curve analysis indicated that AQT colour-form naming time significantly distinguished subjects with early-stage dementia (mild Alzheimer's disease, amnestic mild cognitive impairment, and non-amnestic mild cognitive impairment) from controls. The area under the curve was estimated to be 0.88 (95%CI = 0.82-0.95). A cut-off of 71/72 seconds yielded the best sensitivity/specificity trade-off: sensitivity = 85% and specificity = 76%. AQT is a useful brief screening tool for detecting early-stage dementia in elderly Japanese individuals.

  • Research Article
  • Cited by 42
  • 10.1111/j.1745-3984.1998.tb00526.x
The Effect of Coaching on the Predictive Validity of Scholastic Aptitude Tests
  • Mar 1, 1998
  • Journal of Educational Measurement
  • Avi Allalouf + 1 more

The present study was designed to examine whether coaching affects the predictive validity and fairness of scholastic aptitude tests. Two randomly allocated groups, coached and uncoached, were compared, and the results revealed that although coaching enhanced scores on the Israeli Psychometric Entrance Test by about 25% of a standard deviation, it did not affect predictive validity and did not create a prediction bias. These results refute claims that coaching reduces predictive validity and creates a bias against the uncoached examinees in predicting the criterion. The results are consistent with the idea that score improvement due to coaching does not result strictly from learning specific skills that are irrelevant to the criterion.

  • Research Article
  • Cited by 33
  • 10.1371/journal.pone.0198746
Admission testing for higher education: A multi-cohort study on the validity of high-fidelity curriculum-sampling tests
  • Jun 11, 2018
  • PLoS ONE
  • A Susan M Niessen + 2 more

We investigated the validity of curriculum-sampling tests for admission to higher education in two studies. Curriculum-sampling tests mimic representative parts of an academic program to predict future academic achievement. In the first study, we investigated the predictive validity of a curriculum-sampling test for first year academic achievement across three cohorts of undergraduate psychology applicants and for academic achievement after three years in one cohort. We also studied the relationship between the test scores and enrollment decisions. In the second study, we examined the cognitive and noncognitive construct saturation of curriculum-sampling tests in a sample of psychology students. The curriculum-sampling tests showed high predictive validity for first year and third year academic achievement, mostly comparable to the predictive validity of high school GPA. In addition, curriculum-sampling test scores showed incremental validity over high school GPA. Applicants who scored low on the curriculum-sampling tests decided not to enroll in the program more often, indicating that curriculum-sampling admission tests may also promote self-selection. Contrary to expectations, the curriculum-sampling test scores did not show any relationships with cognitive ability, but there were some indications of noncognitive saturation, mostly for perceived test competence. So, curriculum-sampling tests can serve as efficient admission tests that yield high predictive validity. Furthermore, when self-selection or student-program fit are major objectives of admission procedures, curriculum-sampling tests may be preferred over, or used in addition to, high school GPA.

  • Research Article
  • Cited by 22
  • 10.1007/bf02230978
Potential sources of criterion bias in supervisor ratings used for test validation
  • Jun 1, 1995
  • Journal of Business and Psychology
  • Joel Lefkowitz + 1 more

Four possible sources of criterion contamination were investigated in the supervisory performance ratings used for a predictive criterion-related validation study. Supervisors' liking for subordinates had a very large association with their performance ratings independent of the effects of employee ability. Also as hypothesized, expectations of employee qualifications were correlated significantly with initial (1-mo.) performance ratings but not with ratings made after 5 mos. Ethnicity was not associated with 1-mo. performance ratings, but after five months supervisors gave significantly higher ratings to subordinates of the same ethnic group as themselves. No evidence was found of sex bias in the ratings. Estimates of test validity were reduced substantially when the potential sources of criterion bias were controlled statistically. The data are interpreted in the contexts of construct relevance for ratings criteria, possible spurious inflation of employment test validities, and the developmental processes by which supervisor-subordinate relationships are established in the first few months of employment.

  • Research Article
  • Cited by 7
  • 10.1177/0013164487471027
Evaluation of Selected Interview Data in Improving the Predictive Validity of a Verbal Ability Test with Psychiatric Aide Trainees
  • Mar 1, 1987
  • Educational and Psychological Measurement
  • M K Distefano + 1 more

From 13 objective interview items, five with adequate response variability were studied to determine if they would improve the validity of a verbal ability selection test in predicting the work performance of 181 psychiatric aide trainees. Multiple regression analysis revealed that a combination of three of the interview variables (prior work experience, education, and age) raised the selection test validity from .27 (p < .01) to .34 (p < .01), but none of the variables individually significantly increased the validity of the test. While limited support was found for the use of such biographical interview data to enhance test validity, the number of variables studied was relatively few. None of the interview variables correlated as highly with the criterion as the verbal test, which is consistent with prior reviews. No significant race differences on the interview variables or performance criterion were found, and comparison of regression line slopes and intercepts revealed no evidence of selection test bias related to race.
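The incremental-validity logic in this abstract — adding interview variables to a test and checking whether the multiple correlation with the criterion rises — can be sketched as follows. This is an illustrative simulation with made-up scores, not the study's data or code:

```python
# Hypothetical sketch of incremental validity: does adding an interview
# variable (prior work experience) to a selection test raise the multiple
# correlation R with a performance criterion? All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 181
test = rng.normal(size=n)            # verbal ability test score
experience = rng.normal(size=n)      # prior work experience (interview item)
criterion = 0.3 * test + 0.2 * experience + rng.normal(size=n)

def multiple_R(X, y):
    """Multiple correlation: correlation of y with its least-squares fit."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.corrcoef(X @ beta, y)[0, 1]

r_test = multiple_R(test.reshape(-1, 1), criterion)
r_both = multiple_R(np.column_stack([test, experience]), criterion)
print(round(r_test, 2), round(r_both, 2))
```

In-sample, R can only stay flat or rise when predictors are added, which is why the study also checked whether each interview variable's contribution was statistically significant rather than relying on the raw increase alone.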
