Abstract

A number of studies have attempted to explain programming success in introductory computer science and MIS courses. Many of these studies have used a variety of cognitive tests: (1) the Group Embedded Figures Test, (2) the General Aptitude Test Battery, (3) the IBM Programmer Aptitude Test, (4) the SCAT quantitative and verbal subtests, (5) the SAT and ACT, and others. These studies share two major problems.

(1) The tests measure either a single cognitive factor or a mixture of cognitive factors that have not been separated. The single-cognitive-factor approach is much too narrow. The mixed-cognitive-factor approach often yields one or two scores that do not isolate the exact mental factors involved. For example, the SAT yields two scores, verbal and quantitative, and these two scores are likely to contain ten or more cognitive factors that cannot be separated from them. Even the General Aptitude Test Battery (GATB) does not isolate known cognitive factors. I suggest that researchers use the Kit of Factor-Referenced Cognitive Tests [1], a finely tuned battery that measures 23 known cognitive factors.

(2) The second problem with these studies is the type of statistical analysis used. Typically, simple correlation and regression are employed. Regression is best suited to prediction, not explanation. I suggest that factor analysis be used in research designed to explain programmer aptitude, since the explanation of variance is the purpose of factor analysis.

The research reported in this abstract used 18 tests [1], each measuring a separate mental-ability factor, and factor analysis was used to analyze the data. Programming grades in an introductory-level class were factor analyzed along with these 18 cognitive factors. The factor analysis produced seven factors. Programming scores loaded significantly on the second factor (loading r = .77, n = 45), and about 60% of the variance in programming aptitude was accounted for by this single factor. Six of the 18 cognitive factors loaded significantly on this factor:

REASONING, LOGICAL (loading r = .81)
VERBAL COMPREHENSION (loading r = .61)
INTEGRATIVE PROCESS (loading r = .54)
FLEXIBILITY OF USE (loading r = .41)
CLOSURE, SPEED OF (loading r = .39)
SEQUENTIAL MEMORY SPAN (loading r = .30)

These cognitive tests have also served well in prediction equations built with stepwise regression. The multiple R was .71 (p = .000, n = 45). With other variables added (preference for graphics, gender, algorithm comprehension), the multiple R climbed to .82 (p = .000, n = 45). To the best of my knowledge, .82 is the highest multiple R in the literature. Only variables with F-ratios of 3 or more were allowed into the equation.
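As a rough illustration of the factor-analytic approach described above, the sketch below factor-analyzes a score matrix (18 cognitive test scores plus a course grade) and reports which measures load on the same factor as the grade. It is a minimal sketch under assumed data: the variable names (scores, test_names) and the use of scikit-learn's FactorAnalysis with varimax rotation are illustrative assumptions, not the study's actual tooling.

# Minimal sketch: factor-analyze 18 cognitive test scores plus a programming
# grade, then inspect which tests load on the same factor as the grade.
# Assumes a (45 students x 19 measures) array `scores`; names are hypothetical.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

test_names = [f"cognitive_test_{i}" for i in range(1, 19)] + ["programming_grade"]
rng = np.random.default_rng(0)
scores = rng.normal(size=(45, 19))           # stand-in for the real score matrix

z = StandardScaler().fit_transform(scores)   # analyze standardized scores
fa = FactorAnalysis(n_components=7, rotation="varimax", random_state=0).fit(z)

loadings = fa.components_.T                  # shape: (19 measures, 7 factors)
grade_row = loadings[test_names.index("programming_grade")]
factor = int(np.argmax(np.abs(grade_row)))   # factor the grade loads on most strongly

print(f"programming grade loads {grade_row[factor]:+.2f} on factor {factor}")
for name, loading in zip(test_names, loadings[:, factor]):
    if abs(loading) >= 0.30:                 # report salient loadings only
        print(f"{name:22s} {loading:+.2f}")

With real score data, the per-factor loadings printed here play the role of the loadings listed above, and the squared loading of the grade on its dominant factor corresponds to the roughly 60% of variance accounted for.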
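The stepwise-regression result can be reproduced in spirit with a forward-selection loop that only admits a predictor when its F-to-enter is 3 or more, matching the entry criterion stated above. This is a sketch under assumed data: the predictor and outcome names (X, y) are hypothetical, statsmodels OLS stands in for whatever package was used, and the entry F for a single candidate is taken as the square of its t statistic.

# Minimal sketch of forward stepwise regression with an F-to-enter threshold of 3.
# Assumes a predictor DataFrame `X` (cognitive factors, graphics preference,
# gender, algorithm comprehension) and a grade vector `y`; names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(45, 8)),
                 columns=[f"predictor_{i}" for i in range(8)])
y = pd.Series(rng.normal(size=45), name="programming_grade")

selected = []
candidates = list(X.columns)
while candidates:
    # F-to-enter for each candidate given the variables already selected;
    # for a single added term this equals the square of its t statistic.
    f_to_enter = {}
    for var in candidates:
        design = sm.add_constant(X[selected + [var]])
        fit = sm.OLS(y, design).fit()
        f_to_enter[var] = fit.tvalues[var] ** 2
    best = max(f_to_enter, key=f_to_enter.get)
    if f_to_enter[best] < 3:                 # entry criterion: F-ratio of 3 or more
        break
    selected.append(best)
    candidates.remove(best)

final = sm.OLS(y, sm.add_constant(X[selected])).fit()
print("selected predictors:", selected)
print("multiple R:", round(float(np.sqrt(final.rsquared)), 2))

The multiple R printed at the end is the quantity reported in the abstract (.71 with the cognitive tests alone, .82 with the additional variables); the random stand-in data here will of course not reproduce those values.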
