Abstract

This study developed and examined new methods to identify and quantify moment-to-moment fluctuations in attention as measured during an auditory Continuous Performance Task (CPT). The study had three major components. The first was the investigation of methods to describe and quantify fluctuations of good performance in CPT data. That investigation produced two techniques: the first examined the number and length of hit runs (one or more consecutive successful target detections) in the data, and the second examined response data via spectral analysis. Where applicable, the methodologies were thoroughly tested to assess whether they would introduce artifacts or distort genuine fluctuations. The positive results of this testing strongly supported the validity of both techniques for identifying and describing fluctuations in CPT performance. The second major component was the application of these techniques to real CPT data: twenty to twenty-five minutes of archival CPT data from each of 40 participants were examined. The examination included a description of the nature of runs of good performance (hit runs) and the identification and description of the presence and distribution of periodicity in the data. Differences between participants of differing ability were also assessed, and two dependent measures (accuracy and reaction time) were used where possible. General findings suggested that runs and periodicity were detectable in participant performance, and that there were minimal differences in the nature of these fluctuations between participants of differing ability. The third component comprised validity testing of the techniques developed in the earlier components. Methods for examining the origins of fluctuations in CPT performance were designed and implemented. The primary question these methods addressed was whether the findings of the second component were attributable to fluctuation in an attentional mechanism, or to some random factor or artifact of test structure. The methodology involved creating 4000 simulated data sets by taking actual participant data and reassigning hits and misses to different targets, thereby holding total percent accuracy constant. Simulated data sets were matched to participant performance categories to minimize differences in the number of errors. Quantitative and qualitative comparisons between human and simulated subjects failed to provide firm evidence of differences between the two groups. Possible explanations for these results are discussed.
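The three analytic ideas summarized above (hit-run extraction, spectral analysis of the trial-by-trial response series, and accuracy-preserving shuffled data sets) can be sketched as follows. This is a minimal illustration, not the authors' actual code; the function names, the use of NumPy, and the 200-trial placeholder series are assumptions introduced for the example.

    import numpy as np

    def hit_runs(hits):
        # Lengths of runs of consecutive hits (1s) in a 0/1 hit/miss sequence.
        runs, current = [], 0
        for h in hits:
            if h:
                current += 1
            elif current:
                runs.append(current)
                current = 0
        if current:
            runs.append(current)
        return runs

    def accuracy_spectrum(hits):
        # Power spectrum of the mean-removed trial-by-trial hit/miss series.
        x = np.asarray(hits, dtype=float)
        x = x - x.mean()
        power = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0)  # frequency in cycles per trial
        return freqs, power

    def shuffled_dataset(hits, rng):
        # Reassign the same hits and misses to different targets, so total
        # percent accuracy is unchanged (one simulated data set).
        return rng.permutation(np.asarray(hits))

    rng = np.random.default_rng(0)
    observed = rng.integers(0, 2, size=200)   # placeholder for one participant's hit/miss data
    run_lengths = hit_runs(observed)
    freqs, power = accuracy_spectrum(observed)
    simulated = [shuffled_dataset(observed, rng) for _ in range(4000)]

Comparing the run-length distributions and spectra of an observed series with those of the shuffled sets is one way to ask whether apparent periodicity exceeds what chance alone would produce, which mirrors the human-versus-simulated comparison described in the third component.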
