Abstract

An empirical study of human expert reasoning processes is presented. Its purpose is to test a model of how a human expert's cognitive system learns to detect, and does detect, pertinent data and hypotheses. This process is called pertinence generation. The model is based on the phenomenon of spreading activation within semantic networks. Twenty‐two radiologists were asked to produce diagnoses from two very difficult X‐ray films. As the model predicted, pertinence increased with experience and with semantic network integration. However, the experts whose daily work involved explicit reasoning were able, in addition, to go beyond this level and to generate further pertinence. The results suggest that two qualitatively different kinds of expertise, basic and super, should be distinguished. A reinterpretation of the results of Lesgold et al. (1988) is proposed, suggesting that the apparent nonmonotonicities in performance are not representative of common radiological expertise acquisition but result from combining basic and super expertise on the same curve.
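To make the spreading-activation mechanism behind the model concrete, the following is a minimal illustrative sketch, not the authors' implementation: activation placed on a few perceived findings spreads along weighted links of a toy semantic network, attenuating with distance, so that better-integrated networks deliver more activation to pertinent hypotheses. The network fragment, node names, and parameter values are hypothetical.

```python
from collections import defaultdict

def spread_activation(edges, sources, decay=0.5, threshold=0.05, max_steps=3):
    """Propagate activation from source nodes along weighted links.

    edges:   dict mapping node -> list of (neighbour, link_weight) pairs
    sources: dict mapping initially activated node -> starting activation
    """
    activation = defaultdict(float, sources)
    frontier = dict(sources)
    for _ in range(max_steps):
        next_frontier = defaultdict(float)
        for node, act in frontier.items():
            for neighbour, weight in edges.get(node, []):
                pulse = act * weight * decay        # attenuate with distance
                if pulse >= threshold:              # sub-threshold spread stops
                    next_frontier[neighbour] += pulse
        for node, pulse in next_frontier.items():
            activation[node] += pulse
        frontier = next_frontier
        if not frontier:
            break
    return dict(activation)

# Hypothetical fragment of a radiological semantic network.
edges = {
    "opacity_right_lung": [("collapsed_lobe", 0.8), ("pneumonia", 0.6)],
    "collapsed_lobe": [("displaced_fissure", 0.9)],
    "pneumonia": [("fever", 0.7)],
}
print(spread_activation(edges, {"opacity_right_lung": 1.0}))
```

Under this reading, greater experience corresponds to richer, better-weighted links, so more activation reaches diagnostically pertinent nodes; the study's "super" experts would, in addition, apply explicit reasoning on top of this automatic spread.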
