Abstract

BRCA Gist is an Intelligent Tutoring System that helps women understand issues related to genetic testing and breast cancer risk. In two laboratory experiments and a field experiment with community and web-based samples, an avatar asked 120 participants to produce arguments for and against genetic testing for breast cancer risk. Two raters assessed the number of argumentation elements (claim, reason, backing, etc.) found in responses to these prompts (IRR = .85). When asked to argue for genetic testing, 53.3% of participants failed to meet the minimum operational definition of an argument: a claim supported by one or more reasons. When asked to argue against genetic testing, 59.3% failed to do so. Of those who failed to generate arguments, most simply listed disconnected reasons. However, participants who provided arguments against testing (40.7%) scored significantly higher on a posttest of declarative knowledge. In each study we found positive correlations between the quality of arguments against genetic testing (i.e., the number of argumentation elements) and genetic risk categorization scores. Although most interactions did not contain two or more argument elements, participants who included more argument elements when arguing against genetic testing showed greater learning outcomes. Apparently, many participants lack the skills to make coherent arguments. These results suggest an association between argumentation ability (knowing how to make complex arguments) and subsequent learning. Better education in developing arguments may be necessary for people to learn from generating arguments within Intelligent Tutoring Systems and other settings.
