Abstract

Learning to count in different bases is treated as a trivial task in almost all introductory mathematics courses. However, many students perform poorly at it, and this situation has motivated serious research on the matter. To study a model of count learning, we analyze the performance of a multilayer perceptron that learns to count in several bases (5, 10, 13, 20, 60). We give evidence that the bases are not equivalent for the model: the errors differ across bases, with a bias toward low error when the task is to learn to count in base 20. When the task is to learn to count in all bases following a given sequence, the model again shows non-equivalent errors for some bases. These findings may inform educational planning and lead to better introductory courses.
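The abstract does not specify the input encoding, architecture, or training procedure. As a minimal sketch under the assumption that each number is presented as a concatenation of one-hot digit vectors in the given base and the network is trained to output its successor, the counting task could be set up as follows (all function names, the tiny network, and the hyperparameters are illustrative, not taken from the paper):

```python
import numpy as np

def to_digits(n, base, width):
    """Little-endian base-`base` digit expansion of n, padded to `width` digits."""
    digits = []
    for _ in range(width):
        digits.append(n % base)
        n //= base
    return digits

def one_hot(digits, base):
    """Concatenate one one-hot vector of length `base` per digit."""
    v = np.zeros(len(digits) * base)
    for i, d in enumerate(digits):
        v[i * base + d] = 1.0
    return v

def counting_dataset(base, width):
    """All pairs (n, n+1 mod base**width), each side encoded as digit one-hots."""
    limit = base ** width
    X = np.stack([one_hot(to_digits(n, base, width), base) for n in range(limit)])
    Y = np.stack([one_hot(to_digits((n + 1) % limit, base, width), base) for n in range(limit)])
    return X, Y

rng = np.random.default_rng(0)

class TinyMLP:
    """One-hidden-layer perceptron trained with plain gradient descent on MSE."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))

    def forward(self, X):
        self.H = np.tanh(X @ self.W1)
        return self.H @ self.W2

    def train_step(self, X, Y, lr=0.1):
        out = self.forward(X)          # loss is computed before the update
        err = out - Y
        dW2 = self.H.T @ err
        dH = (err @ self.W2.T) * (1 - self.H ** 2)   # tanh' = 1 - tanh^2
        dW1 = X.T @ dH
        self.W1 -= lr * dW1 / len(X)
        self.W2 -= lr * dW2 / len(X)
        return float(np.mean(err ** 2))

# Example: successor task in base 5 with two digits (numbers 0..24).
X, Y = counting_dataset(base=5, width=2)
net = TinyMLP(n_in=X.shape[1], n_hidden=16, n_out=Y.shape[1])
losses = [net.train_step(X, Y) for _ in range(300)]
```

Comparing the final loss of such a network across bases (with the digit width adjusted so each base covers a comparable range) is one plausible way to operationalize the "non-equivalent errors" the abstract reports.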
