Abstract

Concurrency has been rapidly gaining importance in computing, and correspondingly in computing curricula. Concurrent programming is, however, notoriously hard even for expert programmers. New language designs promise to make it easier, but such claims call for empirical validation. We present a methodology for comparing concurrent languages for teaching purposes. A critical challenge is to avoid bias, especially when (as in our example application) the experimenters are also the designers of one of the approaches under comparison. For a study performed as part of a course, it is also essential to make sure that no student is penalized. The methodology addresses these concerns by using self-study material and applying an evaluation scheme that minimizes opportunities for subjective decisions. The example application compares two object-oriented concurrent languages: multithreaded Java and SCOOP. The results show an advantage for SCOOP even though the study participants had previous training in writing multithreaded Java programs. The lessons should be of use to educators interested in teaching concurrency, to researchers looking for objective ways of assessing teaching techniques, and to researchers who want to avoid bias in assessing an approach or tool that they have themselves designed.
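To illustrate the kind of difficulty the abstract alludes to, here is a minimal, hypothetical Java sketch (not taken from the paper or its study materials) showing a classic data race: two threads increment a shared counter without synchronization, so updates are silently lost. Pitfalls of this sort are what make multithreaded Java notoriously error-prone and motivate alternative designs such as SCOOP.

```java
// Illustrative sketch (not from the paper): a data race in
// multithreaded Java. Two threads perform unsynchronized
// read-modify-write updates on a shared field, so increments
// from one thread can overwrite those of the other.
public class RaceDemo {
    static int counter = 0; // shared, unsynchronized state

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // not atomic: load, add, store
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but typically prints less because
        // concurrent increments interleave and get lost.
        System.out.println("counter = " + counter);
    }
}
```

Fixing this in Java requires the programmer to spot the race and add explicit synchronization (e.g., `synchronized` or `AtomicInteger`); SCOOP's design aims to rule out such races by construction, which is precisely the kind of claim the study's methodology is meant to validate empirically.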
