Abstract

The concept of a graphical design language as a tool for teaching problem solving and algorithm design, independent of any specific programming language, is extremely attractive for initial instruction in algorithm design. Verifying the methodology requires a reliable, accepted metric to validate that the graphical design language approach actually improves the ability of problem solvers. Current software engineering metrics have concentrated primarily on predicting development overhead or evaluating the final software product. Using an environment (DATA) (Doran & Longenecker 1989; Doran & Longenecker 1990) that focuses on the crucial role of data in the algorithm design process, the goal is to establish an instrument for testing the algorithm design skills of problem solvers trained with DATA against those trained with traditional methods (flow charting, pseudo-code, Nassi-Shneiderman charts, etc.). A stable metric applicable to graphical design languages in general is an obvious requirement. The approach to developing the metric involves applying a prototype instrument to groups of students across a series of courses that carry the students from problem solving and algorithm design through to programmed implementation of algorithms. The performance of these groups will provide the data for calibrating the metric and validating its reliability. The metric will then be used to measure the impact of the DATA teaching environment on performance. The papers cited demonstrate the need for the DATA environment and its apparent impact; the natural next phase is the development of the metric.

