Abstract

The evaluation of e-learning solutions is a major unresolved issue for all those involved in e-learning. In this paper we illustrate the importance of context by means of a qualitative comparison of two e-learning prototype implementations: an action research case undertaken in conjunction with a major German insurance company, and a more experimental approach undertaken during an undergraduate university course, where a variety of learning strategies were tested. Despite the apparent differences between the two prototypes, we believe that evaluators from both settings can learn from one another's experiences, and that the differences lie in the ranking of the evaluation categories rather than in their inclusion or exclusion. We conclude that we can learn from other evaluation projects, not in terms of operational evaluation criteria, but in terms of understanding evaluation categories and their integrated nature within the e-learning organisation.
