There is much to commend about this paper, which was originally released as a technical report. First, the authors have provided a clear, circumscribed statement of the problem being addressed. The reader is given the unmistakable impression that the paper supplies limited but useful guidance intended for career guidance practitioners. Second, they employ careful and thorough methods in the form of differential feature-cost analyses, which are useful, say the authors, in the software selection phase of program planning. Their thoroughness is evident in the fact that the authors have gathered information about over 400 features of CACG systems! Third, they employ a fair and even-handed approach, sensitive to comparisons among programs whose developers have serious economic stakes in consumers' decisions. Finally, they provide good technical writing. It is a pleasure to write a response when the paper has been delivered in such good shape.

The intent of this response, therefore, is to examine the effectiveness of the report in achieving its purpose as stated in the abstract: to highlight similarities and differences among nine computer-assisted career guidance (CACG) systems, so that service providers may make informed choices concerning the adoption of such systems. The authors specifically avoid using the analogy of a scoresheet, which would probably encourage consumers simply to tally the checks for each system. They prefer instead to call the report's findings a preliminary guide and invoke the analogy of a useful map of the forest. The question I will address is: Does the paper assist the career guidance professional in making a reasoned consumer decision?

Characteristic of their thorough report, the authors specify several assumptions: (1) they assume the importance of the criteria, adopted from several disparate sources and modified to suit the purpose of this study; (2) they assume a checklist comparison does not oversimplify