Abstract

Two procedures for evaluating the performance of expert systems are illustrated: one evaluates predictive accuracy; the other is complementary in that it uncovers the factors that contribute to predictive accuracy. Using these procedures, it is argued that expert systems should be more accurate than human experts in two senses. First, expert systems must be more accurate in order to be cost-effective. Previous research is reviewed and original results are presented showing that simple statistical models typically outperform human experts at the task of combining evidence from a given set of information sources. The results also suggest the second sense in which expert systems should be more accurate: they should share the factors that contribute to human accuracy, but not the factors that detract from it. The thesis, then, is that one should both require and expect expert systems to be more accurate than humans.
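The "simple statistical models" for combining evidence referred to above are often equal-weight (unit-weight) linear combinations of standardized cues, as in the judgment-and-decision-making literature. A minimal illustrative sketch, assuming standardized numeric cues where higher values favor the criterion (this is a generic example, not the paper's actual procedure):

```python
import statistics

def standardize(column):
    """z-score a list of raw cue values (one information source)."""
    mu = statistics.mean(column)
    sd = statistics.stdev(column)
    return [(x - mu) / sd for x in column]

def unit_weight_scores(cases):
    """Equal-weight linear model: standardize each cue column, then
    sum the z-scores across cues for each case. Cases with higher
    totals are predicted to rank higher on the criterion."""
    columns = list(zip(*cases))           # cue columns across cases
    z_cols = [standardize(col) for col in columns]
    return [sum(z) for z in zip(*z_cols)] # one score per case
```

Such a model ignores cue intercorrelations and differential validities entirely, yet in many evidence-combination tasks it matches or exceeds expert judges, which is the kind of result the abstract summarizes.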

