Abstract

The analysis of medical databases has the potential to improve patient outcomes and/or lower the cost of health care delivery. Various techniques from statistics, pattern recognition, machine learning, and neural networks have been proposed to "mine" these data by uncovering patterns that can guide decision making. One factor that influences the acceptance of learned models is their consistency with existing medical knowledge. The aim of this study was to evaluate the potential for monotonicity constraints to bias machine learning systems toward rules that are both accurate and meaningful. Two data sets, drawn from problems as diverse as screening for dementia and assessing the risk of mental retardation, were collected, and a rule learning system was run on each, with and without monotonicity constraints. The resulting rules were shown to experts, who were asked how willing they would be to use such rules in practice, and the accuracy of the rules was also evaluated. Rules learned with monotonicity constraints were at least as accurate as rules learned without such constraints, and experts were, on average, more willing to use the rules learned with the constraints. The study suggests that cognitive factors make learned models coherent and, therefore, credible to experts.
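To make the idea of a monotonicity constraint concrete, the sketch below shows one way such a constraint could be used to filter candidate conditions during rule induction. This is a minimal illustration only; the feature names, declared directions, and the `Condition` class are hypothetical and do not come from the system evaluated in the paper.

```python
# Illustrative sketch: filtering candidate rule conditions with expert-declared
# monotonicity constraints. All names here are hypothetical examples.
from dataclasses import dataclass

# Expert-declared directions: +1 means larger values may only increase the
# likelihood of the positive class; -1 means they may only decrease it.
MONOTONE_DIRECTION = {
    "errors_on_recall_test": +1,   # more errors -> more likely impaired
    "years_of_education":    -1,   # more education -> less likely impaired
}

@dataclass
class Condition:
    feature: str
    op: str          # ">=" or "<="
    threshold: float

def satisfies_monotonicity(cond: Condition, predicts_positive: bool) -> bool:
    """Reject a candidate condition whose direction contradicts the
    expert-declared monotone relationship for that feature."""
    direction = MONOTONE_DIRECTION.get(cond.feature)
    if direction is None:            # no constraint declared for this feature
        return True
    # A rule for the positive class should test the feature in the direction
    # that increases the positive likelihood, and vice versa.
    wants_high = (direction == +1) == predicts_positive
    return cond.op == ">=" if wants_high else cond.op == "<="

# A positive-class rule testing "years_of_education >= 12" violates the
# declared direction and would be filtered out; the recall-error test is kept.
print(satisfies_monotonicity(Condition("years_of_education", ">=", 12), True))   # False
print(satisfies_monotonicity(Condition("errors_on_recall_test", ">=", 5), True)) # True
```

Under this kind of filter, the learner can only produce rules whose conditions point in the expert-sanctioned direction, which is one plausible mechanism for the greater expert acceptance reported above.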
