Financial institutions have recognized the value of combining human expertise with AI to build high-performing augmented decision-support systems. Stakeholders at lending firms increasingly acknowledge that feeding data into AI algorithms and eliminating human underwriters through automation, in the expectation of immediate returns on investment from business process automation, is a flawed strategy. This research emphasizes the necessity of auditing the consistency of decisions (professional judgments) made by human underwriters, and of monitoring how well the data capture a firm's lending policies, to lay a strong foundation for a legitimate system before investing millions in AI projects. Judgments made by experts in the past re-emerge as the outcomes, or labels, in the data later used to train and evaluate algorithms. This paper presents Evidential Reasoning-eXplainer, a methodology that estimates the probability mass supporting a given decision on a loan application by jointly assessing multiple independent and conflicting pieces of evidence. It quantifies variability in past decisions by comparing the subjective judgments underwriters made during manual financial underwriting with the outcomes estimated from data. This consistency analysis improves decision quality by bridging the gap between inconsistent past decisions and the desired true decisions. A case study at a specialist lending firm demonstrates the strategic work plan adopted to align underwriters and developers on capturing the correct data and auditing the quality of decisions.
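Fusing independent, conflicting pieces of evidence into a single probability mass is the core idea behind evidential reasoning. As a minimal illustration of that idea (not the paper's own algorithm, whose combination scheme may differ in detail), the sketch below applies Dempster's classic rule of combination to two hypothetical evidence sources on a loan application; the source names (`credit_score`, `affordability`) and mass values are invented for illustration.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two basic mass functions.

    m1, m2: dicts mapping frozenset hypotheses -> probability mass.
    Illustrative sketch only; the Evidential Reasoning-eXplainer
    methodology's actual combination scheme may differ.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass on incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    # Renormalize the non-conflicting mass
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Two independent, partially conflicting evidence sources (hypothetical):
APPROVE, DECLINE = frozenset({"approve"}), frozenset({"decline"})
EITHER = APPROVE | DECLINE  # ignorance: mass committed to neither outcome

credit_score = {APPROVE: 0.6, DECLINE: 0.1, EITHER: 0.3}
affordability = {APPROVE: 0.3, DECLINE: 0.5, EITHER: 0.2}

fused = combine(credit_score, affordability)
```

Here the fused masses quantify the joint extent of support for each outcome after discounting the conflict between the two sources, which is the kind of per-decision support estimate the abstract describes.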