Credit scoring is a prominent problem in machine learning (ML). The ML classifiers used to assess creditworthiness must be evaluated with appropriate metrics. Common metrics fall into two categories: cost-sensitive and traditional cost-insensitive. Traditional cost-insensitive metrics such as accuracy and the area under the curve (AUC) have several drawbacks: they can be difficult for managers to understand, can be misleading in certain scenarios, and often overlook the business realities of credit scoring. Existing cost-sensitive metrics, in turn, have their own shortcomings, leading many studies to fall back on classic AUC-based evaluation. In this study, we introduce the expected profit ratio (EPR), a novel framework for evaluating classifiers in credit scoring. EPR requires no base scenario and calculates profit accurately. Its flexibility makes it suitable for all types of credit-scoring problems, and its simplicity aids comprehension and explanation. By enabling accurate profit calculation, EPR helps managers, executives, and administrators make informed decisions and investments. This metric lays the foundation for more sophisticated and precise model evaluation and comparison in credit scoring. It can also serve as a performance measure alongside fairness and the other criteria required to achieve trustworthy artificial intelligence (AI).
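The claim that cost-insensitive metrics can mislead admits a simple numerical illustration. The sketch below is not the paper's EPR; the `profit` function, the per-loan gain of 100, and the default loss of 3000 are assumed figures chosen purely to show how a classifier with higher accuracy can yield lower (even negative) portfolio profit on an imbalanced credit dataset:

```python
def accuracy(y_true, y_pred):
    """Fraction of correct predictions (cost-insensitive)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def profit(y_true, y_pred, gain_good=100, loss_default=3000):
    """Portfolio profit under assumed, illustrative loan economics:
    an approved good loan (true 0, pred 0) earns gain_good; an approved
    defaulter (true 1, pred 0) loses loss_default; rejections earn 0."""
    total = 0
    for t, p in zip(y_true, y_pred):
        if p == 0:  # loan approved
            total += gain_good if t == 0 else -loss_default
    return total

# 100 applicants with a 5% default rate (label 1 = default).
y_true = [1] * 5 + [0] * 95

naive = [0] * 100                          # approve everyone
cautious = [1] * 15 + [0] * 85             # reject all 5 defaulters plus 10 good loans

# The naive model wins on accuracy (0.95 vs 0.90) but loses money (-5500),
# while the "worse" cautious model turns a profit (8500).
print(accuracy(y_true, naive), profit(y_true, naive))
print(accuracy(y_true, cautious), profit(y_true, cautious))
```

Under these toy numbers, ranking the two classifiers by accuracy and by profit gives opposite answers, which is precisely the kind of business-blind behavior the abstract attributes to cost-insensitive metrics.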