Signal detection theory (SDT) was developed to provide a measure of the discriminability of a signal against background noise, independently of response bias. However, the traditional signal detection measure d′ yields equal discriminability across a range of bias only under a narrow set of conditions, namely binormal noise and signal distributions with equal variances and equal base rates. In response to observed departures from these conditions, more robust alternatives to d′ have been developed, including d_a and, more recently, d[Formula: see text]. Each of these alternatives addresses some, but not all, of the difficulties that arise when the assumptions of SDT are violated. Moreover, none of these measures follows directly from the central idea of discriminability for an observer who adopts a minimize-error-count (MEC) strategy. I propose a new alternative to d′, d[Formula: see text], that is robust to violations of the standard signal detection assumptions, remains consistent under varying bias, and is grounded in the principle of discriminability under a MEC strategy. Simulations illustrate how d[Formula: see text] is similar to the recently developed d[Formula: see text] when the observer optimizes criterion placement to minimize the number of errors but, unlike d[Formula: see text], remains consistent irrespective of the observer's criterion placement. Moreover, unlike d_a, d[Formula: see text] reflects changes in discriminability related to the base rates of signal vs. noise presentations. The use of d[Formula: see text] also has implications for the interpretation of bias metrics, such as β and c, which are examined at the optimal criterion under a variety of conditions.
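
The following is a minimal simulation sketch, not the paper's implementation, illustrating the standard equal-variance Gaussian estimates of d′, c, and β mentioned in the abstract, and how the conventional d′ estimate drifts with criterion placement once the equal-variance assumption is violated. The function names and parameter values (e.g., sdt_indices, signal_sd = 2.0) are illustrative assumptions; the proposed measure itself is not reproduced here.

```python
# Sketch of conventional equal-variance SDT indices and their criterion
# dependence under unequal variances. Assumptions only; not the paper's code.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Conventional equal-variance Gaussian SDT estimates from response counts."""
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    zh, zf = norm.ppf(h), norm.ppf(f)
    d_prime = zh - zf            # sensitivity
    c = -0.5 * (zh + zf)         # criterion location
    beta = np.exp(c * d_prime)   # likelihood-ratio bias
    return d_prime, c, beta

def simulate(criterion, n=200_000, noise_sd=1.0, signal_mean=1.5, signal_sd=2.0):
    """Simulate yes/no responses for one fixed criterion.

    signal_sd != noise_sd deliberately violates the equal-variance assumption,
    so the conventional d' estimate depends on where the criterion is placed.
    """
    noise = rng.normal(0.0, noise_sd, n)
    signal = rng.normal(signal_mean, signal_sd, n)
    hits = int(np.sum(signal > criterion))
    fas = int(np.sum(noise > criterion))
    return sdt_indices(hits, n - hits, fas, n - fas)

for crit in (-0.5, 0.5, 1.5):
    d, c, b = simulate(crit)
    print(f"criterion={crit:+.1f}  d'={d:.2f}  c={c:.2f}  beta={b:.2f}")
```

Running the loop with different criterion values shows the estimated d′ changing with bias in the unequal-variance case, which is the inconsistency the abstract attributes to the traditional measure.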