Abstract
Research background: The bankruptcy literature is populated with scores of (econometric) models, ranging from Altman's Z-score, Ohlson's O-score and Zmijewski's probit model to k-nearest neighbors, classification trees, support vector machines, mathematical programming, evolutionary algorithms and neural networks, all designed to predict financial distress with the highest possible precision. We believe, however, that corporate default is too important a research topic to be identified with prediction accuracy alone. Despite the wealth of modelling effort, a unified theory of default is yet to be proposed.
 Purpose of the article: Owing to disagreement on both the definition, and hence the timing, of default, as well as on the measurement of prediction accuracy, comparisons of the predictive power of various models can be seriously misleading. The purpose of the article is to argue for a shift in research focus from maximizing accuracy to analysing the information capacity of predictors. By doing so, we may yet come closer to understanding default itself.
 Methods: We critically appraise the bankruptcy research literature for its methodological variety and empirical findings. Default definitions, sampling procedures, in- and out-of-sample testing, and accuracy measurement are all scrutinized. In the empirical part, we use a doubly stochastic Poisson process with a multi-period prediction horizon and a comprehensive database of some 15,000 Polish non-listed companies to illustrate the merits of our new approach to default modelling.
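In a doubly stochastic Poisson (Cox process) framework, each firm's default intensity is driven by its covariates, and the multi-period probability of default follows from the integrated intensity. The sketch below illustrates the mechanics only; the exponential-linear intensity, the covariate names and the coefficients are illustrative assumptions, not the model or estimates from the article.

```python
import math

def default_intensity(covariates, coefficients):
    """Illustrative intensity specification: lambda(t) = exp(beta' x(t))."""
    return math.exp(sum(b * x for b, x in zip(coefficients, covariates)))

def survival_probability(intensity_path, dt=1.0):
    """P(no default over the horizon) = exp(-integral of lambda),
    with the integral discretised as sum(lambda_t * dt)."""
    return math.exp(-sum(lam * dt for lam in intensity_path))

def default_probability(intensity_path, dt=1.0):
    """Multi-period probability of default = 1 - survival probability."""
    return 1.0 - survival_probability(intensity_path, dt)

# Hypothetical firm-year covariates (e.g. liquidity, profitability, a macro
# indicator) mapped to a two-year intensity path, then to a two-year PD.
betas = [-1.5, -0.8, 0.6]          # illustrative coefficients
path = [default_intensity(x, betas)
        for x in ([0.4, 0.1, 0.2], [0.3, 0.0, 0.3])]
pd_two_year = default_probability(path)
```

The key property exploited by multi-period prediction is that the PD is monotone in the horizon: extending the intensity path can only raise the cumulative default probability.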
 Findings & Value added: In the theoretical part, we call for the construction of a single, unified default-forecasting platform estimated on the largest possible dataset of firms, allowing the utility of various sources of micro, mezzo and macro information to be tested. Our preliminary empirical evidence is encouraging. The accuracy ratio amounts to 0.92 for t = 0 and drops to 0.81 two years ahead of default. We point to the pivotal role played by information on a firm's liquidity (or, alternatively, profitability) and, in contrast to Altman's tradition, to the hardly noticeable contribution of other financial ratios to predictive power. Macro data are shown to be critical: they add, on average, more than 10 percentage points to the accuracy ratio. In the future, we hope to integrate data on listed and non-listed firms into one model, ideally at a higher than annual frequency, and to include information on a firm's competitive position.
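The accuracy ratio reported above is, under the standard CAP-curve definition, equivalent to the Gini coefficient, AR = 2·AUC − 1, where AUC is the probability that a randomly chosen defaulter is scored riskier than a randomly chosen survivor. A minimal sketch of that computation (the scores and default flags below are toy data, not the article's sample):

```python
def accuracy_ratio(scores, defaults):
    """AR = 2*AUC - 1. AUC is estimated as the fraction of
    (defaulter, survivor) pairs in which the defaulter carries the
    higher risk score; ties count as half a win."""
    defaulter_scores = [s for s, d in zip(scores, defaults) if d]
    survivor_scores = [s for s, d in zip(scores, defaults) if not d]
    pairs = len(defaulter_scores) * len(survivor_scores)
    if pairs == 0:
        raise ValueError("need at least one defaulter and one survivor")
    wins = sum((sd > ss) + 0.5 * (sd == ss)
               for sd in defaulter_scores for ss in survivor_scores)
    return 2.0 * (wins / pairs) - 1.0

# Toy example: a model that ranks both defaulters above all survivors
# achieves AR = 1; a model that ranks them below achieves AR = -1.
ar_perfect = accuracy_ratio([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

Because AR is a pure ranking measure, it is invariant to monotone rescaling of the predicted probabilities, which is one reason it is preferred for comparing models estimated on samples with very different base default rates.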
Published in: Equilibrium. Quarterly Journal of Economics and Economic Policy