Modeling Loss Given Default Regressions
Part of the collection: OCC Working Papers – New Frontiers in Bank Risk
We investigate a puzzle in the literature: various parametric loss given default (LGD) statistical models appear to perform similarly despite large differences in sophistication. We compare their performance in a simulation framework and find that, even when the models use the full set of explanatory variables from the assumed data generating process and noise is minimized, they show similarly poor predictive accuracy and rank ordering when evaluated on mean predictions under a squared-error loss function. However, the sophisticated parametric models specifically designed to capture the bi-modal distribution of LGD outperform the less sophisticated models by a large margin in terms of predicted distributions. Our results also suggest that stress testing may pose a challenge to all LGD models because of limited loss data and the limited availability of relevant explanatory variables, and that model selection criteria based on goodness of fit may not serve the stress-testing purpose well.
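The intuition behind the abstract's central finding can be sketched in a few lines of code. The snippet below is a hypothetical illustration, not the paper's actual simulation design: it draws bi-modal LGD outcomes (a low-loss mode and a high-loss mode) from an assumed data generating process with a single covariate, then compares a plain OLS regression against the true conditional mean under squared-error loss. Because the outcome is bi-modal, even the oracle conditional mean carries large irreducible error, so the gap between the naive model and the oracle is small on this metric even though OLS says nothing correct about the shape of the LGD distribution. All parameter values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical DGP: covariate x drives the probability that a defaulted
# loan lands in the high-loss mode (LGD near 1) rather than the
# low-loss mode (LGD near 0), producing a bi-modal LGD distribution.
x = rng.normal(size=n)
p_high = 1.0 / (1.0 + np.exp(-1.5 * x))      # P(high-loss mode | x)
high = rng.random(n) < p_high
lgd = np.where(high,
               np.clip(rng.normal(0.9, 0.05, n), 0.0, 1.0),
               np.clip(rng.normal(0.1, 0.05, n), 0.0, 1.0))

# Model 1: plain OLS of LGD on x, which ignores the bi-modality.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, lgd, rcond=None)
pred_ols = X @ beta

# Model 2: the true conditional mean E[LGD | x] implied by the DGP.
pred_true = 0.9 * p_high + 0.1 * (1.0 - p_high)

def rmse(pred):
    return float(np.sqrt(np.mean((lgd - pred) ** 2)))

print(f"OLS RMSE:                  {rmse(pred_ols):.3f}")
print(f"True conditional-mean RMSE: {rmse(pred_true):.3f}")
```

Both errors are dominated by the spread between the two modes, so squared-error comparisons of mean predictions barely separate the models; distinguishing them requires comparing the full predicted distributions, which is the direction the abstract points to.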