Introduction

The question of whether property-liability insurers manage reported reserves for losses and loss adjustment expenses has received considerable attention in the insurance literature. Two distinct but related lines of empirical research have directly addressed this issue: one asks whether there is evidence of statistically significant misestimation in the original reserve (Forbes, 1970; Anderson, 1971; Balcarek, 1975; Ansley, 1979; Smith, 1980); the other asks whether misestimation of the original reserve can be explained in terms of income smoothing behavior (Smith, 1980; Weiss, 1985; Grace, 1990).

Defining reserve error as the difference between an initial and a terminal value of either estimated reserves (e.g., Grace) or incurred losses (e.g., Weiss), existing studies have relied upon four to five years of loss development to measure reserve error. The accuracy of these measurements depends, however, upon how closely the chosen terminal values approximate fully developed values. Consequently, an informed choice among alternative loss development horizons requires knowledge of the speed with which subsequent reserve reestimates converge to fully developed values. This issue has remained largely unexplored.
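As a concrete illustration of this horizon-dependent measurement, the following sketch computes reserve errors at successive development horizons. It is not drawn from any of the cited studies; the function name, the sign convention (initial minus terminal value), and the toy reestimates are illustrative assumptions only.

```python
# Illustrative sketch only: the toy data and the initial-minus-terminal
# sign convention are assumptions, not taken from the cited studies.

# reestimates[k]: the reestimate of one accident year's reserve after
# k years of development (k = 0 is the originally reported reserve).
reestimates = [1000.0, 1080.0, 1110.0, 1118.0, 1120.0, 1121.0]

def reserve_error(reestimates, horizon):
    """Reserve error measured against the reestimate taken as the
    terminal value after `horizon` years of development."""
    return reestimates[0] - reestimates[horizon]

# The measured error depends on the chosen horizon and converges
# only as the reestimates approach their fully developed value.
for k in range(1, len(reestimates)):
    print(f"{k}-year development: error = {reserve_error(reestimates, k):+.1f}")
```

Under this convention a negative error indicates that the original reserve understated the eventual liability; whatever the convention, the magnitude of the measured error clearly depends on the development horizon chosen.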
The primary purpose of this research is to provide some initial evidence concerning the dynamics of the underlying reserve reestimation process for a sample of stock insurers. To fulfill this aim, two research questions are addressed. First, when testing for the existence of statistically significant errors in the original reserve estimates of a sample of insurers, what minimum point of loss development is necessary to draw correct statistical conclusions? Second, how is the precision of reserve error measurements affected by the choice among alternative loss development horizons? In addressing these questions, we also extend the existing loss reserve error literature by providing evidence of significant reserve misestimation during the period from 1977 through 1987.

This study suggests that no single measurement horizon is adequate in all circumstances. Shorter development periods appear to be sufficient for detecting statistically significant misestimation of the original reserve across a sample of insurers; substantially longer development periods are necessary when the accuracy of individual insurer reserve error measurements is important.

Motivation

Ideally, reserve error would be measured as the difference between the originally established reserve and its fully developed value. The lack of complete loss development data, however, imposes a practical limitation on this ideal. Particularly for the reserves associated with liability losses, reliance on less than fully developed reserves to measure error is virtually inescapable.

The general reliance on five-year loss developments stems from Forbes's (1970, p. 531) observation that

    a four year incurred loss development period is necessary to measure the accuracy of an insurer's reserving policy properly. At the end of this period, 97-100 percent of the original claims will have been settled...and the remaining claims will have developed to a point where their reserve closely approximates their actual unpaid liability.

This observation was based solely upon the automobile bodily injury loss reserves of 1944 through 1961 (Forbes, 1970, p. 528). Since then, the proportion of total premiums attributable to liability risks has increased, extending the period over which losses are paid. Concurrently, the proportion of reserves attributable to incurred but not reported losses has also risen (Aiuppa and Trieschmann, 1987). In light of these changes, it seems prudent to examine the sufficiency of alternative loss development horizons for the purpose of reserve error measurement. In addition, the data necessary to evaluate statistically the sufficiency of five-year loss developments for measuring reserve error have become available. …