Credibility Theory Under the Least Squared Relative Loss Function
Abstract
The classical Bühlmann model employs a least squared loss criterion that penalizes pricing errors equally across all risk classes. In contrast, this paper develops a new credibility theory based on the least squared relative loss (LSRL) function to address scenarios where the classical approach may fall short. We derive explicit expressions of LSRL-based credibility estimators, including non-parametric versions and Bühlmann–Straub extensions. Through a comparative study, we illustrate the real-world applicability of the LSRL estimator across different scenarios, highlighting its advantages and limitations in comparison to the classical model. Additionally, we explore different LSRL formulations to provide deeper insights into their practical viability.
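The abstract does not reproduce the paper's exact LSRL definition, but under one common formulation of squared relative loss, L(a, x) = ((a - x)/x)^2, the best constant predictor of a positive claim X minimizes E[((a - X)/X)^2], which gives a* = E[1/X]/E[1/X^2], whereas ordinary squared loss gives a* = E[X]. The following stdlib-only sketch (sample data and formulation are illustrative assumptions, not the paper's method) contrasts the two minimizers on a right-skewed claim sample:

```python
import random

# Illustrative only: the paper's exact LSRL formulation is not reproduced here.
# Under one common squared relative loss L(a, x) = ((a - x)/x)^2, the best
# constant predictor of a positive claim X minimizes E[((a - X)/X)^2], giving
# a* = E[1/X] / E[1/X^2]; ordinary squared loss gives a* = E[X].

random.seed(42)
claims = [random.lognormvariate(0.0, 0.75) for _ in range(50_000)]

n = len(claims)
mean_claim = sum(claims) / n                # minimizer under squared loss
inv1 = sum(1.0 / x for x in claims) / n     # empirical E[1/X]
inv2 = sum(1.0 / x**2 for x in claims) / n  # empirical E[1/X^2]
lsrl_opt = inv1 / inv2                      # minimizer under relative loss

def relative_loss(a):
    """Empirical mean of ((a - x)/x)^2 over the sample."""
    return sum(((a - x) / x) ** 2 for x in claims) / n

# The closed-form minimizer should beat nearby candidates on the sample.
assert relative_loss(lsrl_opt) <= relative_loss(mean_claim)
assert relative_loss(lsrl_opt) <= relative_loss(lsrl_opt * 1.05)

print(f"squared-loss minimizer (mean): {mean_claim:.4f}")
print(f"relative-loss minimizer:       {lsrl_opt:.4f}")
```

Because relative loss penalizes errors in proportion to the claim size, its minimizer sits below the plain mean for a right-skewed distribution, which is the kind of asymmetry the LSRL criterion is designed to capture.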
- Research Article
29
- 10.1016/j.ins.2021.07.054
- Jul 20, 2021
- Information Sciences
A novel three-way decision approach under hesitant fuzzy information
- Research Article
149
- 10.1016/j.ijar.2019.12.020
- Jan 9, 2020
- International Journal of Approximate Reasoning
A multiple attribute decision making three-way model for intuitionistic fuzzy numbers
- Research Article
104
- 10.1080/10920277.2005.10596196
- Apr 1, 2005
- North American Actuarial Journal
Credibility is a form of insurance pricing that is widely used, particularly in North America. The theory of credibility has been called a “cornerstone” in the field of actuarial science. Students of the North American actuarial bodies also study loss distributions, the process of statistical inference of relating a set of data to a theoretical (loss) distribution. In this work, we develop a direct link between credibility and loss distributions through the notion of a copula, a tool for understanding relationships among multivariate outcomes. This paper develops credibility using a longitudinal data framework. In a longitudinal data framework, one might encounter data from a cross section of risk classes (towns) with a history of insurance claims available for each risk class. For the marginal claims distributions, we use generalized linear models, an extension of linear regression that also encompasses Weibull and Gamma regressions. Copulas are used to model the dependencies over time; specifically, this paper is the first to propose using a t-copula in the context of generalized linear models. The t-copula is the copula associated with the multivariate t-distribution; like the univariate t-distribution, it seems especially suitable for empirical work. Moreover, we show that the t-copula gives rise to easily computable predictive distributions that we use to generate credibility predictors. Like Bayesian methods, our copula credibility prediction methods allow us to provide an entire distribution of predicted claims, not just a point prediction. We present an illustrative example of Massachusetts automobile claims, and compare our new credibility estimates with those currently existing in the literature.
- Research Article
16
- 10.1080/10920277.1998.10595681
- Jan 1, 1998
- North American Actuarial Journal
Current formulas in credibility theory often estimate expected claims as a function of the sample mean of the experience claims of a policyholder. An actuary may wish to estimate future claims as a function of some statistic other than the sample arithmetic mean of claims, such as the sample geometric mean. This can be suggested to the actuary through the exercise of regressing claims on the geometric mean of prior claims. It can also be suggested through a particular probabilistic model of claims, such as a model that assumes a lognormal conditional distribution. In the first case, the actuary may lean towards using a linear function of the geometric mean, depending on the results of the data analysis. On the other hand, through a probabilistic model, the actuary may want to use the most accurate estimator of future claims, as measured by squared-error loss. However, this estimator might not be linear. In this paper, I provide a method for balancing the conflicting goals of linearity and accuracy. The credibility estimator proposed minimizes the expectation of a linear combination of a squared-error term and a second-derivative term. The squared-error term measures the accuracy of the estimator, while the second-derivative term constrains the estimator to be close to linear. I consider only those families of distributions with a one-dimensional sufficient statistic and estimators that are functions of that sufficient statistic or of the sample mean. Claim estimators are evaluated by comparing their conditional mean squared errors. In general, functions of the sufficient statistics prove to be better credibility estimators than functions of the sample mean.
- Research Article
- 10.1088/1742-6596/1725/1/012099
- Jan 1, 2021
- Journal of Physics: Conference Series
Credibility theory is one of the tools to predict the amount of future claims by combining a particular policyholder's past claims experience with external information, called the manual rate, obtained from the experience of a large group of policyholders. One widely used form is Bühlmann credibility, which accommodates heterogeneity of risk exposures by assigning a unique risk parameter to each individual. However, Bühlmann credibility requires the assumption that the risk parameters are independent, which almost surely cannot be fulfilled by individuals living in the same area. Therefore, a Bühlmann credibility estimator with correlated risk parameters is constructed using orthogonal projection in Hilbert space, and the parameters of the model are estimated. In addition, the standard Bühlmann credibility estimator and the estimator with correlated risk parameters are compared in predicting future claim amounts on data from a life insurance company. Comparing root mean square errors, the credibility estimator with correlated risk parameters predicts future claims more accurately: its predicted claim amounts lie closer to the actual amounts. Moreover, as the correlation increases, the root mean square error becomes smaller, and the credibility estimator applied to data partitioned by the amount of past claims performs better than when applied to unpartitioned data.
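The standard nonparametric Bühlmann estimator that this abstract takes as its baseline can be sketched in a few lines of stdlib Python. The claim data below is made up for illustration; the formulas (overall mean, expected process variance, variance of hypothetical means, credibility factor Z) are the standard ones:

```python
from statistics import mean, variance

# Minimal sketch of standard nonparametric Buhlmann credibility (the
# independent-risk-parameter baseline); the claim data below is hypothetical.
# For r risk classes with n observations each:
#   mu  = overall (collective) mean,
#   EPV = expected process variance  (mean of within-class sample variances),
#   VHM = variance of hypothetical means (variance of class means - EPV/n),
#   Z   = n / (n + EPV/VHM),  premium_i = Z * class_mean_i + (1 - Z) * mu.

claims = {                      # hypothetical past claims per risk class
    "A": [102.0, 95.0, 110.0, 98.0],
    "B": [140.0, 155.0, 160.0, 150.0],
    "C": [80.0, 70.0, 90.0, 85.0],
}

n = len(next(iter(claims.values())))             # observations per class
class_means = {k: mean(v) for k, v in claims.items()}
mu = mean(class_means.values())                  # collective mean
epv = mean(variance(v) for v in claims.values())
vhm = max(variance(class_means.values()) - epv / n, 0.0)

z = n / (n + epv / vhm) if vhm > 0 else 0.0      # credibility factor in [0, 1)
premiums = {k: z * m + (1 - z) * mu for k, m in class_means.items()}

assert 0.0 <= z < 1.0
for k, m in class_means.items():
    # Each premium lies between the class's own mean and the collective mean.
    assert min(m, mu) <= premiums[k] <= max(m, mu)

print(f"Z = {z:.3f}")
for k, p in premiums.items():
    print(f"class {k}: premium {p:.2f}")
```

Each premium is a convex combination of the class's own experience and the collective mean, which is exactly the pooling behavior that the correlated-risk-parameter extension in the abstract modifies.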
- Research Article
6
- 10.1016/j.insmatheco.2022.02.003
- Feb 28, 2022
- Insurance: Mathematics and Economics
A general optimal approach to Bühlmann credibility theory
- Conference Article
- 10.58895/ksp/1000174544-12
- Nov 29, 2024
This paper investigates restart strategies for algorithms whose success depends on an algorithmic parameter λ. It is assumed that there exists a unique unknown optimal λ. After each restart, λ is increased. The main question is whether there is an optimal strategy for choosing λ after each restart. To this end, possible restart strategies are classified into parameter-dependent strategy types. A loss function is introduced that measures the wasted computational costs compared to the optimal strategy. One criterion that a viable restart strategy must satisfy is that the loss relative to the optimal λ is bounded. Experimental evidence demonstrates that this is not the case for all strategy types. However, for a specific strategy type, where the parameter λ is increased multiplicatively by an increasing constant ρ, the relative loss function has an upper bound. It is shown that for this strategy type there is an optimal choice for the parameter ρ that is independent of the optimal λ.
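The abstract does not fully specify its cost model, but the boundedness claim can be illustrated under a simple assumed model: a run with parameter λ costs λ and succeeds iff λ ≥ λ*. The multiplicative strategy λ_k = λ₀·ρ^k then pays total cost at most (ρ²/(ρ−1))·λ*, a classical doubling-style bound on the relative loss (the cost model and values here are illustrative assumptions, not the paper's experiments):

```python
# Illustrative cost model only (the abstract does not fully specify one):
# assume a run with parameter lam costs lam and succeeds iff lam >= lam_star.
# The multiplicative strategy lam_k = lam0 * rho**k then pays total cost
#   sum_{k<=K} lam0 * rho**k  <=  (rho**2 / (rho - 1)) * lam_star,
# a classical doubling-style bound on the relative loss.

def restart_cost(lam_star, lam0=1.0, rho=2.0):
    """Total cost paid until the first run with lam >= lam_star succeeds."""
    total, lam = 0.0, lam0
    while True:
        total += lam            # pay for this run
        if lam >= lam_star:
            return total        # success: lam reached the unknown optimum
        lam *= rho              # restart with a multiplicatively larger lam

rho = 2.0
bound = rho**2 / (rho - 1)      # = 4 for rho = 2
for lam_star in [1.5, 7.0, 100.0, 12345.0]:
    rel_loss = restart_cost(lam_star, rho=rho) / lam_star
    assert rel_loss <= bound    # relative loss stays bounded, independent of lam*
    print(f"lam* = {lam_star:>8}: relative loss {rel_loss:.3f} <= {bound}")
```

Under this cost model the bound ρ²/(ρ−1) holds for every λ* ≥ λ₀ and is minimized at ρ = 2, which matches the abstract's claim that the optimal ρ does not depend on the unknown optimal λ.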
- Research Article
3
- 10.1051/matecconf/201822010004
- Jan 1, 2018
- MATEC Web of Conferences
The speckle noise of sonar images seriously affects human interpretation and automatic recognition. Achieving precise segmentation of sonar images with speckle noise is an important and difficult problem in image processing. Fully convolutional networks (FCN) have the advantage of accepting images of arbitrary size and preserving the spatial information of the original input. In this paper, image features are learned automatically by a convolutional neural network, and the original learning rule based on the mean squared error loss function is improved. Taking the pixel as the processing unit, a segmentation method based on an FCN model with a relative loss function (FCN-RLF) is proposed for small underwater sonar images, achieving pixel-level segmentation. Experimental results show that the improved algorithm increases segmentation accuracy and better preserves the edges and details of sonar images. The proposed model also rejects speckle noise in sonar images more effectively.
- Research Article
29
- 10.1016/j.ins.2022.04.055
- Apr 29, 2022
- Information Sciences
An optimization viewpoint on evaluation-based interval-valued multi-attribute three-way decision model
- Research Article
3
- 10.1007/s00362-015-0719-6
- Oct 17, 2015
- Statistical Papers
In classical credibility theory, claims are assumed to be independent over risks and the premiums are derived under squared loss functions. However, in many practical situations these assumptions may be violated. Hence, this paper investigates credibility estimators under a balanced loss function with an equal dependence structure among the individual risks and an inflation factor. Specifically, the inhomogeneous and homogeneous credibility estimators are derived for the Bühlmann–Straub credibility model.
- Book Chapter
1
- 10.1002/9780470012505.tac067
- Sep 24, 2004
- Encyclopedia of Actuarial Science
The background for the development of credibility theory was the situation in which there was a portfolio of similar policies, for which the natural thing would have been to use the same premium rate for all the policies. However, this would not have captured any individual differences between the policies and therefore a methodology was developed that also utilized the claims experience from the individual policy. In this way, credibility estimation can be seen to allow the pooling of information between risks in premium rating. The consequence of this is that the premium is not estimated using just the data for the risk being rated, but also using information from similar risks. In the context of claims reserving, the reason for using an approach based on credibility estimation is similar: information from different sources can be ‘shared’ in some way. This article explains the principles of the reserving methods that use credibility theory from the actuarial literature, and refers the reader to the articles and other methods that use a similar modeling philosophy.
- Conference Article
- 10.2991/gefhr-14.2014.63
- Jan 1, 2014
In classical credibility models, claims are assumed to be independent and identically distributed. In many practical situations, however, claims are not identically distributed. In this paper, we present assumptions for a risk-heterogeneous portfolio and build credibility models with a dependent risk structure under the exponential principle. By means of the orthogonal projection method, the credibility estimator is obtained. The results generalize some well-known existing results in credibility theory.
Keywords: risk heterogeneity, credibility estimator, orthogonal projection, dependent risk
- Research Article
16
- 10.1016/s0167-6687(99)00048-7
- May 1, 2000
- Insurance: Mathematics and Economics
Credibility using semiparametric models and a loss function with a constancy penalty
- Research Article
80
- 10.2143/ast.27.1.563206
- May 1, 1997
- ASTIN Bulletin
This paper shows how credibility theory can be encompassed within the theory of Hierarchical Generalized Linear Models. It is shown that credibility estimates are obtained by including random effects in the model. The framework of Hierarchical Generalized Linear Models allows a more extensive range of models to be used than straightforward credibility theory. The model fitting and testing procedures can be carried out using a standard statistical package. Thus, the paper contributes a further range of models which may be useful in a wide range of actuarial applications, including premium rating and claims reserving.