Abstract

Catastrophe models estimate risk at the intersection of hazard, exposure, and vulnerability. Each of these areas requires diverse sources of data, which are very often incomplete, inconsistent, or missing altogether. This poor data quality is a source of epistemic uncertainty, which affects the vulnerability models as well as the output of the catastrophe models. This article identifies the different sources of epistemic uncertainty in the data and elaborates on strategies to reduce this uncertainty, in particular through identification, augmentation, and integration of the different types of data. The challenges are illustrated through the Florida Public Hurricane Loss Model (FPHLM), which estimates insured losses on residential buildings caused by hurricane events in Florida. To define the input exposure, and for model development, calibration, and validation purposes, the FPHLM teams accessed three main sources of data: county tax appraiser databases, National Flood Insurance Program (NFIP) portfolios, and wind insurance portfolios. The data from these different sources were reformatted and processed, and the insurance databases were separately cross-referenced at the county level with the tax appraiser databases. The FPHLM hazard teams assigned estimates of the natural hazard intensity measures to each insurance claim. These efforts produced an integrated and more complete set of building descriptors for each policy in the NFIP and wind portfolios. The article describes the impact of these uncertainty reductions on the development and validation of the vulnerability models and suggests avenues for data improvement. Lessons learned should be of interest to professionals involved in disaster risk assessment and management.
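
As a rough illustration of the cross-referencing step described above, the following sketch joins a toy insurance portfolio to a toy county tax appraiser table on a normalized address key and lets the tax roll fill in missing building descriptors. All field names, values, and matching rules are hypothetical placeholders, not the FPHLM's actual schemas or procedures.

```python
import pandas as pd

def normalize_address(addr: str) -> str:
    """Crude address key: uppercase, drop punctuation, collapse whitespace."""
    return " ".join(addr.upper().replace(".", "").replace(",", "").split())

# Toy insurance exposure records for one county, with gaps in the descriptors.
policies = pd.DataFrame({
    "policy_id": ["P1", "P2"],
    "site_address": ["123 Main St.", "45 Ocean Ave"],
    "year_built": [None, 1987],
    "roof_shape": [None, None],
})

# Toy county tax appraiser records for the same county.
tax_roll = pd.DataFrame({
    "parcel_id": ["T-001", "T-002"],
    "site_address": ["123 MAIN ST", "45 OCEAN AVE"],
    "year_built": [1979, 1987],
    "roof_shape": ["gable", "hip"],
})

# Build a common join key from the address fields.
for df in (policies, tax_roll):
    df["addr_key"] = df["site_address"].map(normalize_address)

# Left join keeps every policy; tax roll attributes fill descriptor gaps only
# where the insurance record has no value of its own.
merged = policies.merge(tax_roll, on="addr_key", how="left", suffixes=("", "_tax"))
for col in ("year_built", "roof_shape"):
    merged[col] = merged[col].fillna(merged[f"{col}_tax"])

print(merged[["policy_id", "addr_key", "year_built", "roof_shape"]])
```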

Highlights

  • Catastrophe models for man-made infrastructure have four main components: a hazard component, which models the hazard, for example, hurricanes or earthquakes; an exposure model, which categorizes the exposure, for example, buildings, into generic classes; a vulnerability component, which models the effects of the hazard on the exposure and defines vulnerability functions for each building class; and an actuarial component, which combines the vulnerability, the hazard, and the exposure to quantify the risk in terms of physical damage, economic damage, or insured losses

  • The analysis of insurance claims data provides a means of developing, validating, and calibrating various aspects of the vulnerability component of the catastrophe (cat) model, in order to improve the credibility of the model outputs (see the sketch after this list)

  • The article discusses the different data sources involved in this development and validation process: insurance exposure and claim data; tax roll data; geographic information system data; elevation or topographic data; and hazard data from either observations or simulations
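
A minimal sketch of one way claims data can support this kind of validation: compare the mean damage ratio predicted by a vulnerability function against observed claim ratios binned by hazard intensity. The logistic curve, wind-speed bands, and claim records below are illustrative assumptions, not the FPHLM's actual functions or data.

```python
import math

def modeled_damage_ratio(wind_speed_mph: float) -> float:
    """Toy vulnerability curve: damage ratio rising with wind speed."""
    return 1.0 / (1.0 + math.exp(-(wind_speed_mph - 120.0) / 15.0))

# Each claim: (wind speed assigned to the site, paid loss / building value).
claims = [(95, 0.02), (110, 0.18), (112, 0.22), (130, 0.65), (145, 0.88)]

# Group claims into 20-mph bands and compare observed vs. modeled means.
bands = {}
for speed, observed_ratio in claims:
    band = int(speed // 20) * 20
    bands.setdefault(band, []).append((observed_ratio, modeled_damage_ratio(speed)))

for band in sorted(bands):
    obs = sum(o for o, _ in bands[band]) / len(bands[band])
    mod = sum(m for _, m in bands[band]) / len(bands[band])
    print(f"{band}-{band + 19} mph: observed {obs:.2f}, modeled {mod:.2f}")
```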

Summary

Introduction

Catastrophe (cat) models for man-made infrastructure have four main components: a hazard component, which models the hazard, for example, hurricanes or earthquakes; an exposure model, which categorizes the exposure, for example, buildings, into generic classes; a vulnerability component, which models the effects of the hazard on the exposure and defines vulnerability functions for each building (or other type of exposure) class; and an actuarial component, which combines the vulnerability, the hazard, and the exposure to quantify the risk in terms of physical damage, economic damage, or insured losses. Besides the insurance industry, users of cat models include economists (Michel-Kerjan et al. 2013), as well as disaster managers and city and emergency planners (Chian 2016; Biasi et al. 2017), where the focus is on emergency planning, post-disaster recovery, and increasingly on resilience studies (Muir-Wood and Stander 2016). In this case, the input can be databases of building or other infrastructure exposure from tax rolls or other sources, and the outputs are physical or monetary damage. Risk modelers must combine different sources of data to enhance the quality of the exposure and claim data, to improve the development and validation processes, and to reduce the resulting uncertainty in the model output. The article is centered on the Florida Public Hurricane Loss Model as a case study, but the findings are applicable to other cat models and to other types of hazards as well.
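
To make the interplay of the four components concrete, the following sketch computes an average annual insured loss by looping over simulated events (hazard), looking up each building's class and value (exposure), converting intensity to a damage ratio (vulnerability), and applying a deductible (actuarial). Every number, class name, and lookup table here is an illustrative assumption rather than the output of an actual cat model.

```python
# Exposure: building class, insured value, and deductible (hypothetical records).
exposure = [
    {"bldg_class": "masonry_gable", "value": 250_000, "deductible": 5_000},
    {"bldg_class": "frame_hip", "value": 180_000, "deductible": 2_500},
]

# Hazard: simulated events, each with an annual rate and, for brevity, a single
# wind speed applied to every site.
events = [
    {"rate": 0.020, "wind_mph": 110},
    {"rate": 0.005, "wind_mph": 150},
]

# Vulnerability: mean damage ratio per (building class, wind speed) pair.
vulnerability = {
    ("masonry_gable", 110): 0.05, ("masonry_gable", 150): 0.45,
    ("frame_hip", 110): 0.10, ("frame_hip", 150): 0.60,
}

# Actuarial: combine hazard, exposure, and vulnerability into an expected loss.
aal = 0.0
for event in events:
    for site in exposure:
        damage_ratio = vulnerability[(site["bldg_class"], event["wind_mph"])]
        ground_up_loss = damage_ratio * site["value"]
        insured_loss = max(ground_up_loss - site["deductible"], 0.0)
        aal += event["rate"] * insured_loss

print(f"Average annual insured loss: ${aal:,.0f}")
```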

Florida Public Hurricane Loss Model
Florida Public Hurricane Loss Model Vulnerability Models
Input Data for the Florida Public Hurricane Loss Model
National Flood Insurance Program Exposure Database
Wind Insurance Exposure Portfolios
National Flood Insurance Program Claims Database
Wind Insurance Claims Portfolios
Uncertainty in Building Characteristics
Uncertainty in Building Location
Uncertainty in Property Value
Uncertainty in Claim Adjustment Value
Uncertainty in Cause of Damage
Uncertainty in Hazard Intensity Measurement
Tax Appraiser Databases
Sources of Uncertainty in the Tax Appraiser Databases
Reformatting and Standardization of Building Attributes
Hazard Information
Federal Emergency Management Agency Estimates of Flood Elevations
Historical Reconstruction
Building and Contents Value Estimation
Geocoding and Integration of the Databases
Findings
Discussion
Conclusion and Recommendations