Abstract

Risk can be defined as the product of the likelihood of a failure and the consequences of that failure. For fire protection and detection systems, the failure of a system to operate properly on demand can mean the difference between a fire with relatively minor consequences and one with major consequences. Consequently, the risk associated with fire protection and detection system failure is great if the reliability of the system is not controlled at a high level. A confounding factor in fire protection system reliability is the infrequent demand placed on these systems. In the event of a fire, fire protection and detection systems are expected to perform on demand despite years, and potentially decades, of inactivity. Inspection, testing, and maintenance (ITM) schedules are specified in various fire safety codes and standards, such as those published by the National Fire Protection Association (NFPA); however, most ITM schedules were developed from a consensus of engineering judgment constituting “good practice”, which then became code. Furthermore, when ITM schedules are modified in codes and standards, the changes are still largely based on a consensus process following the failure of a system that resulted in loss of property or fatalities. While this process has been effective in ensuring a relatively high level of fire protection system reliability, it is difficult to quantify the level of reliability actually being provided, and the question then becomes, “Do the currently specified ITM schedules in the NFPA standards provide for systems with 90% reliability, 99% reliability, or some other level of reliability?” If a different level of reliability is desired, how should engineers revise ITM schedules to achieve the specified level of reliability?
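As a minimal sketch of the risk definition used above, the relationship can be written as follows, where the symbols $P_f$ (probability that the system fails on demand) and $C$ (consequence of that failure) are introduced here for illustration and are not notation taken from the paper:

$$
R = P_f \times C
$$

Under this framing, a system maintained to 99% on-demand reliability contributes $P_f = 0.01$, an order of magnitude less risk than a system maintained to 90% reliability ($P_f = 0.10$) for the same consequence $C$, which is why the level of reliability delivered by a given ITM schedule matters.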
