Abstract

Since the advent of the implantable defibrillator over 3 decades ago, defibrillation testing has been used to assess whether the device will detect and treat a ventricular arrhythmia. Early in the development of this therapy, defibrillation testing was required because the energy needed to defibrillate a patient was highly dependent on the system (epicardial patches, coils, etc) and the output of the devices was limited. Current defibrillation systems have more efficacious waveforms and optimized electrodes that provide an increased safety margin. However, a small number of patients implanted with current defibrillators still have inadequate safety margins requiring revision of their systems. These are the patients one wants to identify with defibrillation testing. Animal experiments demonstrated that defibrillation efficacy is a continuous function of defibrillation energy.1 There is no threshold energy above which defibrillation always succeeds and below which it always fails. Creating a defibrillation efficacy curve requires a large number of defibrillation tests, which can be performed in animals but is impractical in humans. To deal with this problem, the defibrillation threshold (DFT) concept was born. The DFT is an energy level meant to represent a point high enough on the defibrillation efficacy curve that a shock given at an energy above the DFT (eg, 10 J) will have a very high likelihood of terminating a ventricular arrhythmia. Several methods have been used to determine the DFT. These include starting at a high energy where success is expected and performing repeated tests at lower energies until failure (step-down protocol), or starting at an energy near the expected 50% success point and performing repeated trials at higher or lower energy depending on the success or failure of the previous trial (binary search protocol). 
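The two DFT-determination protocols described above can be sketched in code. This is a minimal illustration, not a clinical algorithm: the sigmoidal success model (a Hill-type curve parameterized by an E50 and a steepness parameter), the starting energies, and the step sizes are all illustrative assumptions, not values from the source.

```python
import random


def shock_succeeds(energy_j, e50=8.0, c=4.0, rng=random):
    """Hypothetical sigmoidal (Hill-type) defibrillation success model.
    Success probability rises continuously with energy; e50 is the
    50%-success energy and c sets the steepness. Illustrative only."""
    p = energy_j**c / (energy_j**c + e50**c)
    return rng.random() < p


def step_down_dft(start_j=20.0, step_j=2.0, rng=random):
    """Step-down protocol: begin at an energy where success is expected
    and lower the energy after each success; report the lowest energy
    that still succeeded (None if even the first shock fails)."""
    energy = start_j
    lowest_success = None
    while energy > 0 and shock_succeeds(energy, rng=rng):
        lowest_success = energy
        energy -= step_j
    return lowest_success


def binary_search_dft(low_j=2.0, high_j=20.0, tol_j=1.0, rng=random):
    """Binary search protocol: start near the expected 50% success
    point and move to lower energy after a success, higher after a
    failure, narrowing the bracket around the threshold."""
    while high_j - low_j > tol_j:
        mid = (low_j + high_j) / 2
        if shock_succeeds(mid, rng=rng):
            high_j = mid   # success: probe a lower energy
        else:
            low_j = mid    # failure: probe a higher energy
    return high_j
```

Note that because defibrillation is probabilistic, both protocols return a noisy estimate: repeating either one on the same simulated patient can yield different "thresholds," which is precisely the point the editorial develops below.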
In an attempt to decrease the number of fibrillation trials, other criteria for a successful implant were defined. These include 2 successful shocks at 20 J or 1 successful shock at 20 or 14 J. The problem, however, is not to identify patients in whom the device will be successful, since it succeeds most of the time. The real question is how to identify those patients in whom the safety margin is inadequate and who thus require revision of their system. Which of these protocols is best at identifying such patients has never been tested, owing to the difficulty and safety concerns of performing repeated defibrillation tests in humans. In this issue of HeartRhythm, Smits et al2 approach this question from a different direction. To define the defibrillation efficacy curve in humans, they used data from 564 patients in the PainFree RxII clinical study to create lognormal distributions for the E50 and C parameters that define the shape of a defibrillation success curve3 as measured in animals. They then applied a Monte Carlo method to repetitively test simulated patients with multiple defibrillation testing protocols. They repeated this process thousands of times to characterize the distribution of outcomes and determine the efficacy of each protocol at identifying patients with inadequate safety margins. With this technique, the authors concluded that the optimal protocols were either 2 of 2 successes at 20 J or 1 of 1 success at 16 J. These protocols struck the best balance between sensitivity, positive predictive value, and number of inductions for identifying patients with inadequate safety margins. This analysis allowed multiple defibrillation testing schemes to be compared with each other in humans for the first time. A simulation such as this, however, yields more than this simple result. It also offers insight into the process of defibrillation testing that was not readily apparent from human studies. 
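The Monte Carlo approach described above can be sketched as follows. This is a hedged illustration of the general technique, not the study's implementation: the lognormal medians and sigmas, the Hill-type curve form, the 35 J maximum output, and the 95% "adequate margin" cutoff are all placeholder assumptions rather than the fitted PainFree RxII values.

```python
import math
import random

rng = random.Random(42)


def p_success(energy_j, e50, c):
    """Hill-type success curve parameterized by E50 and steepness C."""
    return energy_j**c / (energy_j**c + e50**c)


def draw_patient():
    """Draw one simulated patient's curve parameters from lognormal
    distributions. The medians (8 J, 4) and sigmas are illustrative
    placeholders, not the study's fitted values."""
    e50 = rng.lognormvariate(math.log(8.0), 0.4)  # 50%-success energy (J)
    c = rng.lognormvariate(math.log(4.0), 0.3)    # curve steepness
    return e50, c


def passes_protocol(e50, c, energy_j=16.0, required=1, attempts=1):
    """Apply an implant-test criterion, e.g. 1-of-1 success at 16 J."""
    successes = sum(rng.random() < p_success(energy_j, e50, c)
                    for _ in range(attempts))
    return successes >= required


def simulate(n=10_000, max_output_j=35.0, adequate_p=0.95):
    """Classify each simulated patient as truly adequate (success
    probability at maximum device output >= adequate_p), run the test
    protocol, and tally the two misclassification rates."""
    fn = fp = 0
    for _ in range(n):
        e50, c = draw_patient()
        adequate = p_success(max_output_j, e50, c) >= adequate_p
        passed = passes_protocol(e50, c)
        if not adequate and passed:
            fn += 1  # inadequate margin missed by testing (false negative)
        if adequate and not passed:
            fp += 1  # adequate patient fails testing by chance (false positive)
    return fn / n, fp / n
```

Repeating `simulate` with different `energy_j`, `required`, and `attempts` settings is what lets competing protocols (2 of 2 at 20 J, 1 of 1 at 16 J, and so on) be compared on sensitivity and predictive value without any additional fibrillation inductions in real patients.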
For instance, the data clearly demonstrate that patients with high DFTs by either a step-down or a binary search protocol would have a 62% chance of having a lower DFT on retesting without any intervention. This is due not only to regression to the mean (48%) with repeated testing but also to false negatives (14%): subjects misclassified as having low DFTs simply because testing succeeded by chance. Understanding this concept profoundly affects the interpretation of testing and retesting, with or without an intervention, in clinical studies. In addition, if the goal of testing is to identify patients with an inadequate safety margin, it is also important to quantitate the proportion of patients with adequate thresholds who will fail defibrillation testing simply by chance and thus be misclassified as having an inadequate safety margin (false positives). These classification errors are highlighted by this analysis.
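The retesting phenomenon described above can itself be reproduced in a small simulation: among simulated patients who fail an initial shock test, some fraction will pass an identical retest with no intervention at all, purely by chance. The curve form, lognormal parameters, and 16 J test energy below are illustrative assumptions, not the study's values, so the fraction produced will not match the 62% figure.

```python
import math
import random

rng = random.Random(7)


def p_success(energy_j, e50, c):
    """Hill-type success curve parameterized by E50 and steepness C."""
    return energy_j**c / (energy_j**c + e50**c)


def fails_test(e50, c, energy_j=16.0):
    """Single-shock test at energy_j; True if the shock fails."""
    return rng.random() >= p_success(energy_j, e50, c)


def retest_pass_fraction(n=50_000):
    """Among simulated patients who fail an initial test, return the
    fraction who pass an identical retest with no intervention.
    Lognormal parameters are placeholders, not fitted values."""
    failed_first = passed_retest = 0
    for _ in range(n):
        e50 = rng.lognormvariate(math.log(8.0), 0.4)
        c = rng.lognormvariate(math.log(4.0), 0.3)
        if fails_test(e50, c):
            failed_first += 1
            if not fails_test(e50, c):
                passed_retest += 1
    return passed_retest / max(failed_first, 1)
```

Because shock outcomes are draws from a probability curve rather than comparisons against a fixed threshold, a nonzero retest pass rate is expected even when nothing about the patient has changed, which is exactly why an apparent "improvement" after an intervention must be interpreted against this chance baseline.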
