Abstract

Epilepsy management relies on self-reported seizure diaries, despite evidence of widespread seizure underreporting. Wearable and implantable seizure detection devices are now becoming more widely available, yet there are no clear guidelines about what levels of accuracy are sufficient. This study aimed to simulate clinical use cases and identify the necessary level of accuracy for each. Using a realistic seizure simulator (CHOCOLATES), a ground truth was produced, which was then sampled to generate signals from simulated seizure detectors of various capabilities. Five use cases were evaluated: (1) randomized clinical trials (RCTs), (2) medication adjustment in clinic, (3) injury prevention, (4) sudden unexpected death in epilepsy (SUDEP) prevention, and (5) treatment of seizure clusters. We considered sensitivity (0%-100%), false alarm rate (FAR; 0-2/day), and device type (external wearable vs. implant) in each scenario. The RCT case was efficient for a wide range of wearable parameters, though implantable devices were preferred. Lower-accuracy wearables produced subtle shifts in the distribution of patients enrolled in RCTs, so higher sensitivity and lower FAR values were preferred. In the clinic case, a wide range of sensitivity, FAR, and device type yielded similar results. Injury prevention, SUDEP prevention, and seizure cluster treatment each required high sensitivity yet were minimally influenced by FAR. The choice of use case is paramount in determining acceptable accuracy levels for a wearable seizure detection device. We offer simulation results for determining and verifying utility for specific use cases and specific wearable parameters.
