Abstract

The distribution of attributes assigned using data on independent sensors for a specific source, for example, magnitude, can be richly descriptive for final event characterization and associated uncertainty. Attribute distributions can also provide powerful context for event characterization in the absence of comprehensive annotation. This work develops a way to leverage distributional information across a set of sensors in the absence of comprehensive annotation as a domain‐informed regularization term applied during gradient‐based learning. The regularization term is the basis of event‐based training, which I show can be a powerful semi‐supervised learning (SSL) approach. I first use a simple feed‐forward neural network and a toy data set to outline how data set structure interacts with the assumptions inherent to many semi‐supervised learning approaches. I then demonstrate the effectiveness of event‐based training using a deep convolutional neural network for seismic event classification in Utah, increasing SSL accuracy from 92% to 97% on event classification with a limited number of training labels.
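
To make the idea of a domain‐informed regularization term concrete, the sketch below shows one plausible way such a term could be combined with a standard supervised loss during gradient‐based learning: per‐sensor predictions that belong to the same (unlabeled) event are encouraged to agree with the event‐level aggregate. This is only an illustrative assumption; the function names (`event_consistency_loss`, `total_loss`), the squared‐deviation penalty, and the weighting factor `lam` are hypothetical and are not taken from the paper, which specifies its regularizer in the full text.

```python
import torch
import torch.nn.functional as F


def event_consistency_loss(logits, event_ids):
    """Hypothetical event-based regularizer: penalize disagreement among
    per-sensor predictions that share the same event identifier."""
    probs = F.softmax(logits, dim=1)
    unique_events = event_ids.unique()
    loss = torch.zeros((), device=logits.device)
    for eid in unique_events:
        mask = event_ids == eid
        p = probs[mask]                          # predictions from all sensors of one event
        mean_p = p.mean(dim=0, keepdim=True)     # event-level aggregate distribution
        # squared deviation of each sensor's prediction from the aggregate
        loss = loss + ((p - mean_p) ** 2).sum(dim=1).mean()
    return loss / unique_events.numel()


def total_loss(labeled_logits, labels, unlabeled_logits, unlabeled_event_ids, lam=0.5):
    """Supervised cross-entropy on the labeled subset plus the event-based
    consistency term on unlabeled, event-grouped sensor data."""
    supervised = F.cross_entropy(labeled_logits, labels)
    regularizer = event_consistency_loss(unlabeled_logits, unlabeled_event_ids)
    return supervised + lam * regularizer
```

In this reading, the unlabeled portion of the data contributes to training only through the event grouping, which is what allows the approach to act as a semi‐supervised method when comprehensive annotation is unavailable.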
