Abstract

Software-defined radio (SDR) provides significant advantages over traditional analog radio systems and is increasingly relied upon for “mission critical” applications. This reliance, together with the risks posed by hardware trojans, single-event upsets, and human error, creates a need for fault-tolerant systems. Fault tolerance has traditionally been implemented through redundancy, which incurs a substantial area overhead that is undesirable in most applications. Advancements in field-programmable gate array (FPGA) and system-on-a-chip (SoC) technologies have made implementing machine learning (ML) algorithms within embedded systems feasible. In this paper, we explore the use of ML to implement fault tolerance in an SDR. Our approach, which we call adaptive component-level degeneracy (ACD), uses an ML model to learn the functionality of an SDR component. Once trained, the model can detect when the component is compromised and mitigate the fault by substituting its own output. We demonstrate the ability of our model to learn multiple simulated SDR components, and we compare one-dimensional convolutional neural network and bidirectional recurrent neural network architectures at modeling time-series components. We also implement ACD within a real-time SDR system using GNU Radio Companion. The results show great potential for the use of ML techniques to improve embedded system reliability.
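The detect-and-mitigate loop described above can be sketched as follows. This is an illustrative outline only, not the paper's implementation: the component, the stand-in "trained model," and the deviation threshold are all hypothetical, and a real ACD deployment would use a trained 1D CNN or bidirectional RNN in place of the stub.

```python
import numpy as np

def acd_monitor(component, model, x, threshold=0.1):
    """Illustrative ACD sketch (hypothetical, not the paper's code).

    Compare the component's actual output against the trained model's
    prediction of that output; if the deviation exceeds `threshold`,
    flag the component as compromised and substitute the model's output.
    """
    actual = component(x)
    predicted = model(x)
    deviation = np.max(np.abs(actual - predicted))
    if deviation > threshold:
        return predicted, True   # fault detected; mitigate with model output
    return actual, False         # component behaving as learned

# Hypothetical SDR component: a simple moving-average filter.
def healthy_filter(x):
    return np.convolve(x, np.ones(4) / 4, mode="same")

def faulty_filter(x):
    return np.zeros_like(x)      # stuck-at-zero fault

# Stand-in for a model that has already learned the filter's behavior.
model = healthy_filter

x = np.sin(np.linspace(0, 2 * np.pi, 64))
out, fault = acd_monitor(healthy_filter, model, x)   # fault is False
out, fault = acd_monitor(faulty_filter, model, x)    # fault is True
```

In this sketch the monitor passes the healthy component's output through untouched, and only when the stuck-at-zero fault is injected does the model's prediction replace the component's output.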
