Abstract
Recently, the sonar signal processing community has seen increased interest in neural networks for nonlinear classification and regression. While innovative neural network architectures have achieved remarkable results in other fields such as computer vision and natural language processing, their application to sonar data presents challenges. The multiple nonlinearities in these networks make their results difficult to interpret. Recent works such as Grad-CAM and Layer-wise Relevance Propagation attempt to visualize empirically which features the network selects, but there has been little theoretical work explaining why neural networks perform so well. Recent work by Pilanci et al. has shown, however, that two-layer neural networks with a ReLU activation function can be reformulated as a convex optimization problem that can be solved to global optimality in polynomial time. Their results rely on the observation that ReLU neural networks are piecewise-linear models that lift the original data into a higher-dimensional feature space. Using a valid activation function, we show that this Convex Neural Network framework can be extended to applications where the input data is complex-valued. We evaluate our algorithm on simulated sonar data. [This talk will present research funded by DoD Navy (NEEC) Grant number N001742010016 and the ONR grant numbers N000142112420 and N000142312503.]
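To make the convex reformulation concrete, the sketch below illustrates the Pilanci-Ergen style convex program for a real-valued two-layer ReLU network: a random subsample of ReLU activation patterns D_j is enumerated, and a group-regularized least-squares problem is solved over the lifted features D_j X. This is an illustrative sketch only, not the authors' implementation; the toy data, the pattern-sampling count, and the regularization strength are assumptions, and the complex-valued extension presented in the talk is not shown.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Toy real-valued regression data (a stand-in for sonar features).
n, d = 50, 5
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Sample a subset of ReLU activation patterns via random hyperplanes.
P = 20
G = rng.standard_normal((d, P))
patterns = np.unique((X @ G >= 0).astype(float), axis=1)  # columns are distinct patterns
m = patterns.shape[1]

beta = 1e-3            # group-lasso regularization strength (assumed)
V = cp.Variable((d, m))  # weights for the "positive" branch of each pattern
W = cp.Variable((d, m))  # weights for the "negative" branch of each pattern

fit = 0
constraints = []
for j in range(m):
    Dj = np.diag(patterns[:, j])
    # Lifted, pattern-masked features contribute linearly to the prediction.
    fit = fit + Dj @ X @ (V[:, j] - W[:, j])
    # Each block must be consistent with its ReLU activation pattern.
    constraints += [(2 * Dj - np.eye(n)) @ X @ V[:, j] >= 0,
                    (2 * Dj - np.eye(n)) @ X @ W[:, j] >= 0]

objective = 0.5 * cp.sum_squares(fit - y) + beta * cp.sum(
    cp.norm(V, 2, axis=0) + cp.norm(W, 2, axis=0))

prob = cp.Problem(cp.Minimize(objective), constraints)
prob.solve()
print("optimal objective:", prob.value)

Because the objective is convex and the constraints are linear, any solver that reaches feasibility returns a global optimum, which is the interpretability advantage the abstract highlights over standard nonconvex training.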