Abstract

Lateral interaction in the biological brain is a key mechanism underlying higher cognitive functions. The self-organising map (SOM) introduces lateral interaction in a general form in which signals of any modality can be used. Some approaches directly incorporate SOM learning rules into neural networks, but they incur complex operations and extend poorly. An efficient way to implement lateral interaction in deep neural networks has not been well established. The authors propose Laplacian Matrix-based Smoothing (LS) regularisation as a concise implementation of lateral interaction. Their derivation and experiments show that lateral interaction as implemented by the SOM model is a special case of LS-regulated k-means, and that both exhibit topology-preserving capability. The authors also verify that LS regularisation can be used in conjunction with the end-to-end training paradigm in deep auto-encoders. In addition, they evaluate the benefits of LS regularisation in relaxing the requirement on parameter initialisation in various models and in improving the classification performance of prototype classifiers. Furthermore, the topologically ordered structure that LS regularisation introduces in the feature extractor improves generalisation performance on classification tasks. Overall, LS regularisation is an effective and efficient way to implement lateral interaction and can easily be extended to different models.
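The abstract describes LS-regulated k-means: a smoothness penalty built from the graph Laplacian of the prototype topology is added to the usual k-means objective, so that neighbouring prototypes on the grid are pulled towards each other, which yields the topology-preserving behaviour of a SOM. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes a 1-D chain topology over the prototypes, penalises tr(WᵀLW) = Σ‖wᵢ − wⱼ‖² over grid edges, and solves the regularised prototype update in closed form. All function names and the choice of topology are hypothetical.

```python
import numpy as np

def chain_laplacian(k):
    # Laplacian of a 1-D chain graph over k prototypes (assumed topology).
    L = np.zeros((k, k))
    for i in range(k - 1):
        L[i, i] += 1.0
        L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0
        L[i + 1, i] -= 1.0
    return L

def ls_kmeans(X, k, lam=1.0, iters=50, seed=0):
    # k-means with a Laplacian smoothing penalty on the prototype matrix W:
    #   minimise  sum_i ||x_i - w_{c(i)}||^2  +  lam * tr(W^T L W)
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    L = chain_laplacian(k)
    c = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assignment step: nearest prototype, as in plain k-means.
        dists = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=-1)
        c = dists.argmin(axis=1)
        # Update step: setting the gradient to zero gives the linear system
        #   (diag(counts) + lam * L) W = S,  S_j = sum of points assigned to j.
        counts = np.bincount(c, minlength=k).astype(float)
        S = np.zeros_like(W)
        np.add.at(S, c, X)
        W = np.linalg.solve(np.diag(counts) + lam * L, S)
    return W, c
```

With lam=0 the update reduces to ordinary k-means centroids; larger lam drags chain-adjacent prototypes together, mimicking the SOM's lateral neighbourhood interaction. The system matrix diag(counts) + lam*L is positive definite whenever at least one point is assigned, so the solve is always well posed even if some prototypes receive no points.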
