The resolvent operator and the corresponding Green's function occupy a central position in differential and integral equations, operator theory, and modern physics in particular. In machine learning, however, when confronting complex and highly challenging real-world learning tasks, the power of the resolvent's Green's function has rarely been explored or exploited. This paper aims to go beyond conventional translation-invariant and rotation-invariant kernels through a theoretical investigation of a new way of constructing kernel functions by means of the resolvent operator and its Green's function. From a practical perspective, the newly developed kernel functions are applied to robust signal recovery from noise-corrupted data in the setting of linear-programming support vector learning. In particular, monotonic and non-monotonic activation functions are used in kernel design to improve representation capability. In this manner, kernel-based robust sparse learning gains a new dimension in two respects: first, a theoretical framework that bridges the gap between the mathematical subtleties of resolvent-operator and Green's-function theory and kernel construction; second, a concrete fusion of activation-function design in neural networks with nonlinear kernel design. Finally, the experimental study demonstrates the potential and superiority of the newly developed kernel functions in robust signal recovery and multiscale sparse modeling, as one step toward removing the apparent boundaries between modern signal processing and computational intelligence.
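As a minimal illustration of the resolvent-to-kernel construction (a standard textbook instance, not taken from the paper itself, which is assumed to develop more general variants): for the one-dimensional operator \(L = -\frac{d^{2}}{dx^{2}}\) on \(\mathbb{R}\), the Green's function of the resolvent \((L + \lambda I)^{-1}\) with \(\lambda > 0\) is defined by

\[
(L + \lambda I)\, G_\lambda(x, y) = \delta(x - y)
\quad\Longrightarrow\quad
G_\lambda(x, y) = \frac{1}{2\sqrt{\lambda}}\, e^{-\sqrt{\lambda}\,|x - y|},
\]

so that \(k(x, y) := G_\lambda(x, y)\) recovers the classical translation-invariant exponential (Laplacian) kernel, and varying \(\lambda\) yields a natural multiscale family of kernels.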