Abstract

Random features, such as randomized Fourier feature maps, are often used to accelerate the training and testing of kernel methods, especially on large datasets. Random feature maps are typically constructed with Monte Carlo or quasi-Monte Carlo methods that approximate the integral representations of shift-invariant kernels. However, such universally random (or quasi-random) generation of the feature-map parameters cannot guarantee optimal performance, because it introduces no bias toward the specific problem at hand. This paper therefore proposes a new random feature map construction method based on linear discriminant analysis, which is problem-dependent and hence improves the modeling performance of random features. Simulation results show that the proposed method remarkably enhances the original random features in both model complexity and generalization performance.
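
For context, the baseline the abstract refers to is the standard Monte Carlo construction of random Fourier features for a shift-invariant kernel. The sketch below illustrates that baseline only (in the style of Rahimi and Recht's random Fourier features for the RBF kernel), not the paper's LDA-based construction; the function name, parameters, and feature dimension are illustrative assumptions.

```python
import numpy as np

def random_fourier_features(X, n_features=500, gamma=1.0, rng=None):
    """Monte Carlo approximation of the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2) via its Fourier (spectral) representation,
    so that z(x) . z(y) ~= k(x, y)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density (Gaussian for the RBF kernel)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    # Uniform random phase offsets
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    # Feature map z(x) = sqrt(2 / D) * cos(x W + b)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Usage: the Gram matrix of the random features approximates the exact RBF kernel
X = np.random.default_rng(0).normal(size=(5, 3))
Z = random_fourier_features(X, n_features=2000, gamma=0.5, rng=0)
K_approx = Z @ Z.T
```

The paper's contribution, as stated in the abstract, is to replace this purely random choice of the parameters (W, b) with a problem-dependent construction based on linear discriminant analysis.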
