Multiple Kernel Learning via Multi-Epochs SVRG
Host Publication: 9th NIPS Workshop on Optimization for Machine Learning
Authors: M. Perez Gonzalez, M. Oveneke, D. Jiang and H. Sahli
Publication Date: Dec. 2016
Number of Pages: 5
This work proposes a multiple kernel learning (MKL) descent strategy based on multiple epochs of stochastic variance-reduced gradients (multi-epochs SVRG). The proposed strategy uses a constant step size that is coupled to the evolution of the kernel combination coefficients and is therefore corrected between epochs. This descent regime leads to an improved MKL bound that exhibits a linear dependency on the number of samples n and a sub-linear one on both the number of kernels F and the precision of the solution ε. In particular, for an Lp-norm MKL, the proposed method finds an ε-accurate solution with complexity O(F^(1/q) n log(1/ε)). This matches the optimal convergence rate reported for (non-accelerated) strongly convex objectives and improves over other state-of-the-art MKL solutions.
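To illustrate the variance-reduction mechanism underlying the method, the following is a minimal sketch of generic SVRG on a toy 1-D strongly convex least-squares problem. It is not the paper's MKL algorithm: the kernel combination coefficients and the between-epoch step-size correction are omitted, and the problem data (`a`, `b`) are hypothetical.

```python
import random

# Toy data: f_i(w) = 0.5 * (a[i] * w - b[i])**2, a 1-D strongly convex problem.
# Since b[i] = 2 * a[i] for all i, every component shares the minimiser w* = 2.
a = [1.0, 2.0, 3.0, 4.0]
b = [2.0, 4.0, 6.0, 8.0]
n = len(a)

def grad_i(w, i):
    """Gradient of the i-th component f_i at w."""
    return a[i] * (a[i] * w - b[i])

def full_grad(w):
    """Full gradient (1/n) * sum_i grad_i(w)."""
    return sum(grad_i(w, i) for i in range(n)) / n

def svrg(w0, step=0.01, epochs=30, inner=20):
    """Multi-epoch SVRG with a constant step size.

    Each epoch: (1) compute the full gradient at a snapshot point,
    (2) run `inner` stochastic steps using the variance-reduced
    gradient estimate, (3) take the last iterate as the next snapshot.
    """
    random.seed(0)
    w_snap = w0
    for _ in range(epochs):
        mu = full_grad(w_snap)          # full gradient at the snapshot
        w = w_snap
        for _ in range(inner):
            i = random.randrange(n)
            # Variance-reduced stochastic gradient: unbiased, and its
            # variance vanishes as both w and w_snap approach the optimum.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= step * g
        w_snap = w                      # last-iterate snapshot update
    return w_snap

w_hat = svrg(0.0)                       # approaches the minimiser w* = 2
```

The constant step size is what distinguishes SVRG from plain SGD, whose step size must decay to control gradient variance; here the snapshot correction term plays that role, which is what enables the linear (log(1/ε)) convergence rate cited above.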