Publication Details
Overview
 
 
Mitchel Perez Gonzalez, Dongmei Jiang, Hichem Sahli
 

Chapter in Book / Report / Conference proceeding

Abstract 

This work proposes a multiple kernel learning (MKL) descent strategy based on multiple epochs of stochastic variance reduced gradient (i.e., multi-epoch SVRG). The proposed strategy uses a constant-size learning step that is coupled to the evolution of the kernel combination coefficients, and is therefore corrected between epochs. This descent regime leads to an improved MKL bound that exhibits a linear dependency on the number of samples n, and a sub-linear one on both the number of kernels F and the precision of the solution ε. In particular, for an Lp-norm MKL, the proposed method finds an ε-accurate solution with complexity O(F^(1/q) n log(1/ε)). This matches the optimal convergence rate reported for (non-accelerated) strongly convex objectives and improves over other state-of-the-art MKL solutions.
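The multi-epoch SVRG scheme the abstract refers to can be illustrated with a plain (non-MKL) sketch: each epoch anchors a full gradient at a snapshot point, then takes variance-reduced stochastic steps with a constant step size, refreshing the snapshot between epochs. This is a generic SVRG sketch only; it omits the paper's kernel-coefficient coupling and step correction, and all names (`svrg`, `grad_i`) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def svrg(grad_i, w0, n, lr, n_epochs, m, rng):
    """Generic multi-epoch SVRG sketch (illustrative, not the paper's
    exact method): grad_i(w, i) returns the gradient of the i-th
    sample's loss at w; m is the number of inner steps per epoch."""
    w_snap = np.asarray(w0, dtype=float).copy()
    w = w_snap.copy()
    for _ in range(n_epochs):
        # Anchor gradient at the snapshot (one full pass over the data).
        full_grad = sum(grad_i(w_snap, i) for i in range(n)) / n
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced estimate: unbiased, and its variance
            # shrinks as w and w_snap approach the optimum, which is
            # what permits a constant learning step.
            g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            w = w - lr * g
        w_snap = w.copy()  # between epochs: refresh the snapshot
    return w

# Usage on a small noiseless least-squares problem.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 2))
w_star = np.array([1.0, -2.0])
b = A @ w_star
grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]
w = svrg(grad_i, np.zeros(2), n=50, lr=0.02, n_epochs=20, m=100, rng=rng)
```

On this strongly convex problem the iterates converge linearly to the exact minimizer, matching the O(n log(1/ε)) dependence on n and ε stated in the abstract.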

Reference 
 
 
Link