Abstract
This paper presents new and effective algorithms for learning kernels. In particular, as shown by our empirical results, these algorithms consistently outperform the so-called uniform combination solution that has proven to be difficult to improve upon in the past, as well as other algorithms for learning kernels based on convex combinations of base kernels in both classification and regression. Our algorithms are based on the notion of centered alignment which is used as a similarity measure between kernels or kernel matrices. We present a number of novel algorithmic, theoretical, and empirical results for learning kernels based on our notion of centered alignment. In particular, we describe efficient algorithms for learning a maximum alignment kernel by showing that the problem can be reduced to a simple QP and discuss a one-stage algorithm for learning both a kernel and a hypothesis based on that kernel using an alignment-based regularization. Our theoretical results include a novel concentration bound for centered alignment between kernel matrices, the proof of the existence of effective predictors for kernels with high alignment, both for classification and for regression, and the proof of stability-based generalization bounds for a broad family of algorithms for learning kernels based on centered alignment. We also report the results of experiments with our centered alignment-based algorithms in both classification and regression.
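The centered alignment measure and the maximum-alignment combination described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function names are ours, and `max_alignment_weights` drops the nonnegativity constraint of the paper's QP, in which case the maximizer has the closed form μ ∝ M⁻¹a (M is the Gram matrix of centered base kernels under the Frobenius inner product, a their alignments with the target yyᵀ).

```python
import numpy as np

def center_kernel(K):
    """Center a kernel matrix: Kc = H K H with H = I - (1/m) 11^T."""
    m = K.shape[0]
    H = np.eye(m) - np.full((m, m), 1.0 / m)
    return H @ K @ H

def centered_alignment(K1, K2):
    """Centered alignment: <K1c, K2c>_F / (||K1c||_F ||K2c||_F)."""
    K1c, K2c = center_kernel(K1), center_kernel(K2)
    return np.sum(K1c * K2c) / (np.linalg.norm(K1c) * np.linalg.norm(K2c))

def max_alignment_weights(kernels, y):
    """Weights of a linear kernel combination maximizing centered
    alignment with the target yy^T.  Simplified sketch: the paper's QP
    enforces nonnegative weights; dropping that constraint gives the
    closed-form solution mu proportional to M^{-1} a."""
    Kcs = [center_kernel(K) for K in kernels]
    target = np.outer(y, y)
    a = np.array([np.sum(Kc * target) for Kc in Kcs])
    M = np.array([[np.sum(Ki * Kj) for Kj in Kcs] for Ki in Kcs])
    v = np.linalg.solve(M, a)
    return v / np.linalg.norm(v)
```

Note that centered alignment is invariant to positive rescaling of either kernel, so the returned weights matter only up to their direction; the unconstrained optimum is always at least as aligned with the target as any single base kernel.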
URL
https://arxiv.org/abs/1203.0550