Abstract
In this work, we consider learning sparse models in large-scale settings, where the number of samples and the feature dimension can grow as large as millions or billions. Two issues arise immediately in such a challenging scenario: (i) computational cost and (ii) memory overhead. In particular, the memory issue precludes a large volume of prior algorithms that are based on batch optimization techniques. To remedy this, we propose to learn sparse models such as Lasso in an online manner, where in each iteration only one randomly chosen sample is revealed and used to update a sparse iterate. Thereby, the memory cost is independent of the sample size, and gradient evaluation for a single sample is efficient. Perhaps surprisingly, we find that, with the same regularization parameter, the sparsity promoted by batch methods is not preserved in the online fashion. We analyze this interesting phenomenon and present several effective variants, including mini-batch methods and a hard thresholding based stochastic gradient algorithm. Extensive experiments are carried out on a public dataset, which support our findings and algorithms.
Abstract (translated)
In this paper, we consider learning sparse models in large-scale settings, where the number of samples and the feature dimension can grow to millions or billions. Two issues arise immediately in such a challenging scenario: (i) computational cost; (ii) memory overhead. In particular, the memory issue rules out a large number of prior algorithms based on batch optimization techniques. To address this problem, we propose to learn sparse models such as Lasso online, using only one randomly chosen sample in each iteration to update a sparse iterate. In this way, the memory cost is independent of the sample size, and evaluating the gradient on a single sample is efficient. Perhaps surprisingly, we find that, with the same parameter, the sparsity promoted by batch methods is not preserved in the online setting. We analyze this interesting phenomenon and present several effective variants, including mini-batch methods and a hard-thresholding-based stochastic gradient algorithm. Extensive experiments are conducted on a public dataset, supporting our conclusions and algorithms.
URL
https://arxiv.org/abs/2301.10958
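The abstract describes two online update schemes: a stochastic (proximal) gradient step on one randomly chosen sample followed by soft-thresholding, and a hard-thresholding variant that projects onto k-sparse vectors. The sketch below is not the paper's reference implementation; it is a minimal illustration under standard assumptions (a least-squares Lasso objective, a fixed step size `eta`, a sparsity level `k`, and a mini-batch size chosen for illustration only).

```python
# Minimal sketch of two online sparse-learning updates alluded to in the abstract,
# assuming the Lasso objective (1/2n) * ||y - X w||^2 + lam * ||w||_1.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (the soft-thresholding that promotes sparsity).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def hard_threshold(v, k):
    # Keep the k largest-magnitude coordinates, zero out the rest.
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def online_prox_sgd_lasso(X, y, lam=0.1, eta=0.01, epochs=5, batch_size=1, seed=0):
    # Stochastic proximal gradient: one (or a small mini-batch of) randomly chosen
    # sample(s) per iteration, followed by soft-thresholding of the iterate.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs * n // batch_size):
        i = rng.integers(0, n, size=batch_size)
        grad = X[i].T @ (X[i] @ w - y[i]) / batch_size  # gradient on the mini-batch only
        w = soft_threshold(w - eta * grad, eta * lam)
    return w

def sgd_hard_threshold(X, y, k=10, eta=0.01, epochs=5, seed=0):
    # Hard-thresholding variant: a plain stochastic gradient step on one sample,
    # followed by projection onto the set of k-sparse vectors.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs * n):
        i = rng.integers(0, n)
        grad = (X[i] @ w - y[i]) * X[i]
        w = hard_threshold(w - eta * grad, k)
    return w

if __name__ == "__main__":
    # Synthetic data with a 10-sparse ground truth, just to exercise both routines.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 100))
    w_true = np.zeros(100); w_true[:10] = 1.0
    y = X @ w_true + 0.01 * rng.standard_normal(500)
    print("prox-SGD nonzeros:", np.count_nonzero(online_prox_sgd_lasso(X, y)))
    print("HT-SGD  nonzeros:", np.count_nonzero(sgd_hard_threshold(X, y)))
```

Both routines use memory proportional to the feature dimension only, which reflects the abstract's point that per-iteration cost is independent of the sample size; the specific thresholds, step sizes, and batch sizes here are placeholders rather than the paper's tuned settings.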