Abstract
We investigate the problem of stochastic, combinatorial multi-armed bandits where the learner only has access to bandit feedback and the reward function can be non-linear. We provide a general framework for adapting discrete offline approximation algorithms into sublinear $\alpha$-regret methods that only require bandit feedback, achieving $\mathcal{O}\left(T^\frac{2}{3}\log(T)^\frac{1}{3}\right)$ expected cumulative $\alpha$-regret dependence on the horizon $T$. The framework only requires the offline algorithms to be robust to small errors in function evaluation. The adaptation procedure does not even require explicit knowledge of the offline approximation algorithm -- the offline algorithm can be used as a black-box subroutine. To demonstrate the utility of the proposed framework, we apply it to multiple problems in submodular maximization, adapting approximation algorithms for cardinality and for knapsack constraints. The new CMAB algorithms for knapsack constraints outperform a full-bandit method developed for the adversarial setting in experiments with real-world data.
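The black-box adaptation described above can be illustrated with a minimal explore-then-commit sketch: whenever the offline algorithm queries the value of a super-arm, the wrapper plays that super-arm several times and returns the empirical mean, then commits to the returned solution for the remaining rounds. All names here (`adapt_offline_to_bandit`, `play`, the per-query sample count `m`) are illustrative assumptions, not the paper's notation.

```python
from statistics import mean

def adapt_offline_to_bandit(offline_alg, play, T, m):
    """Hypothetical explore-then-commit wrapper (sketch, not the paper's exact procedure).

    offline_alg: a black-box offline approximation algorithm that takes a
                 value oracle (set -> float) and returns a feasible solution.
    play:        pulls a super-arm once and returns a noisy reward
                 (the only feedback available in the full-bandit setting).
    T:           time horizon; m: plays per oracle query used to estimate values.
    """
    t = 0  # number of rounds consumed so far

    def noisy_oracle(S):
        # Simulate a value-oracle call from bandit feedback:
        # average m independent plays of the super-arm S.
        nonlocal t
        samples = []
        for _ in range(m):
            samples.append(play(S))
            t += 1
        return mean(samples)

    # Exploration: run the offline algorithm against the simulated oracle.
    solution = offline_alg(noisy_oracle)

    # Exploitation: commit to the returned solution for the remaining rounds.
    while t < T:
        play(solution)
        t += 1
    return solution
```

Choosing `m` trades off estimation error inside the offline algorithm against rounds left for exploitation; balancing the two is what yields the $T^{2/3}$-type regret scaling, provided the offline algorithm is robust to the small evaluation errors.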
URL
https://arxiv.org/abs/2301.13326