Covariance-adapting algorithm for semi-bandits with application to sparse outcomes
Pierre Perrault, Vianney Perchet, Michal Valko
Subject areas: Bandit problems
Presented in: Session 2E, Session 3A
Abstract:
We investigate \emph{stochastic combinatorial semi-bandits}, where the entire joint distribution of outcomes impacts the complexity of the problem instance (unlike in standard bandits). Typical distributions considered depend on specific parameter values whose prior knowledge is required in theory but is quite difficult to obtain in practice; an example is the commonly assumed \emph{sub-Gaussian} family. We alleviate this issue by instead considering a new general family of \emph{sub-exponential} distributions, which contains both bounded and Gaussian ones. We prove a new lower bound on the regret for this family, parameterized by the \emph{unknown} covariance matrix, a tighter quantity than the sub-Gaussian matrix. We then construct an algorithm that uses covariance estimates and provide a tight asymptotic analysis of its regret. Finally, we apply and extend our results to the family of sparse outcomes, which has applications in many recommender systems.
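The abstract does not spell out the algorithm, so the following is only a minimal illustrative sketch, not the paper's method: a generic optimistic semi-bandit loop in which empirical means and pairwise covariance estimates built from semi-bandit feedback drive the exploration bonus. The action set, the index form, and all constants below are assumptions chosen purely for illustration.

```python
# Minimal illustrative sketch (NOT the paper's algorithm): a covariance-adaptive
# optimistic index for combinatorial semi-bandits. All names and constants here
# are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)

d = 5                                            # number of base arms
actions = [np.array(a, dtype=float) for a in     # feasible subsets as 0/1 vectors (assumed)
           ((1, 1, 0, 0, 0), (0, 0, 1, 1, 0), (1, 0, 1, 0, 1))]
mu_true = np.array([0.6, 0.5, 0.4, 0.7, 0.2])    # unknown means (simulation only)
cov_true = 0.05 * (np.eye(d) + 0.3)              # unknown, correlated covariance (simulation only)

T = 2000
pair_count = np.zeros((d, d))                    # rounds where arms i and j were both observed
sum_x = np.zeros(d)                              # sums of observed outcomes
sum_xx = np.zeros((d, d))                        # sums of observed products x_i * x_j

def optimistic_index(a, t):
    n = np.maximum(np.diag(pair_count), 1.0)
    m = np.maximum(pair_count, 1.0)
    mu_hat = sum_x / n
    cov_hat = sum_xx / m - np.outer(mu_hat, mu_hat)   # pairwise empirical covariance
    var = a @ (cov_hat / m) @ a                       # variance proxy for the action's mean reward
    return a @ mu_hat + np.sqrt(2.0 * np.log(t) * max(var, 0.0))

for t in range(1, T + 1):
    if t <= len(actions):                        # play each action once to initialize estimates
        a = actions[t - 1]
    else:
        a = max(actions, key=lambda act: optimistic_index(act, t))

    x = rng.multivariate_normal(mu_true, cov_true)    # latent outcome vector
    obs = a > 0                                       # semi-bandit feedback: only chosen arms observed
    pair_count += np.outer(obs, obs)
    sum_x[obs] += x[obs]
    sum_xx += np.outer(obs * x, obs * x)

print("empirical mean estimates:",
      np.round(sum_x / np.maximum(np.diag(pair_count), 1.0), 3))
```

In this sketch the exploration bonus scales with the quadratic form $a^\top \hat{\Sigma} a$ of the estimated covariance, which reflects the abstract's point that the unknown covariance matrix is a tighter quantity to adapt to than a worst-case sub-Gaussian proxy.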