Efficient and robust algorithms for adversarial linear contextual bandits
Gergely Neu, Julia Olkhovskaya
Subject areas: Bandit problems, Online learning
Presented in: Session 2A, Session 2E
Abstract:
We consider an adversarial variant of the classic $K$-armed linear contextual bandit problem where the sequence of loss functions associated with each arm is allowed to change without restriction over time. Under the assumption that the $d$-dimensional contexts are generated i.i.d. at random from a known distribution, we develop computationally efficient algorithms based on the classic Exp3 algorithm. Our first algorithm, RealLinExp3, is shown to achieve a regret guarantee of order $\sqrt{KdT}$ over $T$ rounds, which matches the best available bound for this problem. Our second algorithm, RobustLinExp3, is shown to be robust to misspecification, in that it achieves a regret bound of order $(Kd)^{1/3}T^{2/3} + \varepsilon \sqrt{d} T$ if the true loss function is linear up to an additive nonlinear error uniformly bounded in absolute value by $\varepsilon$. To our knowledge, our performance guarantees constitute the first results for this problem setting.
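Both algorithms build on the classic Exp3 (exponential-weights) method mentioned in the abstract. As background, the following is a minimal sketch of vanilla Exp3 for the non-contextual $K$-armed adversarial bandit; all names and parameter choices here are illustrative and not taken from the paper, which extends this template with least-squares loss estimators for the linear contextual setting.

```python
# Illustrative sketch of the classic Exp3 algorithm (non-contextual case).
# The paper's RealLinExp3/RobustLinExp3 replace the importance-weighted
# loss estimate below with estimators adapted to linear contextual losses.
import math
import random


def exp3(K, T, eta, loss_fn, rng=random.Random(0)):
    """Run Exp3 for T rounds over K arms.

    loss_fn(t, arm) returns the adversarially chosen loss in [0, 1].
    Returns the total loss incurred by the learner.
    """
    weights = [1.0] * K
    total_loss = 0.0
    for t in range(T):
        z = sum(weights)
        probs = [w / z for w in weights]
        # Sample an arm from the exponential-weights distribution.
        arm = rng.choices(range(K), weights=probs)[0]
        loss = loss_fn(t, arm)
        total_loss += loss
        # Importance-weighted estimate: unbiased for the chosen arm's loss,
        # since only that arm's loss is observed (bandit feedback).
        est = loss / probs[arm]
        weights[arm] *= math.exp(-eta * est)
    return total_loss
```

With a well-tuned learning rate $\eta$, Exp3 attains $O(\sqrt{KT\log K})$ regret in this setting; the paper's contribution is achieving comparable efficiency and $\sqrt{KdT}$-type guarantees when each arm's loss is a changing linear function of a stochastic context.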