Algorithmic game theory in LLM alignment

Michal Valko (Meta Paris, Inria, ENS Paris-Saclay)

July 1 at 1:30pm

Abstract & Bio
Abstract: Reinforcement learning from human feedback (RLHF) is a go-to solution for aligning large language models (LLMs) with human preferences; it proceeds by learning a reward model that is subsequently used to optimize the LLM's policy. However, an inherent limitation of current reward models is their inability to fully represent the richness of human preferences and their dependency on the sampling distribution. In the first part, we turn to an alternative pipeline for the fine-tuning of LLMs using pairwise human feedback. Our approach entails first learning a preference model, conditioned on two inputs given a prompt, and then pursuing a policy that consistently generates responses preferred over those generated by any competing policy, thus defining the Nash equilibrium of this preference model. We term this approach Nash learning from human feedback (NLHF) and give a new algorithmic solution, Nash-MD, founded on the principles of mirror descent. NLHF is compelling for preference learning and policy optimization, with the potential of advancing the field of aligning LLMs with human preferences. In the second part of the talk, we delve into a deeper theoretical understanding of fine-tuning approaches such as RLHF with PPO and offline fine-tuning with DPO (direct preference optimization) based on the Bradley-Terry model, and arrive at a new class of LLM alignment algorithms with better practical and theoretical properties. We finish with the newest work that shows links between these approaches and builds on top of them.
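For orientation, here is a minimal sketch of the standard formulations the abstract refers to; the notation (a prompt x, responses y and y', a reward model r, the logistic function σ, and a preference model P) is ours, not necessarily the speaker's. The Bradley-Terry model behind RLHF reward learning and DPO scores pairwise preferences as

\[
  \Pr(y \succ y' \mid x) \;=\; \sigma\big(r(x, y) - r(x, y')\big),
\]

whereas NLHF works with a general preference model \(P(y \succ y' \mid x)\) and seeks a policy preferred over any competing policy, i.e. a Nash equilibrium of the two-player game

\[
  \pi^\star \;\in\; \arg\max_{\pi}\,\min_{\pi'}\; \mathbb{E}_{x,\; y \sim \pi(\cdot \mid x),\; y' \sim \pi'(\cdot \mid x)}\big[\, P(y \succ y' \mid x) \,\big].
\]

Nash-MD, as presented in the talk, approximates this equilibrium with regularized mirror-descent updates on the policy.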

Bio: Michal is a principal Llama engineer at Meta Paris, a tenured researcher at Inria, and a lecturer in the MVA master's program at ENS Paris-Saclay. Michal is primarily interested in designing algorithms that require as little human supervision as possible. That is why he works on methods and settings that can deal with minimal feedback, such as deep reinforcement learning, bandit algorithms, self-supervised learning, or self-play. Michal has recently worked on representation learning, world models, and deep (reinforcement) learning algorithms that have some theoretical underpinning. In the past, he has also worked on sequential algorithms with structured decisions, where exploiting the structure leads to provably faster learning. Michal is now working on large language models (LLMs), in particular providing algorithmic solutions for their scalable fine-tuning and alignment. He received his Ph.D. in 2011 from the University of Pittsburgh under the supervision of Miloš Hauskrecht and was a postdoc of Rémi Munos before getting tenure at Inria in 2012 and starting Google DeepMind Paris in 2018.