Accepted Papers

Proceedings

    • Towards a Complete Analysis of Langevin Monte Carlo: Beyond Poincaré Inequality
      Alireza Mousavi-Hosseini (University of Toronto, Vector Institute)*; Tyler K Farghly (University of Oxford); Ye He (University of California, Davis); Krishna Balasubramanian (University of California, Davis); Murat A Erdogdu (University of Toronto, Vector Institute)
    • Improved Discretization Analysis for Underdamped Langevin Monte Carlo
      Shunshi Zhang (University of Toronto, Vector Institute)*; Sinho Chewi (Massachusetts Institute of Technology); Mufan Li (University of Toronto); Krishna Balasubramanian (University of California, Davis); Murat A Erdogdu (University of Toronto, Vector Institute)
    • The One-Inclusion Graph Algorithm is not Always Optimal
      Ishaq Aden-Ali (UC Berkeley); Yeshwanth Cherapanamjeri (UC Berkeley)*; Abhishek Shetty (Cornell University); Nikita Zhivotovskiy (Google)
    • Beyond Uniform Smoothness: A Stopped Analysis of Adaptive SGD
      Matthew Faw (University of Texas at Austin)*; Litu Rout (UT Austin); Constantine Caramanis (University of Texas); Sanjay Shakkottai (University of Texas at Austin)
    • Convergence of AdaGrad for Non-convex Objectives: Simple Proofs and Relaxed Assumptions
      Bohan Wang (University of Science and Technology of China)*; Huishuai Zhang (Microsoft Research Asia); Zhiming Ma (Chinese Academy of Sciences); Wei Chen (Institute of Computing Technology, Chinese Academy of Sciences)
    • Stability and Generalization of Stochastic Optimization with Nonconvex and Nonsmooth Problems
      Yunwen Lei (Hong Kong Baptist University)*
    • The Sample Complexity of Approximate Rejection Sampling With Applications to Smoothed Online Learning
      Adam Block (MIT)*; Yury Polyanskiy (MIT)
    • Online Learning and Solving Infinite Games with an ERM Oracle
      Yuval Dagan (MIT)*; Constantinos Daskalakis (MIT); Angelos Assos (MIT); Idan Attias (Ben-Gurion University); Maxwell K Fishelson (MIT)
    • Online Learning in Dynamically Changing Environments
      Changlong Wu (Purdue University)*; Ananth Grama (Purdue University); Wojciech Szpankowski (Purdue University)
    • Accelerated Riemannian Optimization: Handling Constraints with a Prox to Bound Geometric Penalties
      David Martínez-Rubio (ZIB)*; Sebastian Pokutta (ZIB)
    • Bregman Deviations of Generic Exponential Families
      Sayak Ray Chowdhury (Indian Institute of Science); Patrick Saux (Inria)*; Odalric Maillard (Inria); Aditya Gopalan (Indian Institute of Science (IISc), Bangalore)
    • Bagging is an Optimal PAC Learner
      Kasper Green Larsen (Aarhus University)*
    • Community Detection in the Hypergraph SBM: Optimal Recovery Given the Similarity Matrix
      Julia Gaudio (Northwestern University); Nirmit Joshi (Toyota Technological Institute at Chicago)*
    • Find a witness or shatter: the landscape of computable PAC learning
      Valentino Delle Rose (Universidad Católica de Chile); Alexander Kozachinskiy (Universidad Católica de Chile); Cristobal Rojas (IMC - Universidad Catolica); Tomasz Steifer (Universidad Católica de Chile)*
    • Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds
      Han Bao (Kyoto University)*
    • Beyond Parallel Pancakes: Quasi-Polynomial Time Guarantees for Non-Spherical Gaussian Mixtures
      Rares-Darius Buhai (ETH Zurich)*; David Steurer (ETH Zurich)
    • Online Reinforcement Learning in Stochastic Continuous-Time Systems
      Mohamad Kazem Shirani Faradonbeh (University of Georgia)*; Mohamad Sadegh Shirani Faradonbeh (Stanford University)
    • Best-of-three-worlds Analysis for Linear Bandits with Follow-the-regularized-leader Algorithm
      Fang Kong (Shanghai Jiao Tong University); Canzhe Zhao (Shanghai Jiao Tong University); Shuai Li (Shanghai Jiao Tong University)*
    • Private Online Prediction from Experts: Separations and Faster Rates
      Hilal Asi (Apple)*; Vitaly Feldman (Apple); Tomer Koren (Tel Aviv University and Google); Kunal Talwar (Apple)
    • Improved Bounds for Multi-task Learning with Trace Norm Regularization
      Weiwei Liu (Wuhan University)*
    • Local Glivenko-Cantelli
      Doron Cohen (Ben-Gurion University of the Negev); Aryeh Kontorovich (Ben-Gurion University of the Negev)*
    • Non-asymptotic convergence bounds for Sinkhorn iterates and their gradients: a coupling approach
      Giacomo Greco (Eindhoven University of Technology)*; Maxence Noble (Ecole Polytechnique); Giovanni Conforti (École Polytechnique); Alain Durmus (Ecole polytechnique)
    • Multitask Learning via Shared Features: Algorithms and Hardness
      Konstantina Bairaktari (Northeastern University)*; Guy Blanc (Stanford University); Li-Yang Tan (Stanford University); Jonathan Ullman (Northeastern University); Lydia Zakynthinou (Northeastern University)
    • Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension
      Yuval Filmus (Technion); Steve Hanneke (Purdue University); Idan Mehalel (Technion)*; Shay Moran (Technion)
    • Uniqueness of BP fixed point for the Potts model and applications to community detection
      Yuzhou Gu (Massachusetts Institute of Technology)*; Yury Polyanskiy (MIT)
    • Weak Recovery Threshold for the Hypergraph Stochastic Block Model
      Yuzhou Gu (Massachusetts Institute of Technology)*; Yury Polyanskiy (MIT)
    • Statistical-Computational Tradeoffs in Mixed Sparse Linear Regression
      Gabriel Arpino (University of Cambridge)*; Ramji Venkataramanan (University of Cambridge)
    • Semidefinite programs simulate approximate message passing robustly
      Mikhail S Ivkov (Stanford University)*; Tselil Schramm (Stanford University)
    • VO$Q$L: Towards Optimal Regret in Model-free RL with Nonlinear Function Approximation
      Alekh Agarwal (Google); Yujia Jin (Stanford University)*; Tong Zhang (Google)
    • Nearly Optimal Algorithms for Testing and Learning Quantum Junta Channels
      Zongbo Bao (Nanjing University)*; Penghui Yao (Nanjing University)
    • Repeated Bilateral Trade Against a Smoothed Adversary
      Nicolò Cesa-Bianchi (University of Milan); Tommaso R. Cesari (University of Ottawa); Roberto Colomboni (Università degli Studi di Milano & Istituto Italiano di Tecnologia); Federico Fusco (Sapienza University of Rome)*; Stefano Leonardi (Sapienza University of Rome)
    • On the Existence of a Complexity in Fixed Budget Bandit Identification
      Rémy Degenne (Inria)*
    • Over-Parameterization Exponentially Slows Down Gradient Descent for Learning a Single Neuron
      Weihang Xu (Tsinghua University)*; Simon Du (University of Washington)
    • From high-dimensional & mean-field dynamics to dimensionless ODEs: A unifying approach to SGD in two-layers networks
      Luca Arnaboldi (EPFL); Ludovic Stephan (EPFL)*; Florent Krzakala (EPFL); Bruno Loureiro (EPFL)
    • Orthogonal Directions Constrained Gradient Method: from non-linear equality constraints to Stiefel manifold
      Sholom Schechtman (Télécom SudParis)*; Daniil Tiapkin (HSE University); Eric Moulines (Ecole Polytechnique); Michael Muehlebach (Max Planck Institute for Intelligent Systems)
    • Projection-free Online Exp-concave Optimization
      Dan Garber (Technion)*; Ben Kretzu (Technion)
    • A Unified Analysis of Nonstochastic Delayed Feedback for Combinatorial Semi-Bandits, Linear Bandits, and MDPs
      Dirk van der Hoeven (University of Amsterdam)*; Lukas Zierahn (University of Milan); Tal Lancewicki (Tel-Aviv University); Aviv Rosenberg (Amazon Science); Nicolò Cesa-Bianchi (University of Milan)
    • Instance-Optimality in Interactive Decision Making: Toward a Non-Asymptotic Theory
      Andrew J Wagenmaker (University of Washington)*; Dylan J Foster (Microsoft Research)
    • Improved dimension dependence of a proximal algorithm for sampling
      Jiaojiao Fan (Georgia Institute of Technology)*; Bo Yuan (Georgia Institute of Technology); Yongxin Chen (Georgia Institute of Technology)
    • Private Covariance Approximation and Eigenvalue-Gap Bounds for Complex Gaussian Perturbations
      Oren Mangoubi (WPI); Nisheeth K. Vishnoi (Yale University)*
    • Exponential Hardness of Reinforcement Learning with Linear Function Approximation
      Sihan Liu (University of California San Diego)*; Gaurav Mahajan (University of California, San Diego); Daniel Kane (UCSD); Shachar Lovett (University of California San Diego); Gellert Weisz (DeepMind / UCL); Csaba Szepesvari (DeepMind/University of Alberta)
    • Oracle-Efficient Smoothed Online Learning for Piecewise Continuous Decision Making
      Adam Block (MIT)*; Max Simchowitz (MIT); Alexander Rakhlin (MIT)
    • Learning and Testing Latent-Tree Ising Models Efficiently
      Vardis Kandiros (MIT)*; Constantinos Daskalakis (MIT); Yuval Dagan (MIT); Davin Choo (National University of Singapore)
    • Universality of Langevin Diffusion for Private Optimization, with Applications to Sampling from Rashomon Sets
      Arun Ganesh (Google)*; Abhradeep Thakurta (Google); Jalaj Upadhyay (Rutgers University)
    • Complexity of High-Dimensional Identity Testing with Coordinate Conditional Sampling
      Antonio Blanca (Penn State)*; Zongchen Chen (MIT); Daniel Stefankovic (University of Rochester); Eric Vigoda (UCSB)
    • Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear Bandit Algorithms
      Osama A Hanna (UCLA)*; Lin Yang (UCLA); Christina Fragouli (UCLA)
    • Quantum Channel Certification with Incoherent Measurements
      Omar Fawzi (ENS Lyon); Nicolas Flammarion (EPFL); Aurélien Garivier (ENS Lyon); Aadil Oufkir (ENSL)*
    • List Online Classification
      Shay Moran (Technion); Ohad Sharon (Technion); Iska Tsubari (Technion)*; Sivan Yosebashvili (Technion)
    • InfoNCE Loss Provably Learns Cluster-Preserving Representations
      Advait Parulekar (University of Texas at Austin)*; Liam Collins (University of Texas at Austin); Karthikeyan Shanmugam (Google); Aryan Mokhtari (UT Austin); Sanjay Shakkottai (University of Texas at Austin)
    • Online Learning Guided Curvature Approximation: A Quasi-Newton Method with Global Non-Asymptotic Superlinear Convergence
      Ruichen Jiang (UT Austin)*; Qiujiang Jin (University of Texas at Austin); Aryan Mokhtari (UT Austin)
    • Exploring Local Norms in Exp-concave Statistical Learning
      Nikita Puchkin (HSE University and IITP RAS)*; Nikita Zhivotovskiy (Google)
    • Learning Hidden Markov Models Using Conditional Samples
      Gaurav Mahajan (University of California, San Diego)*; Sham Kakade (Harvard University); Akshay Krishnamurthy (Microsoft); Cyril Zhang (Microsoft Research)
    • A Second-Order Method for Stochastic Bandit Convex Optimisation
      Tor Lattimore (DeepMind)*; Andras Gyorgy (DeepMind)
    • A Lower Bound for Linear and Kernel Regression with Adaptive Covariates
      Tor Lattimore (DeepMind)*
    • Provable Benefits of Representational Transfer in Reinforcement Learning
      Alekh Agarwal (Google); Yuda Song (Carnegie Mellon University); Wen Sun (Cornell University); Kaiwen Wang (Cornell University)*; Mengdi Wang (Princeton University/DeepMind); Xuezhou Zhang (Princeton)
    • Geodesically convex $M$-estimation in metric spaces
      Victor-Emmanuel Brunel (ENSAE ParisTech)*
    • Information-Computation Tradeoffs for Learning Margin Halfspaces with Random Classification Noise
      Ilias Diakonikolas (University of Wisconsin-Madison); Jelena Diakonikolas (University of Wisconsin-Madison); Daniel M Kane (UCSD); Puqian Wang (University of Wisconsin-Madison); Nikos Zarifis (University of Wisconsin-Madison)*
    • Tighter PAC-Bayes Bounds Through Coin-Betting
      Kyoungseok Jang (University of Arizona); Kwang-Sung Jun (University of Arizona)*; Ilja Kuzborskij (DeepMind); Francesco Orabona (Boston University)
    • Inference on Strongly Identified Functionals of Weakly Identified Functions
      Andrew Bennett (Cornell University)*; Nathan Kallus (Cornell University); Xiaojie Mao (Tsinghua University); Whitney Newey (MIT); Vasilis Syrgkanis (Stanford University); Masatoshi Uehara (Cornell University)
    • Breaking the Lower Bound with (Little) Structure: Acceleration in Non-Convex Stochastic Optimization with Heavy-Tailed Noise
      Zijian Liu (New York University)*; Jiawei Zhang (New York University); Zhengyuan Zhou (New York University)
    • Minimax Instrumental Variable Regression and $L_2$ Convergence Guarantees without Identification or Closedness
      Andrew Bennett (Cornell University)*; Nathan Kallus (Cornell University); Xiaojie Mao (Tsinghua University); Whitney Newey (MIT); Vasilis Syrgkanis (Stanford University); Masatoshi Uehara (Cornell University)
    • SQ Lower Bounds for Learning Mixtures of Separated and Bounded Covariance Gaussians
      Ilias Diakonikolas (University of Wisconsin-Madison); Daniel M. Kane (UCSD); Thanasis Pittas (University of Wisconsin-Madison)*; Nikos Zarifis (University of Wisconsin-Madison)
    • Allocating Divisible Resources on Arms with Unknown and Random Rewards
      Wenhao Li (University of Toronto)*; Ningyuan Chen (University of Toronto)
    • Semi-Random Sparse Recovery in Nearly-Linear Time
      Jonathan Kelner (MIT); Jerry Li (Microsoft); Allen X Liu (MIT); Aaron Sidford (Stanford); Kevin Tian (Microsoft Research)*
    • Algorithmic Aspects of the Log-Laplace Transform and a Non-Euclidean Proximal Sampler
      Sivakanth Gopi (Microsoft Research); Yin Tat Lee (UW); Daogao Liu (University of Washington); Ruoqi Shen (University of Washington); Kevin Tian (Microsoft Research)*
    • Detection-Recovery Gap for Planted Dense Cycles
      Cheng Mao (Georgia Institute of Technology)*; Alexander S Wein (UC Davis); Shenduo Zhang (Georgia Institute of Technology)
    • Differentially Private Algorithms for the Stochastic Saddle Point Problem with Optimal Rates for the Strong Gap
      Raef Bassily (The Ohio State University); Cristobal Guzman (Pontificia Universidad Católica de Chile); Michael Menart (The Ohio State University)*
    • Resolving the Mixing Time of the Langevin Algorithm to its Stationary Distribution for Log-Concave Sampling
      Jason Altschuler (MIT)*; Kunal Talwar (Apple)
    • A Pretty Fast Algorithm for Adaptive Private Mean Estimation
      Rohith Kuditipudi (Stanford University)*; John Duchi (Stanford University); Saminul Haque (Stanford University)
    • SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics
      Emmanuel Abbe (EPFL); Enric Boix Adsera (MIT)*; Theodor Misiakiewicz (Stanford University)
    • Optimal Scoring Rules for Multi-dimensional Effort
      Jason D. Hartline (Northwestern University); Liren Shan (Northwestern University); Yingkai Li (Yale University); Yifan Wu (Northwestern University)*
    • Breaking the Curse of Multiagents in a Large State Space: RL in Markov Games with Independent Linear Function Approximation
      Qiwen Cui (University of Washington)*; Kaiqing Zhang (University of Maryland, College Park); Simon Du (University of Washington)
    • Best-of-Three-Worlds Linear Bandit Algorithm with Variance-Adaptive Regret Bounds
      Shinji Ito (NEC Corporation)*; Kei Takemura (NEC Corporation)
    • On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring
      Dean Foster (Amazon); Dylan J Foster (Microsoft Research); Noah Golowich (Massachusetts Institute of Technology)*; Alexander Rakhlin (MIT)
    • Breaking the Curse of Multiagency: Provably Efficient Decentralized Multi-Agent RL with Function Approximation
      Yuanhao Wang (Princeton University); Qinghua Liu (Princeton University); Yu Bai (Salesforce Research)*; Chi Jin (Princeton University)
    • Tight bounds on the hardness of learning simple nonparametric mixtures
      Wai Ming Tai (University of Chicago)*; Bryon Aragam (University of Chicago)
    • Information-Directed Selection for Top-Two Algorithms
      Wei You (The Hong Kong University of Science and Technology)*; Chao Qin (Columbia University); Zihao Wang (The Hong Kong University of Science and Technology); Shuoguang Yang (The Hong Kong University of Science and Technology)
    • Accelerated and Sparse Algorithms for Approximate Personalized PageRank and Beyond
      David Martínez-Rubio (ZIB)*; Elias Wirth (TU Berlin); Sebastian Pokutta (ZIB)
    • Toward $L_\infty$ Recovery of Nonlinear Functions: A Polynomial Sample Complexity Bound for Gaussian Random Fields
      Kefan Dong (Stanford University)*; Tengyu Ma (Stanford)
    • Self-Directed Linear Classification
      Ilias Diakonikolas (University of Wisconsin-Madison); Vasilis Kontonis (University of Wisconsin-Madison)*; Christos Tzamos (UW-Madison); Nikos Zarifis (University of Wisconsin-Madison)
    • On the Lower Bound of Minimizing Polyak-Łojasiewicz functions
      Pengyun Yue (Peking University)*; Cong Fang (Peking University); Zhouchen Lin (Peking University)
    • Curvature and complexity: Better lower bounds for geodesically convex optimization
      Christopher Criscitiello (EPFL)*; Nicolas Boumal (EPFL)
    • A Nearly Tight Bound for Fitting an Ellipsoid to Gaussian Random Points
      Daniel Kane (UCSD)*; Ilias Diakonikolas (University of Wisconsin-Madison)
    • Hardness of Agnostically Learning Halfspaces from Worst-Case Lattice Problems
      Stefan Tiegel (ETH Zürich)*
    • Testing of Index-Invariant Properties in the Huge Object Model
      Sourav Chakraborty (Indian Statistical Institute); Eldar Fischer (Technion - Israel Institute of Technology); Arijit Ghosh (Indian Statistical Institute); Gopinath Mishra (University of Warwick, UK); Sayantan Sen (Indian Statistical Institute)*
    • Algorithmic Gaussianization through Sketching: Converting Data into Sub-gaussian Random Designs
      Michal Derezinski (University of Michigan)*
    • Benign Overfitting in Linear Classifiers and Leaky ReLU Networks from KKT Conditions for Margin Maximization
      Spencer Frei (UC Berkeley); Gal Vardi (TTI-Chicago)*; Peter Bartlett (UC Berkeley); Nathan Srebro (Toyota Technological Institute at Chicago)
    • Simple Binary Hypothesis Testing under Local Differential Privacy and Communication Constraints
      Ankit Pensia (University of Wisconsin-Madison)*; Amir Reza Asadi (University of Cambridge); Varun Jog (University of Cambridge); Po-Ling Loh (University of Cambridge)
    • Geometric Barriers for Stable and Online Algorithms for Discrepancy Minimization
      David Gamarnik (MIT); Eren C Kizildag (Columbia University)*; Will Perkins (Georgia Tech); Changji Xu (Harvard)
    • Intrinsic dimensionality and generalization properties of the R-norm inductive bias
      Navid Ardeshir (Columbia University); Daniel J Hsu (Columbia University); Clayton H Sanford (Columbia)*
    • Improved Dynamic Regret for Online Frank-Wolfe
      Yuanyu Wan (Zhejiang University)*; Lijun Zhang (Nanjing University); Mingli Song (Zhejiang University)
    • Online Nonconvex Optimization with Limited Instantaneous Oracle Feedback
      Ziwei Guan (The Ohio State University)*; Yi Zhou (University of Utah); Yingbin Liang (The Ohio State University)
    • Tackling Distribution Shift with Bilinear Combinatorial Extrapolation
      Max Simchowitz (MIT)*; Abhishek Gupta (University of Washington); Kaiqing Zhang (University of Maryland, College Park)
    • The $k$-Cap Process on Geometric Random Graphs
      Mirabel E Reid (Georgia Institute of Technology)*; Santosh S Vempala (Georgia Tech)
    • Efficient Algorithms for Sparse Moment Problems without Separation
      Zhiyuan Fan (Tsinghua University); Jian Li (Tsinghua University, China)*
    • From Pseudorandomness to Multi-Group Fairness and Back
      Cynthia Dwork (Harvard)*; Daniel Lee (University of Pennsylvania); Huijia Lin (University of Washington); Pranay Tankala (Harvard University)
    • A new ranking scheme for modern data and its application to two-sample hypothesis testing
      Doudou Zhou (Harvard University); Hao Chen (University of California, Davis)*
    • Linearization Algorithms for Fully Composite Optimization
      Maria-Luiza Vladarean (EPFL)*; Nikita Doikov (EPFL); Martin Jaggi (EPFL); Nicolas Flammarion (EPFL)
    • Near Optimal Heteroscedastic Regression with Symbiotic Learning
      Aniket Das (Google Research)*; Dheeraj M Nagaraj (Google Research); Praneeth Netrapalli (Google Research); Dheeraj Baby (University of California Santa Barbara)
    • Approximately Stationary Bandits with Knapsacks
      Giannis Fikioris (Cornell University)*; Eva Tardos (Cornell University)
    • Lower Bounds for the Convergence of Tensor Power Iteration on Random Overcomplete Models
      Yuchen Wu (Stanford University); Kangjie Zhou (Stanford University)*
    • Causal Matrix Completion
      Anish Agarwal (Amazon)*; Munther Dahleh (MIT); Devavrat Shah (MIT); Dennis Shen (UC Berkeley)
    • A High-dimensional Convergence Theorem for U-statistics with Applications to Kernel-based Testing
      Kevin H Huang (UCL)*; Xing Liu (Imperial College London); Andrew Duncan (Imperial College London); Axel Gandy (Imperial College London)
    • Asymptotic confidence sets for random linear programs
      Shuyu Liu (New York University); Florentina Bunea (Cornell University); Jonathan Niles-Weed (NYU)*
    • Algorithmically Effective Differentially Private Synthetic Data
      Yiyun He (University of California, Irvine); Roman Vershynin (UCI); Yizhe Zhu (UC Irvine)*
    • Tight Guarantees for Interactive Decision Making with the Decision-Estimation Coefficient
      Dylan J Foster (Microsoft Research); Noah Golowich (Massachusetts Institute of Technology)*; Yanjun Han (MIT)
    • Reaching Kesten-Stigum Threshold in the Stochastic Block Model under Node Corruptions
      Yiding Hua (ETH Zürich)*; Jingqiu Ding (ETH Zurich); Tommaso d'Orsi (ETH Zurich); David Steurer (ETH Zurich)
    • Utilising the CLT Structure in Stochastic Gradient based Sampling: Improved Analysis and Faster Algorithms
      Aniket Das (Google Research)*; Dheeraj M Nagaraj (Google Research); Anant Raj (INRIA)
    • The Expressive Power of Tuning Only the Normalization Layers
      Angeliki Giannou (UW Madison)*; Shashank Rajput (University of Wisconsin - Madison); Dimitris Papailiopoulos (University of Wisconsin - Madison)
    • Precise Asymptotic Analysis of Deep Random Feature Models
      David Bosch (Chalmers University of Technology)*; Ashkan Panahi (Chalmers University of Technology); Babak Hassibi (Caltech)
    • The Complexity of Markov Equilibrium in Stochastic Games
      Constantinos Daskalakis (MIT); Noah Golowich (Massachusetts Institute of Technology)*; Kaiqing Zhang (University of Maryland, College Park)
    • Near-optimal fitting of ellipsoids to random points
      Aaron Potechin (University of Chicago); Paxton M Turner (Harvard University); Prayaag Venkat (Harvard University)*; Alexander S Wein (UC Davis)
    • Entropic characterization of optimal rates for learning Gaussian mixtures
      Zeyu Jia (MIT)*; Yury Polyanskiy (MIT); Yihong Wu (Yale University)
    • Minimizing Dynamic Regret on Geodesic Metric Spaces
      Zihao Hu (Georgia Institute of Technology)*; Guanghui Wang (Nanjing University); Jacob D Abernethy (Georgia Institute of Technology)
    • Sharp analysis of EM for learning mixtures of pairwise differences
      Abhishek Dhawan (Georgia Institute of Technology); Cheng Mao (Georgia Institute of Technology)*; Ashwin Pananjady (Georgia Institute of Technology)
    • Zeroth-order Optimization with Weak Dimension Dependency
      Pengyun Yue (Peking University)*; Long Yang (Peking University); Cong Fang (Peking University); Zhouchen Lin (Peking University)
    • Quasi-Newton Steps for Efficient Online Exp-Concave Optimization
      Zakaria Mhammedi (MIT)*; Khashayar Gatmiry (Massachusetts Institute of Technology)
    • Condition-number-independent Convergence Rate of Riemannian Hamiltonian Monte Carlo with Numerical Integrators
      Yunbum Kook (Georgia Tech); Yin Tat Lee (UW); Ruoqi Shen (University of Washington)*; Santosh Vempala (Georgia Tech)
    • Deterministic Nonsmooth Nonconvex Optimization
      Guy Kornowski (Weizmann Institute of Science)*; Michael Jordan (UC Berkeley); Tianyi Lin (UC Berkeley); Ohad Shamir (Weizmann Institute of Science); Manolis Zampetakis (MIT)
    • How Can Deep Learning Perform Deep (Hierarchical) Learning Beyond Layerwise Training
      Zeyuan Allen-Zhu (Meta FAIR Labs)*; Yuanzhi Li (CMU)
    • Differentially Private and Lazy Online Convex Optimization
      Naman Agarwal (Google); Satyen Kale (Google)*; Karan Singh (Carnegie Mellon University); Abhradeep Thakurta (Google)
    • Contextual Bandits with Packing and Covering Constraints: A Modular Lagrangian Approach via Regression
      Dylan J Foster (Microsoft Research); Karthik Abinav Sankararaman (Facebook)*; Alex Slivkins (Microsoft Research)
    • Law of Large Numbers for Bayesian two-layer Neural Network trained with Variational Inference
      Arnaud Descours (Université Clermont Auvergne)*; Tom Huix (Polytechnique); Arnaud Guillin (Université Clermont-Auvergne); Manon Michel (Université Clermont Ferrand); Eric Moulines (Ecole Polytechnique); Boris Nectoux (Université Clermont-Auvergne)
    • Quadratic Memory is Necessary for Optimal Query Complexity in Convex Optimization: Center-of-Mass is Pareto-Optimal
      Moise Blanchard (Massachusetts Institute of Technology)*; Junhui Zhang (MIT); Patrick Jaillet (MIT)
    • Sparse PCA Beyond Covariance Thresholding
      Gleb Novikov (ETH Zurich)*
    • Finite-Sample Symmetric Mean Estimation with Fisher Information Rate
      Shivam Gupta (University of Texas at Austin)*; Jasper C.H. Lee (University of Wisconsin-Madison); Eric Price (University of Texas at Austin)
    • Fast Algorithms for a New Relaxation of Optimal Transport
      Moses Charikar (Stanford University, California); Beidi Chen (Carnegie Mellon University); Christopher Re (Stanford University); Erik Waingarten (University of Pennsylvania)*
    • Generalization Guarantees via Algorithm-dependent Rademacher Complexity
      Sarah Sachs (University of Amsterdam)*; Tim van Erven (University of Amsterdam); Liam Hodgkinson (University of California, Berkeley); Rajiv Khanna (Purdue University); Umut Simsekli (Inria/ENS)
    • Interpolation Learning With Minimum Description Length
      Naren Sarayu Manoj (TTIC); Nathan Srebro (Toyota Technological Institute at Chicago)*
    • $\ell_p$-Regression in the Arbitrary Partition Model of Communication
      Yi Li (Nanyang Technological University); Honghao Lin (Carnegie Mellon University)*; David Woodruff (Carnegie Mellon University)
    • On Classification-Calibration of Gamma-Phi Losses
      Yutong Wang (University of Michigan)*; Clayton Scott (University of Michigan)
    • Asymptotically Optimal Generalization Error Bounds for Noisy, Iterative Algorithms
      Amedeo Roberto Esposito (ISTA); Michael Gastpar (EPFL); Ibrahim Issa (American University of Beirut)*
    • Variance-Dependent Regret Bounds for Linear Bandits and Reinforcement Learning: Adaptivity and Computational Efficiency
      Heyang Zhao (UCLA)*; Jiafan He (UCLA); Dongruo Zhou (UCLA); Tong Zhang (The Hong Kong University of Science and Technology); Quanquan Gu (University of California, Los Angeles)
    • PAC Verification of Statistical Algorithms
      Saachi Mutreja (UC Berkeley); Jonathan Shafer (UC Berkeley)*
    • Active Coverage for PAC Reinforcement Learning
      Aymen Al Marjani (ENS Lyon)*; Andrea Tirinzoni (Meta AI); Emilie Kaufmann (CNRS)
    • Ticketed Learning–Unlearning Schemes
      Badih Ghazi (Google); Pritish Kamath (Google)*; Ravi Kumar (Google); Pasin Manurangsi (Google); Ayush Sekhari (Cornell University); Chiyuan Zhang (Google)
    • Implicit Balancing and Regularization: Generalization and Convergence Guarantees for Overparameterized Asymmetric Matrix Sensing
      Mahdi Soltanolkotabi (USC); Dominik Stöger (KU Eichstätt-Ingolstadt)*; Changzhi Xie (University of southern California)
    • U-Calibration: Forecasting for an Unknown Agent
      Bobby Kleinberg (Cornell University and Google Research); Renato Paes Leme (Google); Jon Schneider (Google)*; Yifeng Teng (University of Wisconsin-Madison)
    • STay-ON-the-Ridge: Guaranteed Convergence to Local Minimax Equilibrium in Nonconvex-Nonconcave Games
      Constantinos Daskalakis (MIT); Noah Golowich (Massachusetts Institute of Technology); Stratis Skoulakis (EPFL)*; Emmanouil Zampetakis (UC Berkeley)
    • Empirical Bayes via ERM and Rademacher complexities: the Poisson model
      Soham Jana (Princeton University); Yury Polyanskiy (MIT); Anzo Z Teh (Massachusetts Institute of Technology)*; Yihong Wu (Yale University)
    • Kernelized Diffusion Maps
      Loucas Pillaud-Vivien (Flatiron Institute)*; Francis Bach (INRIA - Ecole Normale Supérieure)
    • Statistical and Computational Limits for Tensor-on-Tensor Association Detection
      Ilias Diakonikolas (University of Wisconsin-Madison); Daniel M. Kane (UCSD); Yuetian Luo (University of Chicago)*; Anru Zhang (Duke University)
    • Sparsity aware generalization theory for deep neural networks
      Ramchandran Muthukumar (Johns Hopkins University); Jeremias Sulam (Johns Hopkins University)*
    • Is Planted Coloring Easier than Planted Clique?
      Pravesh Kothari (Princeton University); Santosh S Vempala (Georgia Tech); Alexander S Wein (UC Davis)*; Jeff Xu (CMU)
    • Moments, Random Walks, and Limits for Spectrum Approximation
      Yujia Jin (Stanford University); Christopher Musco (New York University); Aaron Sidford (Stanford); Apoorv Vikram Singh (NYU)*
    • Minimax optimal testing by classification
      Patrik R Gerber (MIT)*; Yanjun Han (MIT); Yury Polyanskiy (MIT)
    • Improper Multiclass Boosting
      Nataly Brukhim (Princeton University)*; Steve Hanneke (Purdue University); Shay Moran (Technion)
    • Distribution-Independent Regression for Generalized Linear Models with Oblivious Corruptions
      Ilias Diakonikolas (University of Wisconsin-Madison); Sushrut Karmalkar (University of Wisconsin-Madison)*; Jong Ho Park (Krafton Inc.); Christos Tzamos (UW-Madison)
    • Sharper Model-free Reinforcement Learning for Average-reward Markov Decision Processes
      Zihan Zhang (Tsinghua University); Qiaomin Xie (University of Wisconsin-Madison)*
    • The Aggregation–Heterogeneity Trade-off in Federated Learning
      Xuyang Zhao (Peking University); Huiyuan Wang (Peking University); Wei Lin (Peking University)*
    • A Blackbox Approach to Best of Both Worlds in Bandits and Beyond
      Chris Dann (Google); Chen-Yu Wei (University of Southern California)*; Julian Zimmert (Google)
    • The Computational Complexity of Finding Stationary Points in Non-Convex Optimization
      Alexandros Hollender (EPFL)*; Emmanouil Zampetakis (UC Berkeley)
    • Sharp thresholds in inference of planted subgraphs
      Elchanan Mossel (MIT); Jonathan Niles-Weed (NYU); Youngtak Sohn (MIT)*; Nike Sun (Massachusetts Institute of Technology); Ilias Zadik (MIT)
    • Fast, Sample-Efficient, Affine-Invariant Private Mean and Covariance Estimation for Subgaussian Distributions
      Gavin Brown (Boston University)*; Samuel Hopkins (MIT); Adam Smith (Boston University)
    • Learning Narrow One-Hidden-Layer ReLU Networks
      Sitan Chen (UC Berkeley)*; Zehao Dou (Yale University); Surbhi Goel (University of Texas at Austin); Adam Klivans (University of Texas at Austin); Raghu Meka (UCLA)
    • Universal Rates for Multiclass Learning
      Steve Hanneke (Purdue University); Shay Moran (Technion); Qian Zhang (Purdue University)*
    • Multiclass Online Learning and Uniform Convergence
      Steve Hanneke (Purdue University)*; Shay Moran (Technion)
    • Local Risk Bounds for Entropy-Regularized Aggregation
      Jaouad Mourtada (CREST, ENSAE)*; Nikita Zhivotovskiy (UC Berkeley); Tomas Vaskevicius (EPFL)
    • The Implicit Bias of Batch Normalization in Linear Models and Two-layer Linear Convolutional Neural Networks
      Yuan Cao (The University of Hong Kong)*; Difan Zou (The University of Hong Kong); Yuanzhi Li (CMU); Quanquan Gu (University of California, Los Angeles)
    • On a Class of Gibbs Sampling over Networks
      Bo Yuan (Georgia Institute of Technology)*; Jiaojiao Fan (Georgia Institute of Technology); Jiaming Liang (Yale University); Andre Wibisono (Yale University); Yongxin Chen (Georgia Institute of Technology)
    • Limits of Model Selection under Transfer Learning
      Steve Hanneke (Purdue University); Samory Kpotufe (Columbia University); Yasaman Mahdaviyeh (Columbia University)*
    • Bandit Learnability can be Undecidable
      Steve Hanneke (Purdue University)*; Liu Yang (Independent)
    • Detection-Recovery and Detection-Refutation Gaps via Reductions from Planted Clique
      Guy Bresler (MIT); Tianze Jiang (Massachusetts Institute of Technology)*
    • Fine-Grained Distribution-Dependent Learning Curves
      Olivier Bousquet (Google); Steve Hanneke (Purdue University); Shay Moran (Technion); Jonathan Shafer (UC Berkeley)*; Ilya Tolstikhin (Google)
    • Efficient median of means estimator
      Stanislav Minsker (U. of Southern California)*