Accepted Papers

Proceedings

    • Analysis of Langevin Monte Carlo from Poincaré to Log-Sobolev
      Sinho Chewi; Murat Erdogdu; Mufan Li; Ruoqi Shen; Shunshi Zhang
    • Optimization-Based Separations for Neural Networks
      Itay Safran; Jason Lee
    • Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance
      Nuri Mert Vural; Lu Yu; Krishna Balasubramanian; Stanislav Volgushev; Murat Erdogdu
    • Wasserstein GANs with Gradient Penalty Compute Congested Transport
      Tristan Milne; Adrian Nachman
    • Robust Estimation for Random Graphs
      Jayadev Acharya; Ayush Jain; Gautam Kamath; Ananda Theertha Suresh; Huanyu Zhang
    • Tight Query Complexity Bounds for Learning Graph Partitions
      Xizhi Liu; Sayan Mukherjee
    • Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States
      Julian Zimmert; Naman Agarwal; Satyen Kale
    • Risk Bounds for Aggregated Shallow Neural Networks using Gaussian Priors
      Laura Tinsi; Arnak Dalalyan
    • On the Benefits of Large Learning Rates for Kernel Methods
      Gaspard Beugnot; Julien Mairal; Alessandro Rudi
    • Near-Optimal Statistical Query Lower Bounds for Agnostically Learning Intersections of Halfspaces with Gaussian Marginals
      Daniel Hsu; Clayton Sanford; Rocco Servedio; Emmanouil Vlatakis-Gkaragkounis
    • The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance
      Matthew Faw; Isidoros Tziotis; Constantine Caramanis; Aryan Mokhtari; Sanjay Shakkottai; Rachel Ward
    • Optimal Mean Estimation without a Variance
      Yeshwanth Cherapanamjeri; Nilesh Tripuraneni; Peter Bartlett; Michael Jordan
    • Beyond No Regret: Instance-Dependent PAC Reinforcement Learning
      Andrew Wagenmaker; Max Simchowitz; Kevin Jamieson
    • Learning Low Degree Hypergraphs
      Eric Balkanski; Oussama Hanguir; Shatian Wang
    • Depth and Feature Learning Are Provably Beneficial for Neural Network Discriminators
      Carles Domingo-Enrich
    • The Implicit Bias of Benign Overfitting
      Ohad Shamir
    • Universal Online Learning with Bounded Loss: Reduction to Binary Classification
      Moïse Blanchard; Romain Cosson
    • Negative Curvature Obstructs Acceleration for Strongly Geodesically Convex Optimization, Even with Exact First-Order Oracles
      Christopher Criscitiello; Nicolas Boumal
    • Multi-Agent Learning for Iterative Dominance Elimination: Formal Barriers and New Algorithms
      Jibang Wu; Haifeng Xu; Fan Yao
    • A Private and Computationally-Efficient Estimator for Unbounded Gaussians
      Gautam Kamath; Argyris Mouzakis; Vikrant Singhal; Thomas Steinke; Jonathan Ullman
    • The Price of Tolerance in Distribution Testing
      Clément Canonne; Ayush Jain; Gautam Kamath; Jerry Li
    • A Bounded-Noise Mechanism for Differential Privacy
      Yuval Dagan; Gil Kur
    • Learning with Metric Losses
      Dan Tsir Cohen; Aryeh Kontorovich
    • Rate of Convergence of Polynomial Networks to Gaussian Processes
      Adam Klukowski
    • Private Robust Estimation by Stabilizing Convex Relaxations
      Pravesh Kothari; Pasin Manurangsi; Ameya Velingker
    • Stochastic Variance Reduction for Variational Inequality Methods
      Ahmet Alacaoglu; Yura Malitsky
    • Self-Consistency of the Fokker Planck Equation
      Zebang Shen; Zhenfu Wang; Satyen Kale; Alejandro Ribeiro; Amin Karbasi; Hamed Hassani
    • Monotone Learning
      Olivier Bousquet; Amit Daniely; Haim Kaplan; Yishay Mansour; Shay Moran; Uri Stemmer
    • Chasing Convex Bodies and Functions with Black-Box Advice
      Nicolas Christianson; Tinashe Handina; Adam Wierman
    • ROOT-SGD: Sharp Nonasymptotics and Asymptotic Efficiency in a Single Algorithm
      Chris Li; Wenlong Mou; Martin Wainwright; Michael Jordan
    • Policy Optimization for Stochastic Shortest Path
      Liyu Chen; Haipeng Luo; Aviv Rosenberg
    • Optimal SQ Lower Bounds for Learning Halfspaces with Massart Noise
      Stefan Tiegel; Rajai Nasser
    • Private and Polynomial Time Algorithms for Learning Gaussians and Beyond
      Hassan Ashtiani; Christopher Liaw
    • Universal Online Learning: An Optimistically Universal Learning Rule
      Moïse Blanchard
    • (Nearly) Optimal Private Linear Regression for Sub-Gaussian Data via Adaptive Clipping
      Prateek Varshney; Abhradeep Thakurta; Prateek Jain
    • Differential Privacy and Robust Statistics in High Dimensions
      Xiyang Liu; Weihao Kong; Sewoong Oh
    • Lattice-Based Methods Surpass Sum-of-Squares in Clustering
      Ilias Zadik; Min Jae Song; Alexander Wein; Joan Bruna
    • Width Is Less Important than Depth in ReLU Neural Networks
      Gal Vardi; Gilad Yehudai; Ohad Shamir
    • Computational-Statistical Gap in Reinforcement Learning
      Daniel Kane; Sihan Liu; Shachar Lovett; Gaurav Mahajan
    • Trace Norm Regularization for Multi-Task Learning with Scarce Data
      Etienne Boursier; Mikhail Konobeev; Nicolas Flammarion
    • The Role of Interactivity in Structured Estimation
      Jayadev Acharya; Clément Canonne; Ziteng Sun; Himanshu Tyagi
    • Dimension-Free Convergence Rates for Gradient Langevin Dynamics in RKHS
      Boris Muzellec; Kanji Sato; Mathurin Massias; Taiji Suzuki
    • Adversarially Robust Multi-Armed Bandit Algorithm with Variance-Dependent Regret Bounds
      Shinji Ito; Taira Tsuchiya; Junya Honda
    • A Sharp Memory-Regret Trade-off for Multi-Pass Streaming Bandits
      Arpit Agarwal; Sanjeev Khanna; Prathamesh Patil
    • Approximate Cluster Recovery from Noisy Labels
      Buddhima Gamlath; Silvio Lattanzi; Ashkan Norouzi-Fard; Ola Svensson
    • An Efficient Minimax Optimal Estimator for Multivariate Convex Regression
      Gil Kur; Eli Putterman
    • Minimax Regret for Partial Monitoring: Infinite Outcomes and Rustichini’s Regret
      Tor Lattimore
    • Adaptive Bandit Convex Optimization with Heterogeneous Curvature
      Haipeng Luo; Mengxiao Zhang; Peng Zhao
    • Statistical Estimation and Online Inference via Local SGD
      Xiang Li; Jiadong Liang; Xiangyu Chang; Zhihua Zhang
    • Community Recovery in the Degree-Heterogeneous Stochastic Block Model
      Vincent Cohen-Addad; Frederik Mallmann-Trenn; David Saulpic
    • Strong Gaussian Approximation for the Sum of Random Vectors
      Nazar Buzun; Nikolay Shvetsov; Dmitry V. Dylov
    • Smoothed Online Learning Is as Easy as Statistical Learning
      Adam Block; Yuval Dagan; Noah Golowich; Alexander Rakhlin
    • Gardner Formula for Ising Perceptron Models at Small Densities
      Erwin Bolthausen; Shuta Nakajima; Nike Sun; Changji Xu
    • Derivatives and Residual Distribution of Regularized M-Estimators with Application to Adaptive Tuning
      Pierre Bellec; Yiwei Shen
    • Private Convex Optimization via Exponential Mechanism
      Sivakanth Gopi; Yin Tat Lee; Daogao Liu
    • Towards Optimal Algorithms for Multi-Player Bandits without Collision Sensing Information
      Wei Huang; Richard Combes; Cindy Trinh
    • Generalization Bounds for Data-Driven Numerical Linear Algebra
      Peter Bartlett; Piotr Indyk; Tal Wagner
    • The Query Complexity of Sampling from Strongly Log-Concave Distributions in One Dimension
      Sinho Chewi; Patrik Gerber; Chen Lu; Thibaut Le Gouic; Philippe Rigollet
    • Optimal and Instance-Dependent Guarantees for Markovian Linear Stochastic Approximation
      Wenlong Mou; Ashwin Pananjady; Martin Wainwright; Peter Bartlett
    • Accelerated SGD for Non-Strongly-Convex Least Squares
      Aditya Varre; Nicolas Flammarion
    • Label Noise (Stochastic) Gradient Descent Implicitly Solves the Lasso for Quadratic Parametrization
      Loucas Vivien; Julien Reygner; Nicolas Flammarion
    • Tracking Most Significant Arm Switches in Bandits
      Joe Suk; Samory Kpotufe
    • Exact Community Recovery in Correlated Stochastic Block Models
      Julia Gaudio; Miklós Rácz; Anirudh Sridhar
    • Mean-Field Nonparametric Estimation of Interacting Particle Systems
      Rentian Yao; Xiaohui Chen; Yun Yang
    • Inductive Bias of Multi-Channel Linear Convolutional Networks with Bounded Weight Norm
      Meena Jagadeesan; Ilya Razenshteyn; Suriya Gunasekar
    • New Projection-Free Algorithms for Online Convex Optimization with Adaptive Regret Guarantees
      Ben Kretzu; Dan Garber
    • Making SGD Parameter-Free
      Yair Carmon; Oliver Hinder
    • Efficient Convex Optimization Requires Superlinear Memory
      Annie Marsden; Vatsal Sharan; Aaron Sidford; Gregory Valiant
    • Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales
      Jonathan Kelner; Annie Marsden; Vatsal Sharan; Aaron Sidford; Gregory Valiant; Honglin Yuan
    • Toward Instance-Optimal State Certification with Incoherent Measurements
      Sitan Chen; Jerry Li; Ryan O'Donnell
    • EM's Convergence in Gaussian Latent Tree Models
      Yuval Dagan; Vardis Kandiros; Constantinos Daskalakis
    • Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data
      Spencer Frei; Niladri Chatterji; Peter Bartlett
    • Minimax Regret Optimization for Robust Machine Learning under Distribution Shift
      Alekh Agarwal; Tong Zhang
    • Offline Reinforcement Learning with Realizability and Single-Policy Concentrability
      Wenhao Zhan; Baihe Huang; Audrey Huang; Nan Jiang; Jason Lee
    • Non-Linear Reinforcement Learning in Large Action Spaces: Structural Conditions and Sample-Efficiency of Posterior Sampling
      Alekh Agarwal; Tong Zhang
    • Learning GMMs with Nearly Optimal Robustness Guarantees
      Allen Liu; Ankur Moitra
    • Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo
      Krishna Balasubramanian; Sinho Chewi; Murat Erdogdu; Adil Salim; Shunshi Zhang
    • Understanding Riemannian Acceleration via a Proximal Extragradient Framework
      Jikai Jin; Suvrit Sra
    • On Almost Sure Convergence Rates of Stochastic Gradient Methods
      Jun Liu; Ye Yuan
    • Improved Analysis for a Proximal Algorithm for Sampling
      Yongxin Chen; Sinho Chewi; Adil Salim; Andre Wibisono
    • Realizable Learning Is All You Need
      Max Hopkins; Daniel Kane; Shachar Lovett; Gaurav Mahajan
    • Streaming Algorithms for Ellipsoidal Approximation of Convex Polytopes
      Yury Makarychev; Naren Manoj; Max Ovsiankin
    • The Pareto Frontier of Instance-Dependent Guarantees in Multi-Player Multi-Armed Bandits with No Communication
      Mark Sellke; Allen Liu
    • Minimax Regret on Patterns Using Kullback-Leibler Divergence Covering
      Jennifer Tang
    • Sharp Constants in Uniformity Testing via the Huber Statistic
      Shivam Gupta; Eric Price
    • Low-Degree Multicalibration
      Parikshit Gopalan; Michael Kim; Mihir Singhal; Shengjia Zhao
    • Thompson Sampling Achieves $\tilde{\mathcal{O}}(\sqrt{T})$ Regret in Linear Quadratic Control
      Taylan Kargin; Sahin Lale; Kamyar Azizzadenesheli; Animashree Anandkumar; Babak Hassibi
    • Return of the Bias: Minimax Optimal High Probability Bounds for Adversarial Linear Bandits
      Julian Zimmert; Tor Lattimore
    • Uniform Stability for First-Order Empirical Risk Minimization
      Amit Attia; Tomer Koren
    • Single Trajectory Nonparametric Learning of Nonlinear Dynamics
      Ingvar Ziemann; Henrik Sandberg; Nikolai Matni
    • On Characterizations of Learnability with Computable Learners
      Tom Sterkenburg
    • Stability vs. Implicit Bias of Gradient Methods on Separable Data and Beyond
      Matan Schliserman; Tomer Koren
    • Near Optimal Efficient Decoding from Pooled Data
      Max Hahn-Klimroth; Noela Müller
    • Kernel Interpolation in Sobolev Spaces Is Not Consistent in Low Dimensions
      Simon Buchholz
    • Random Graph Matching in Geometric Models: The Case of Complete Graphs
      Haoyu Wang; Yihong Wu; Jiaming Xu; Israel Yolou
    • Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation
      Dylan Foster; Akshay Krishnamurthy; David Simchi-Levi; Yunzong Xu
    • Improved Parallel Algorithm for Minimum Cost Submodular Cover Problem
      Yingli Ran; Zhao Zhang; Shaojie Tang
    • The Dynamics of Riemannian Robbins-Monro Algorithms
      Mohammad Karimi; Ya-Ping Hsieh; Panayotis Mertikopoulos; Andreas Krause
    • Corruption-Robust Contextual Search through Density Updates
      Renato Leme; Chara Podimata; Jon Schneider
    • On the Memory Complexity of Uniformity Testing
      Tomer Berg; Or Ordentlich; Ofer Shayevitz
    • Generalization Bounds via Convex Analysis
      Gábor Lugosi; Gergely Neu
    • Private Matrix Approximation and Geometry of Unitary Orbits
      Oren Mangoubi; Yikai Wu; Satyen Kale; Abhradeep Thakurta; Nisheeth Vishnoi
    • Efficient Online Linear Control with Stochastic Convex Costs and Unknown Dynamics
      Asaf Cassel; Alon Cohen; Tomer Koren
    • Two-Sided Weak Submodularity for Matroid Constrained Optimization and Regression
      Justin Ward; Théophile Thiery
    • Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits
      Haipeng Luo; Mengxiao Zhang; Peng Zhao; Zhi-Hua Zhou
    • Assemblies of Neurons Learn to Classify Well-Separated Distributions
      Max Dabagia; Santosh Vempala; Christos Papadimitriou
    • The Structured Abstain Problem and the Lovász Hinge
      Enrique Nueve; Rafael Frongillo; Jessica Finocchiaro
    • Fast Algorithm for Overcomplete Order-3 Tensor Decomposition
      Jingqiu Ding; Tommaso d'Orsi; Chih-Hung Liu; David Steurer; Stefan Tiegel
    • Hardness of Maximum Likelihood Learning of DPPs
      Elena Grigorescu; Brendan Juba; Karl Wimmer; Ning Xie
    • Learning to Control Linear Systems Can Be Hard
      Anastasios Tsiamis; Ingvar Ziemann; Manfred Morari; Nikolai Matni; George Pappas
    • Polynomial-Time Reinforcement Learning without Horizon
      Zihan Zhang; Xiangyang Ji; Simon Du
    • On the Well-Spread Property and Its Relation to Linear Regression
      Hongjie Chen; Tommaso d'Orsi
    • Optimal SQ Lower Bounds for Robustly Learning Discrete Product Distributions and Ising Models
      Ilias Diakonikolas; Daniel Kane; Yuxin Sun
    • Private High-Dimensional Hypothesis Testing
      Shyam Narayanan
    • How Catastrophic Can Catastrophic Forgetting Be in Linear Regression?
      Itay Evron; Edward Moroshko; Rachel Ward; Nathan Srebro; Daniel Soudry
    • Efficient Decentralized Multi-Agent Learning in Asymmetric Queuing Systems
      Daniel Freund; Thodoris Lykouris; Wentao Weng
    • Online Learning to Transport via the Minimal Selection Principle
      Wenxuan Guo; YoonHaeng Hur; Tengyuan Liang; Chris Ryan
    • On the Role of Channel Capacity in Learning Gaussian Mixture Models
      Elad Romanov; Or Ordentlich; Tamir Bendory
    • Parameter-Free Mirror Descent
      Andrew Jacobsen; Ashok Cutkosky
    • Chained Generalisation Bounds
      Eugenio Clerico; Amitis Shidani; George Deligiannidis; Arnaud Doucet
    • Near-Optimal Statistical Query Hardness of Learning Halfspaces with Massart Noise
      Ilias Diakonikolas; Daniel Kane
    • Faster Online Calibration without Randomization: Interval Forecasts and the Power of Two Choices
      Chirag Gupta; Aaditya Ramdas
    • Universality of Empirical Risk Minimization
      Basil Saeed; Andrea Montanari
    • Learning a Single Neuron with Adversarial Label Noise via Gradient Descent
      Ilias Diakonikolas; Vasilis Kontonis; Christos Tzamos; Nikos Zarifis
    • Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods
      Yujia Jin; Aaron Sidford; Kevin Tian
    • Rate-Distortion Theoretic Generalization Bounds for Stochastic Learning Algorithms
      Milad Sefidgaran; Amin Gohari; Gaël Richard; Umut Simsekli
    • Scale-Free Unconstrained Online Learning for Curved Losses
      Jack Mayo; Hedi Hadiji; Tim van Erven
    • Robustly-Reliable Learners under Poisoning Attacks
      Maria-Florina Balcan; Avrim Blum; Steve Hanneke; Dravyansh Sharma
    • Non-Gaussian Component Analysis via Lattice Basis Reduction
      Ilias Diakonikolas; Daniel Kane
    • Can Q-learning Be Improved with Advice?
      Noah Golowich; Ankur Moitra
    • Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares
      Blake Woodworth; Francis Bach; Alessandro Rudi
    • Hierarchical Clustering in Graph Streams: Single-Pass Algorithms and Space Lower Bounds
      Sepehr Assadi; Vaggos Chatziafratis; Jakub Łącki; Vahab Mirrokni; Chen Wang
    • Robust Sparse Mean Estimation via Sum of Squares
      Ilias Diakonikolas; Daniel Kane; Sushrut Karmalkar; Ankit Pensia; Thanasis Pittas
    • Statistical and Computational Phase Transitions in Group Testing
      Amin Coja-Oghlan; Oliver Gebhard; Max Hahn-Klimroth; Alexander Wein; Ilias Zadik
    • The Merged-Staircase Property: A Necessary and Nearly Sufficient Condition for SGD Learning of Sparse Functions on Two-Layer Neural Networks
      Emmanuel Abbe; Enric Boix-Adserà; Theodor Misiakiewicz
    • Eigenspace Restructuring: A Principle of Space and Frequency in Neural Networks
      Lechao Xiao
    • Sampling Approximately Low-Rank Ising Models: MCMC Meets Variational Methods
      Frederic Koehler; Holden Lee; Andrej Risteski
    • Strong Memory Lower Bounds for Learning Natural Models
      Gavin Brown; Mark Bun; Adam Smith
    • On the Power of Adaptivity in Statistical Adversaries
      Guy Blanc; Jane Lange; Ali Malik; Li-Yang Tan
    • Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information
      Yonathan Efroni; Dylan Foster; Dipendra Misra; Akshay Krishnamurthy; John Langford
    • The Query Complexity of Local Search and Brouwer in Rounds
      Simina Branzei; Jiawei Li
    • Complete Policy Regret Bounds for Tallying Bandits
      Dhruv Malik; Yuanzhi Li; Aarti Singh
    • When Is Partially Observable Reinforcement Learning Not Scary?
      Qinghua Liu; Alan Chung; Csaba Szepesvári; Chi Jin
    • Strategizing Against Learners in Bayesian Games
      Yishay Mansour; Mehryar Mohri; Jon Schneider; Balasubramanian Sivan
    • Orthogonal Statistical Learning with Self-Concordant Loss
      Lang Liu; Carlos Cinelli; Zaid Harchaoui
    • Clustering with Queries under Semi-Random Noise
      Alberto Del Pia; Mingchen Ma; Christos Tzamos
    • Efficient Projection-Free Online Convex Optimization with Membership Oracle
      Zakaria Mhammedi
    • Better Private Algorithms for Correlation Clustering
      Daogao Liu
    • Neural Networks Can Learn Representations with Gradient Descent
      Alexandru Damian; Jason Lee; Mahdi Soltanolkotabi
    • Stochastic Linear Optimization Never Overfits with Quadratically-Bounded Losses on General Data
      Matus Telgarsky
    • Multilevel Optimization for Inverse Problems
      Simon Weissmann; Ashia Wilson; Jakob Zech
    • High-Dimensional Projection Pursuit: Outer Bounds and Applications to Interpolation in Neural Networks
      Kangjie Zhou; Andrea Montanari
    • Memorize to Generalize: On the Necessity of Interpolation in High Dimensional Linear Regression
      Chen Cheng; John Duchi; Rohith Kuditipudi
    • Damped Online Newton Step for Portfolio Selection
      Zakaria Mhammedi; Alexander Rakhlin
    • From Sampling to Optimization on Discrete Domains with Applications to Determinant Maximization
      Nima Anari; Thuy-Duong Vuong