Accepted Papers

  • Query complexity of least absolute deviation regression via robust uniform convergence
    Xue Chen; Michal Derezinski
  • SGD Generalizes Better Than GD (And Regularization Doesn't Help)
    Idan Amir; Tomer Koren; Roi Livni
  • Boosting in the Presence of Massart Noise
    Ilias Diakonikolas; Russell Impagliazzo; Daniel M Kane; Rex Lei; Jessica Sorrell; Christos Tzamos
  • Asymptotically Optimal Information-Directed Sampling
    Johannes Kirschner; Tor Lattimore; Claire Vernade; Csaba Szepesvari
  • Adaptive Discretization for Adversarial Lipschitz Bandits
    Chara Podimata; Alex Slivkins
  • A Priori Generalization Analysis of the Deep Ritz Method for Solving High Dimensional Elliptic Partial Differential Equations
    Yulong Lu; Jianfeng Lu; Min Wang
  • Fast Dimension Independent Private AdaGrad on Publicly Estimated Subspaces
    Peter Kairouz; Monica Ribero Diaz; Keith Rush; Abhradeep Thakurta
  • Learning sparse mixtures of permutations from noisy information
    Rocco Servedio; Anindya De; Ryan O'Donnell
  • The Sample Complexity of Robust Covariance Testing
    Ilias Diakonikolas; Daniel M Kane
  • Generalizing Complex Hypotheses on Product Distributions: Auctions, Prophet Inequalities, and Pandora's Problem
    Chenghao Guo; Zhiyi Huang; Zhihao Gavin Tang; Xinzhi Zhang
  • Sparse sketches with small inversion bias
    Michal Derezinski; Zhenyu Liao; Edgar Dobriban; Michael Mahoney
  • Efficient Algorithms for Learning from Coarse Labels
    Dimitris Fotakis; Alkis Kalavasis; Vasilis Kontonis; Christos Tzamos
  • Towards a Dimension-Free Understanding of Adaptive Linear Control
    Juan C Perdomo; Max Simchowitz; Alekh Agarwal; Peter Bartlett
  • Last-iterate Convergence of Decentralized Optimistic Gradient Descent/Ascent in Infinite-horizon Competitive Markov Games
    Chen-Yu Wei; Chung-Wei Lee; Mengxiao Zhang; Haipeng Luo
  • Exponential savings in agnostic active learning through abstention
    Nikita Puchkin; Nikita Zhivotovskiy
  • Black-Box Control for Linear Dynamical Systems
    Xinyi Chen; Elad Hazan
  • Sequential prediction under log-loss and misspecification
    Meir Feder; Yury Polyanskiy
  • Majorizing Measures, Sequential Complexities, and Online Learning
    Adam Block; Yuval Dagan; Alexander Rakhlin
  • On the (asymptotic) convergence of Stochastic Gradient Descent and Stochastic Heavy Ball
    Othmane Sebbouh; Robert M Gower; Aaron Defazio
  • Mirror Descent and the Information Ratio
    Tor Lattimore; Andras Gyorgy
  • It was "all" for "nothing": sharp phase transitions for noiseless discrete channels
    Ilias Zadik; Jonathan Niles-Weed
  • Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks
    Cong Fang; Jason Lee; Pengkun Yang; Tong Zhang
  • Kernel Thinning
    Raaz Dwivedi; Lester Mackey
  • Learning from Censored and Dependent Data: The case of Linear Dynamics
    Orestis Plevrakis
  • Optimizing Optimizers: Regret-optimal gradient descent algorithms
    Philippe Casgrain; Anastasis Kratsios
  • The Effects of Mild Over-parameterization on the Optimization Landscape of Shallow ReLU Neural Networks
    Itay M Safran; Gilad Yehudai; Ohad Shamir
  • Non-Euclidean Differentially Private Stochastic Convex Optimization
    Raef Bassily; Cristobal Guzman; Anupama Nandi
  • Convergence rates and approximation results for SGD and its continuous-time counterpart
    Xavier Fontaine; Valentin De Bortoli; Alain Durmus
  • Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information
    Emmanouil Vasileios Vlatakis-Gkaragkounis; Angeliki Giannou; Panayotis Mertikopoulos
  • The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities
    Waïss Azizian; Franck Iutzeler; Jérôme Malick; Panayotis Mertikopoulos
  • Weak learning convex sets under normal distributions
    Anindya De; Rocco Servedio
  • Online Learning from Optimal Actions
    Omar Besbes; Yuri Fonseca; Ilan Lobel
  • The Connection Between Approximation, Depth Separation and Learnability in Neural Networks
    Eran Malach; Gilad Yehudai; Shai Shalev-Shwartz; Ohad Shamir
  • Structured Logconcave Sampling with a Restricted Gaussian Oracle
    Yin Tat Lee; Ruoqi Shen; Kevin Tian
  • Quantifying Variational Approximation for Log-Partition Function
    Romain Cosson; Devavrat Shah
  • Random Graph Matching with Improved Noise Robustness
    Cheng Mao; Mark Rudelson; Konstantin Tikhomirov
  • On the Approximation Power of Two-Layer Networks of Random ReLUs
    Daniel Hsu; Clayton H Sanford; Rocco Servedio; Emmanouil Vasileios Vlatakis-Gkaragkounis
  • Spectral Planting and the Hardness of Refuting Cuts, Colorability, and Communities in Random Graphs
    Afonso S Bandeira; Jess Banks; Dmitriy Kunisky; Christopher Moore; Alex Wein
  • Impossibility of Partial Recovery in the Graph Alignment Problem
    Luca Ganassali; Laurent Massoulie; Marc Lelarge
  • PAC-Bayes, MAC-Bayes and Conditional Mutual Information: Fast rate bounds that handle general VC classes
    Peter Grunwald; Thomas Steinke; Lydia Zakynthinou
  • Rank-one matrix estimation: analytic time evolution of gradient descent dynamics
    Antoine Bodin; Nicolas Macris
  • On Empirical Bayes Variational Autoencoder: Generalization and Non-Asymptotic Risk Bound
    Rong Tang; Yun Yang
  • Improved Regret for Zeroth-Order Stochastic Convex Bandits
    Tor Lattimore; Andras Gyorgy
  • Adversarially Robust Learning with Unknown Perturbation Sets
    Omar Montasser; Steve Hanneke; Nathan Srebro
  • Average-Case Communication Complexity of Statistical Problems
    Cyrus Rashtchian; David Woodruff; Peng Ye; Hanlin Zhu
  • Impossible Tuning Made Possible: A New Expert Algorithm and Its Applications
    Chen-Yu Wei; Haipeng Luo; Liyu Chen
  • A Dimension-free Computational Upper-bound for Smooth Optimal Transport Estimation
    Adrien Vacher; Boris Muzellec; Alessandro Rudi; Francis Bach; Francois-Xavier Vialard
  • Agnostic Proper Learning of Halfspaces under Gaussian Marginals
    Ilias Diakonikolas; Daniel M Kane; Vasilis Kontonis; Christos Tzamos; Nikos Zarifis
  • A Theory of Heuristic Learnability
    Mikito Nanashima
  • Group testing and local search: is there a computational-statistical gap?
    Fotis Iliopoulos; Ilias Zadik
  • Moment Multicalibration for Uncertainty Estimation
    Christopher Jung; Changhwa Lee; Mallesh Pai; Aaron Roth; Rakesh Vohra
  • Robust Online Convex Optimization in the Presence of Outliers
    Tim van Erven; Sarah Sachs; Wouter M Koolen; Wojciech Kotlowski
  • From Local Pseudorandom Generators to Hardness of Learning
    Amit Daniely; Gal Vardi
  • Frank-Wolfe with Nearest Extreme Point Oracle
    Dan Garber; Noam Wolf
  • Cooperative and Stochastic Multi-Player Multi-Armed Bandit: Optimal Regret With Neither Communication Nor Collisions
    Mark Sellke; Sebastien Bubeck; Thomas Budzinski
  • Deterministic Finite-Memory Bias Estimation
    Tomer Berg; Ofer Shayevitz; Or Ordentlich
  • Adaptive Learning in Continuous Games: Optimal Regret Bounds and Convergence to Nash Equilibrium
    Yu-Guan Hsieh; Kimon Antonakopoulos; Panayotis Mertikopoulos
  • Bounded Memory Active Learning through Enriched Queries
    Max Hopkins; Daniel Kane; Shachar Lovett; Michal Moshkovitz
  • Nonparametric Regression with Shallow Overparametrized Neural Networks Trained by GD with Early Stopping
    Ilja Kuzborskij; Csaba Szepesvari
  • Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss
    Yair Carmon; Arun Jambulapati; Yujia Jin; Aaron Sidford
  • Improved Algorithms for Efficient Active Learning Halfspaces with Massart and Tsybakov noise
    Chicheng Zhang; Yinan Li
  • Size and Depth Separation in Approximating Natural Functions with Neural Networks
    Gal Vardi; Daniel Reichman; Toniann Pitassi; Ohad Shamir
  • A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network
    Mo Zhou; Rong Ge; Chi Jin
  • Reduced-Rank Regression with Operator Norm Error
    Praneeth Kacham; David Woodruff
  • Is Reinforcement Learning More Difficult Than Bandits? A Near-optimal Algorithm Escaping the Curse of Horizon
    Zihan Zhang; Xiangyang Ji; Simon Du
  • The Sparse Vector Technique, Revisited
    Haim Kaplan; Yishay Mansour; Uri Stemmer
  • Double Explore-then-Commit: Asymptotic Optimality and Beyond
    Tianyuan Jin; Pan Xu; Xiaokui Xiao; Quanquan Gu
  • Softmax Policy Gradient Methods Can Take Exponential Time to Converge
    Gen Li; Yuting Wei; Yuejie Chi; Yuantao Gu; Yuxin Chen
  • Approximation Algorithms for Socially Fair Clustering
    Yury Makarychev; Ali Vakilian
  • Random Coordinate Langevin Monte Carlo
    Zhiyan Ding; Qin Li; Jianfeng Lu; Stephen J Wright
  • On Avoiding the Union Bound When Answering Multiple Differentially Private Queries
    Badih Ghazi; Ravi Kumar; Pasin Manurangsi
  • A Statistical Taylor Theorem and Extrapolation of Truncated Densities
    Constantinos Daskalakis; Vasilis Kontonis; Christos Tzamos; Emmanouil Zampetakis
  • Learning in Matrix Games can be Arbitrarily Complex
    Gabriel P Andrade; Rafael Frongillo; Georgios Piliouras
  • SGD in the Large: Average-case Analysis, Asymptotics, and Stepsize Criticality
    Courtney Paquette; Kiwon Lee; Fabian Pedregosa; Elliot Paquette
  • Reconstructing weighted voting schemes from partial information about their power indices
    Emmanouil Vasileios Vlatakis-Gkaragkounis; Huck Bennett; Anindya De; Rocco Servedio
  • Shape Matters: Understanding the Implicit Bias of the Noise Covariance
    Jeff Z. HaoChen; Colin Wei; Jason Lee; Tengyu Ma
  • Optimal Dynamic Regret in Exp-Concave Online Learning
    Dheeraj Baby; Yu-Xiang Wang
  • The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood
    Nima Anari; Moses Charikar; Kirankumar Shiragur; Aaron Sidford
  • Learning to Stop with Surprisingly Few Samples
    Tianyi Zhang; Daniel Russo; Assaf Zeevi
  • Efficient Bandit Convex Optimization: Beyond Linear Losses
    Arun Sai Suggala; Pradeep Ravikumar; Praneeth Netrapalli
  • Nearly Minimax Optimal Reinforcement Learning for Linear Mixture MDPs
    Dongruo Zhou; Quanquan Gu; Csaba Szepesvari
  • Provable Memorization via Deep Neural Networks using Sub-linear Parameters
    Sejun Park; Jaeho Lee; Chulhee Yun; Jinwoo Shin
  • Concentration of Non-Isotropic Random Tensors with Applications to Learning and Empirical Risk Minimization
    Mathieu Even
  • On the Minimal Error of Empirical Risk Minimization
    Gil Kur; Alexander Rakhlin
  • Benign Overfitting of Constant-Stepsize SGD for Linear Regression
    Difan Zou; Jingfeng Wu; Vladimir Braverman; Quanquan Gu; Sham Kakade
  • Cautiously Optimistic Policy Optimization and Exploration with Linear Function Approximation
    Andrea Zanette; Ching-An Cheng; Alekh Agarwal
  • Near Optimal Distributed Learning of Halfspaces with Two Parties
    Mark Braverman; Gillat Kol; Shay Moran; Raghuvansh R. Saxena
  • Exponential Weights Algorithms for Selective Learning
    Mingda Qiao; Gregory Valiant
  • The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication
    Blake E Woodworth; Brian Bullins; Ohad Shamir; Nathan Srebro
  • Stochastic Approximation for Online Tensorial Independent Component Analysis
    Chris Junchi Li; Michael Jordan
  • Regret Minimization in Heavy-Tailed Bandits
    Shubhada Agrawal; Sandeep K Juneja; Wouter M Koolen
  • Differentially Private Nonparametric Regression Under a Growth Condition
    Noah Golowich
  • Robust learning under clean-label attack
    Avrim Blum; Steve Hanneke; Jian Qian; Han Shao
  • On Query-efficient Planning in MDPs under Linear Realizability of the Optimal State-value Function
    Gellert Weisz; Philip Amortila; Barnabás Janzer; Yasin Abbasi-Yadkori; Nan Jiang; Csaba Szepesvari
  • Adversarially Robust Low Dimensional Representations
    Pranjal Awasthi; Vaggos Chatziafratis; Xue Chen; Aravindan Vijayaraghavan
  • Learning to Sample from Censored Markov Random Fields
    Ankur Moitra; Elchanan Mossel; Colin P Sandon
  • Hypothesis testing with low-degree polynomials in the Morris class of exponential families
    Dmitriy Kunisky
  • Johnson-Lindenstrauss Transforms with Best Confidence
    Maciej Skorski
  • Near-Optimal Entrywise Sampling of Numerically Sparse Matrices
    Vladimir Braverman; Robert Krauthgamer; Aditya R Krishnan; Shay Sapir
  • The Optimality of Polynomial Regression for Agnostic Learning under Gaussian Marginals
    Ilias Diakonikolas; Daniel M Kane; Thanasis Pittas; Nikos Zarifis
  • Tsallis-INF: Improved Robustness to Adversarial Corruptions in Stochastic Multi-armed Bandits and Beyond
    Saeed Masoudian; Yevgeny Seldin
  • Projected Stochastic Gradient Langevin Algorithms for Constrained Sampling and Non-Convex Learning
    Andrew Lamperski
  • Lazy OCO: Online Convex Optimization on a Switching Budget
    Uri Sherman; Tomer Koren
  • Learning and testing junta distributions with subcube conditioning
    Xi Chen; Rajesh Jayaram; Amit Levi; Erik Waingarten
  • Fast Rates for the Regret of Offline Reinforcement Learning
    Yichun Hu; Nathan Kallus; Masatoshi Uehara
  • Outlier-Robust Learning of Ising Models Under Dobrushin's Condition
    Ilias Diakonikolas; Daniel M. Kane; Alistair Stewart; Yuxin Sun
  • Machine unlearning via Algorithmic stability
    Enayat Ullah; Tung Mai; Anup Rao; Ryan A. Rossi; Raman Arora
  • Instance-Dependent Complexity of Contextual Bandits and Reinforcement Learning: A Disagreement-Based Perspective
    Dylan Foster; Alexander Rakhlin; David Simchi-Levi; Yunzong Xu
  • Online Learning with Simple Predictors and a Combinatorial Characterization of Minimax in 0/1 Games
    Steve Hanneke; Roi Livni; Shay Moran
  • Adaptivity in Adaptive Submodularity
    Hossein Esfandiari; Amin Karbasi; Vahab Mirrokni
  • Non-asymptotic approximations of neural networks by Gaussian processes
    Ronen Eldan; Dan Mikulincer; Tselil Schramm
  • Breaking The Dimension Dependence in Sparse Distribution Estimation under Communication Constraints
    Wei-Ning Chen; Peter Kairouz; Ayfer Ozgur
  • A Law of Robustness for Two-Layers Neural Networks
    Sebastien Bubeck; Yuanzhi Li; Dheeraj M Nagaraj
  • Fine-Grained Gap-Dependent Bounds for Tabular MDPs via Adaptive Multi-Step Bootstrap
    Haike Xu; Tengyu Ma; Simon Du
  • On the Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness
    Murat A Erdogdu; Rasa Hosseinzadeh
  • Minimax Regret for Stochastic Shortest Path with Adversarial Costs and Known Transition
    Liyu Chen; Haipeng Luo; Chen-Yu Wei
  • Streaming k-PCA: Efficient guarantees for Oja's algorithm, beyond rank-one updates
    De Huang; Jonathan Niles-Weed; Rachel Ward
  • On the Stability of Random Matrix Product with Markovian Noise: Application to Linear Stochastic Approximation and TD Learning
    Alain Durmus; Eric Moulines; Alexey Naumov; Sergey Samsonov; Hoi-To Wai
  • Implicit Regularization in ReLU Networks with the Square Loss
    Gal Vardi; Ohad Shamir
  • Source Identification for Mixtures of Product Distributions
    Spencer Gordon; Bijan H Mazaheri; Yuval Rabani; Leonard Schulman
  • Multiplayer Bandit Learning, from Competition to Cooperation
    Simina Branzei; Yuval Peres
  • Towards a Query-Optimal and Time-Efficient Algorithm for Clustering with a Faulty Oracle
    Pan Peng; Jiapeng Zhang
  • Learning with invariances in random features and kernel models
    Song Mei; Theodor Misiakiewicz; Andrea Montanari
  • Statistical Query Algorithms and Low Degree Tests Are Almost Equivalent
    Matthew S Brennan; Guy Bresler; Sam Hopkins; Jerry Li; Tselil Schramm
  • Corruption-robust exploration in episodic reinforcement learning
    Thodoris Lykouris; Max Simchowitz; Alex Slivkins; Wen Sun
  • Parameter-Free Multi-Armed Bandit Algorithms with Hybrid Data-Dependent Regret Bounds
    Shinji Ito
  • Non-stationary Reinforcement Learning without Prior Knowledge: an Optimal Black-box Approach
    Chen-Yu Wei; Haipeng Luo
  • Exponentially Improved Dimensionality Reduction for ℓ1: Subspace Embeddings and Independence Testing
    Taisuke Yasuda; David Woodruff; Yi Li
  • Fast Rates for Structured Prediction
    Vivien A Cabannes; Alessandro Rudi; Francis Bach
  • Online Markov Decision Processes with Aggregate Bandit Feedback
    Alon Cohen; Haim Kaplan; Tomer Koren; Yishay Mansour
  • Stochastic block model entropy and broadcasting on trees with survey
    Emmanuel Abbe; Elisabetta Cornacchia; Yuzhou Gu; Yury Polyanskiy
  • Information-Theoretic Generalization Bounds for Stochastic Gradient Descent
    Gergely Neu
  • When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations?
    Niladri S Chatterji; Philip M Long; Peter Bartlett
  • Exact Recovery of Clusters in Finite Metric Spaces Using Oracle Queries
    Marco Bressan; Nicolò Cesa-Bianchi; Silvio Lattanzi; Andrea Paudice
  • Optimal dimension dependence of the Metropolis-Adjusted Langevin Algorithm
    Sinho Chewi; Chen Lu; Kwangjun Ahn; Xiang Cheng; Thibaut Le Gouic; Philippe Rigollet
  • Functions with average smoothness: structure, algorithms, and learning
    Yair Ashlagi; Lee-Ad Gottlieb; Aryeh Kontorovich