AWARDS
Best Paper Award
Statistical Learning with a Nuisance Component
Dylan Foster, Vasilis Syrgkanis
Best Student Paper Awards
The Complexity of Making the Gradient Small in Stochastic Convex Optimization
Dylan Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake Woodworth
VC Classes are Adversarially Robustly Learnable, but Only Improperly
Omar Montasser, Steve Hanneke, Nathan Srebro
ACCEPTED PAPERS
Uniform concentration and symmetrization for weak interactions
Andreas Maurer, Massimiliano Pontil
Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks
Ohad Shamir
The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling
Christopher Tosh, Sanjoy Dasgupta
Disagreement-Based Combinatorial Pure Exploration: Sample Complexity Bounds and an Efficient Algorithm
Tongyi Cao, Akshay Krishnamurthy
Stochastic Approximation of Smooth and Strongly Convex Functions: Beyond the $O(1/T)$ Convergence Rate
Lijun Zhang, Zhi-Hua Zhou
An Optimal High-Order Tensor Method for Convex Optimization
Bo Jiang, Haoyue Wang, Shuzhong Zhang
Testing Identity of Multidimensional Histograms
Ilias Diakonikolas, Daniel M. Kane, John Peebles
Improved Path-length Regret Bounds for Bandits
Sébastien Bubeck, Yuanzhi Li, Haipeng Luo, Chen-Yu Wei
Sorted Top-k in Rounds
Mark Braverman, Jieming Mao, Yuval Peres
Privately Learning High-Dimensional Distributions
Gautam Kamath, Jerry Li, Vikrant Singhal, Jonathan Ullman
Learning Neural Networks with Two Nonlinear Layers in Polynomial Time
Surbhi Goel, Adam R. Klivans
On the Regret Minimization of Nonconvex Online Gradient Ascent for Online PCA
Dan Garber
Nearly Minimax-Optimal Regret for Linearly Parameterized Bandits
Yingkai Li, Yining Wang, Yuan Zhou
Lower Bounds for Parallel and Randomized Convex Optimization
Jelena Diakonikolas, Cristóbal Guzmán
Learning Two Layer Rectified Neural Networks in Polynomial Time
Ainesh Bakshi, Rajesh Jayaram, David P. Woodruff
Stochastic Gradient Descent Learns State Equations with Nonlinear Activations
Samet Oymak
Learning Ising Models with Independent Failures
Surbhi Goel, Daniel M. Kane, Adam R. Klivans
The Gap Between Model-Based and Model-Free Methods on the Linear Quadratic Regulator: An Asymptotic Viewpoint
Stephen Tu, Benjamin Recht
Tight analyses for non-smooth stochastic gradient descent
Nicholas J. A. Harvey, Christopher Liaw, Yaniv Plan, Sikander Randhawa
Estimating the Mixing Time of Ergodic Markov Chains
Geoffrey Wolfer, Aryeh Kontorovich
Sharp Analysis for Nonconvex SGD Escaping from Saddle Points
Cong Fang, Zhouchen Lin, Tong Zhang
Reasoning in Bayesian Opinion Exchange Networks Is PSPACE-Hard
Jan Hązła, Ali Jadbabaie, Elchanan Mossel, M. Amin Rahimian
Learning in Non-convex Games with an Optimization Oracle
Naman Agarwal, Alon Gonen, Elad Hazan
Depth Separations in Neural Networks: What is Actually Being Separated?
Itay Safran, Ronen Eldan, Ohad Shamir
Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds
Santosh Vempala, John Wilmes
A Universal Algorithm for Variational Inequalities Adaptive to Smoothness and Noise
Francis Bach, Kfir Y. Levy
Multi-armed Bandit Problems with Strategic Arms
Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg
Distribution-Dependent Analysis of Gibbs-ERM Principle
Ilja Kuzborskij, Nicolò Cesa-Bianchi, Csaba Szepesvari
Sample complexity of partition identification using multi-armed bandits
Sandeep Juneja, Subhashini Krishnasamy
The Optimal Approximation Factor in Density Estimation
Olivier Bousquet, Daniel Kane, Shay Moran
Planting trees in graphs, and finding them back
Laurent Massoulié, Ludovic Stephan, Don Towsley
Fast determinantal point processes via distortion-free intermediate sampling
Michal Derezinski
Is your function low dimensional?
Anindya De, Elchanan Mossel, Joe Neeman
Accuracy-Memory Tradeoffs and Phase Transitions in Belief Propagation
Vishesh Jain, Frederic Koehler, Jingbo Liu, Elchanan Mossel
Making the Last Iterate of SGD Information Theoretically Optimal
Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli
Sharp Theoretical Analysis for Nonparametric Testing under Random Projection
Meimei Liu, Zuofeng Shang, Guang Cheng
Robustness of spectral methods for community detection
Ludovic Stephan, Laurent Massoulié
An Information-Theoretic Approach to Minimax Regret in Partial Monitoring
Tor Lattimore, Csaba Szepesvari
Global Convergence of the EM Algorithm for Mixtures of Two Component Linear Regression
Jeongyeol Kwon, Wei Qian, Constantine Caramanis, Yudong Chen, Damek Davis
Near-optimal method for highly smooth convex optimization
Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford
Sampling and Optimization on Convex Sets in Riemannian Manifolds of Non-Negative Curvature
Navin Goyal, Abhishek Shetty
Learning to Prune: Speeding up Repeated Computations
Daniel Alabi, Adam Tauman Kalai, Katrina Ligett, Cameron Musco, Christos Tzamos, Ellen Vitercik
Consistency of Interpolation with Laplace Kernels is a High-Dimensional Phenomenon
Alexander Rakhlin, Xiyu Zhai
On Communication Complexity of Classification Problems
Daniel Kane, Roi Livni, Shay Moran, Amir Yehudayoff
Theoretical guarantees for sampling and inference in generative models with latent diffusions
Belinda Tzen, Maxim Raginsky
Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit
Song Mei, Theodor Misiakiewicz, Andrea Montanari
Batch-Size Independent Regret Bounds for the Combinatorial Multi-Armed Bandit Problem
Nadav Merlis, Shie Mannor
Combinatorial Algorithms for Optimal Design
Vivek Madan, Mohit Singh, Uthaipon Tantipongpipat, Weijun Xie
Minimax experimental design: Bridging the gap between statistical and worst-case approaches to least squares regression
Michal Derezinski, Kenneth L. Clarkson, Michael W. Mahoney, Manfred K. Warmuth
Budgeted Kernelized Bandits for Large Scale Gaussian Process Optimization
Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco
Combining Online Learning Guarantees
Ashok Cutkosky
Estimation of smooth densities in Wasserstein distance
Jonathan Weed, Quentin Berthet
Artificial Constraints and Hints for Unbounded Online Learning
Ashok Cutkosky
Inference under Local Constraints: Lower Bounds from Chi-Square Contractions
Jayadev Acharya, Clément L. Canonne, Himanshu Tyagi
Communication and Memory Efficient Testing of Discrete Distributions
Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, Sankeerth Rao
Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions
Adrien Taylor, Francis Bach
Learning rates for Gaussian mixtures under group invariance
Victor-Emmanuel Brunel
Optimal Tensor Methods in Smooth Convex and Uniformly Convex Optimization
Alexander Gasnikov, Pavel Dvurechensky, Eduard Gorbunov, Evgeniya Vorontsova, Daniil Selikhanovych, Cesar A. Uribe
Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting
Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins, Chicheng Zhang
Model-based RL in Contextual Decision Processes: PAC bounds and Exponential Improvements over Model-free Approaches
Wen Sun, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford
A Robust Spectral Algorithm for Overcomplete Tensor Decomposition
Samuel B. Hopkins, Tselil Schramm, Jonathan Shi
Discrepancy, Coresets, and Sketches in Machine Learning
Zohar Karnin, Edo Liberty
When can unlabeled data improve the learning rate?
Christina Göpfert, Shai Ben-David, Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Ruth Urner
Beyond Least-Squares: Fast Rates for Regularized Empirical Risk Minimization through Self-Concordance
Ulysse Marteau-Ferey, Dmitrii M. Ostrovskii, Francis Bach, Alessandro Rudi
A Theory of Selective Prediction
Mingda Qiao, Gregory Valiant
Reconstructing Trees from Traces
Sami Davies, Miklos Racz, Cyrus Rashtchian
Solving Empirical Risk Minimization in the Current Matrix Multiplication Time
Yin Tat Lee, Zhao Song, Qiuyi Zhang
Nonconvex sampling with the Metropolis-adjusted Langevin algorithm
Oren Mangoubi, Nisheeth K. Vishnoi
Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization
Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang
On Mean Estimation for General Norms with Statistical Queries
Jerry Li, Aleksandar Nikolov, Ilya Razenshteyn, Erik Waingarten
Maximum Entropy Distributions: Bit Complexity and Stability
Damian Straszak, Nisheeth K. Vishnoi
Classification with unknown class conditional label noise on non-compact feature spaces
Henry Reeve, Ata Kaban
Testing Markov Chains Without Hitting
Yeshwanth Cherapanamjeri, Peter Bartlett
Finite-Time Error Bounds For Linear Stochastic Approximation and TD Learning
R. Srikant, Lei Ying
Sum-of-squares meets square loss: Fast rates for agnostic tensor completion
Dylan Foster, Andrej Risteski
A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal, and Parameter-free
Yifang Chen, Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei
Statistical Learning with a Nuisance Component
Dylan Foster, Vasilis Syrgkanis
Optimal Average-Case Reductions to Sparse PCA: From Weak Assumptions to Strong Hardness
Matthew Brennan, Guy Bresler
Affine Invariant Covariance Estimation for Heavy-Tailed Distributions
Dmitrii M. Ostrovskii, Alessandro Rudi
Learning Linear Dynamical Systems with Semi-Parametric Least Squares
Max Simchowitz, Ross Boczar, Benjamin Recht
Non-asymptotic Analysis of Biased Stochastic Approximation Scheme
Belhal Karimi, Blazej Miasojedow, Eric Moulines, Hoi-To Wai
Adaptive Hard Thresholding for Near-optimal Consistent Robust Regression
Arun Sai Suggala, Kush Bhatia, Pradeep Ravikumar, Prateek Jain
On the Performance of Thompson Sampling on Logistic Bandits
Shi Dong, Tengyu Ma, Benjamin Van Roy
Lower Bounds for Locally Private Estimation via Communication Complexity
John Duchi, Ryan Rogers
Lower bounds for testing graphical models: colorings and antiferromagnetic Ising models
Ivona Bezakova, Antonio Blanca, Zongchen Chen, Daniel Stefankovic, Eric Vigoda
Space lower bounds for linear prediction
Yuval Dagan, Gil Kur, Ohad Shamir
Testing Mixtures of Discrete Distributions
Maryam Aliakbarpour, Ravi Kumar, Ronitt Rubinfeld
Universality of Computational Lower Bounds for Submatrix Detection
Matthew Brennan, Guy Bresler, Wasim Huleihel
How Hard is Robust Mean Estimation?
Samuel B. Hopkins, Jerry Li
Approximate Guarantees for Dictionary Learning
Aditya Bhaskara, Wai Ming Tai
Private Center Points and Learning of Halfspaces
Amos Beimel, Shay Moran, Kobbi Nissim, Uri Stemmer
Fast Mean Estimation with Sub-Gaussian Rates
Yeshwanth Cherapanamjeri, Nicolas Flammarion, Peter Bartlett
Better Algorithms for Stochastic Bandits with Adversarial Corruptions
Anupam Gupta, Tomer Koren, Kunal Talwar
Sample-Optimal Low-Rank Approximation of Distance Matrices
Piotr Indyk, Ali Vakilian, Tal Wagner, David Woodruff
High probability generalization bounds for uniformly stable algorithms with nearly optimal rate
Vitaly Feldman, Jan Vondrak
Computational Limitations in Robust Classification and Win-Win Results
Akshay Degwekar, Preetum Nakkiran, Vinod Vaikuntanathan
The implicit bias of gradient descent on nonseparable data
Ziwei Ji, Matus Telgarsky
Bandit Principal Component Analysis
Wojciech Kotlowski, Gergely Neu
A near-optimal algorithm for approximating the John Ellipsoid
Michael B. Cohen, Ben Cousins, Yin Tat Lee, Xin Yang
Pure Entropic Regularization for MTS
Christian Coester, James R. Lee
Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT
Andreas Anastasiou, Krishnakumar Balasubramanian, Murat Erdogdu
Achieving the Bayes Error Rate in Stochastic Block Model by SDP, Robustly
Yingjie Fei, Yudong Chen
Adaptively Tracking the Best Bandit Arm with an Unknown Number of Distribution Changes
Peter Auer, Pratik Gajane, and Ronald Ortner
VC Classes are Adversarially Robustly Learnable, but Only Improperly
Omar Montasser, Steve Hanneke, Nathan Srebro
The Complexity of Making the Gradient Small in Stochastic Convex Optimization
Dylan Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake Woodworth
Towards Testing Monotonicity of Distributions Over General Posets
Maryam Aliakbarpour, Themis Gouleakis, John Peebles, Ronitt Rubinfeld, Anak Yodpinyanee
On the Computational Power of Online Gradient Descent
Vaggos Chatziafratis, Tim Roughgarden, Joshua R. Wang
How do infinite width bounded norm networks look in function space?
Pedro Savarese, Itay Evron, Daniel Soudry, Nathan Srebro
Vortices Instead of Equilibria in MinMax Optimization: Chaos and Butterfly Effects of Online Learning in Zero-Sum Games
Yun Kuen Cheung, Georgios Piliouras
Active Regression via Linear-Sample Sparsification
Xue Chen, Eric Price
Faster Algorithms for High-Dimensional Robust Covariance Estimation
Yu Cheng, Ilias Diakonikolas, Rong Ge, David P. Woodruff
Parameter-free Online Convex Optimization with Sub-Exponential Noise
Kwang-Sung Jun, Francesco Orabona
Computationally and Statistically Efficient Truncated Regression
Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Emmanouil Zampetakis
Lipschitz Adaptivity with Multiple Learning Rates in Online Learning
Zakaria Mhammedi, Wouter M. Koolen, Tim van Erven
The All-or-Nothing Phenomenon in Sparse Linear Regression
Galen Reeves, Jiaming Xu, Ilias Zadik
A Rank-1 Sketch for Matrix Multiplicative Weights
Yair Carmon, John C. Duchi, Aaron Sidford, Kevin Tian
Optimal Learning for Mallows Block Model
Robert Busa-Fekete, Dimitris Fotakis, Balazs Szorenyi, Manolis Zampetakis
Generalization and learning under Dobrushin's condition
Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Siddhartha Jayanti
Call for papers
The 32nd Annual Conference on Learning Theory (COLT 2019) will take place in Phoenix, Arizona, June 25-28, 2019, as part of the ACM Federated Computing Research Conference, which also includes EC and STOC. We invite submissions of papers addressing theoretical aspects of machine learning and related topics. We strongly support a broad definition of learning theory, including, but not limited to:
- Design and analysis of learning algorithms
- Statistical and computational complexity of learning
- Optimization methods for learning
- Unsupervised and semi-supervised learning
- Interactive learning, planning and control, and reinforcement learning
- Online learning and decision-making under uncertainty
- Interactions of learning theory with other mathematical fields
- Artificial neural networks, including deep learning
- High-dimensional and non-parametric statistics
- Learning with algebraic or combinatorial structure
- Bayesian methods in learning
- Game theory and learning
- Learning with system constraints (e.g., privacy, computational, memory, communication)
- Learning from complex data: e.g., networks, time series
- Learning in other settings: e.g., computational social science, economics
Submissions by authors who are new to COLT are encouraged. While the primary focus of the conference is theoretical, the authors may support their analysis by including relevant experimental results.
All accepted papers will be presented in a single track at the conference. At least one author of each paper should be present at the conference to present the work. Accepted papers will be published electronically in the Proceedings of Machine Learning Research (PMLR). The authors of accepted papers will have the option of opting out of the proceedings in favor of a 1-page extended abstract. The full paper reviewed for COLT will then be placed on the arXiv repository.
Paper Awards:
COLT will award both best paper and best student paper awards. To be eligible for the best student paper award, the primary contributor(s) must be full-time students at the time of submission. For eligible papers, authors must indicate at submission time if they wish their paper to be considered for a student paper award. The program committee may decline to make these awards, or may split them among several papers.
Dual Submissions:
Submissions that are substantially similar to papers that have been previously published, accepted for publication, or submitted in parallel to other peer-reviewed conferences with proceedings may not be submitted to COLT, with the sole exception described below. The same policy applies to journals, unless the submission is a short version of a paper submitted to a journal, and not yet published. Authors must declare such dual submissions through the submission server.
In coordination with the STOC 2019 PC chair, we allow papers submitted to STOC to be simultaneously submitted to COLT provided that the authors (1) explicitly state that the submission is under consideration for STOC, (2) immediately withdraw the COLT submission if the STOC submission is accepted, and (3) agree to allow the (anonymized) STOC reviews to be provided to the COLT reviewers once they are available. The authors will be allowed to submit a rebuttal to the STOC reviews a few days after the STOC reviews are available (via the submission server).
Formatting:
Submissions are limited to 12 PMLR-formatted pages, plus unlimited additional pages for references and appendices. All details, proofs and derivations required to substantiate the results must be included in the submission, possibly in the appendices. However, the contribution, novelty and significance of submissions will be judged primarily based on the main text (without appendices), and so enough details, including proof details, must be provided in the main text to convince the reviewers of the submissions’ merits. Formatting and submission instructions can be found on the conference website: http://learningtheory.org/colt2019/.
Rebuttal Phase:
As in previous years, there will be a rebuttal phase during the review process. Initial reviews will be sent to authors before final decisions have been made. Authors will have the opportunity to provide a short response on the PC’s initial evaluation.
Important Dates:
All dates are in 2019.
- Paper submission deadline: February 1, 11:00 PM EST
- Author feedback: March 22-27
- Author notification: April 17
- Conference: June 25-28 (welcome reception on June 24)