Accepted Papers

    • Pruning is Optimal for Learning Sparse Features in High-Dimensions
      Vural, Nuri Mert; Erdogdu, Murat A
    • Sampling from the Mean-Field Stationary Distribution
      Kook, Yunbum; Zhang, Matthew; Chewi, Sinho; Erdogdu, Murat A; Li, Mufan
    • Minimax Linear Regression under the Quantile Risk
      El Hanchi, Ayoub; Maddison, Chris; Erdogdu, Murat A
    • Fast parallel sampling under isoperimetry
      Anari, Nima; Chewi, Sinho; Vuong, Thuy-Duong
    • Online Structured Prediction with Fenchel–Young Losses and Improved Surrogate Regret for Online Multiclass Classification with Logistic Loss
      Sakaue, Shinsaku; Bao, Han; Tsuchiya, Taira; Oki, Taihei
    • Mode Identification with Partial Feedback
      Arnal, Charles A; Cabannes, Vivien A; Perchet, Vianney
    • Optimal Multi-Distribution Learning
      Zhang, Zihan; Zhan, Wenhao; Chen, Yuxin; Du, Simon; Lee, Jason
    • Settling the Sample Complexity of Online Reinforcement Learning
      Zhang, Zihan; Chen, Yuxin; Lee, Jason; Du, Simon
    • Some Constructions of Private, Efficient, and Optimal $K$-Norm and Elliptic Gaussian Noise
      Joseph, Matthew; Yu, Alexander
    • Oracle-Efficient Hybrid Online Learning with Unknown Distribution
      Wu, Changlong; Sima, Jin; Szpankowski, Wojciech
    • The sample complexity of multi-distribution learning
      Peng, Binghui
    • Fast sampling from constrained spaces using the Metropolis-adjusted Mirror Langevin algorithm
      Srinivasan, Vishwak; Wibisono, Andre; Wilson, Ashia
    • A Unified Characterization of Private Learnability via Graph Theory
      Alon, Noga; Moran, Shay; Schefler, Hilla; Yehudayoff, Amir
    • Topological Expressivity of ReLU Neural Networks
      Ergen, Ekin; Grillo, Moritz
    • Exact Mean Square Linear Stability Analysis for SGD
      Mulayoff, Rotem; Michaeli, Tomer
    • Computation-information gap in high-dimensional clustering
      Even, Bertrand; Giraud, Christophe; Verzelen, Nicolas
    • Mitigating Covariate Shift in Misspecified Regression with Applications to Reinforcement Learning
      Amortila, Philip; Cao, Tongyi; Krishnamurthy, Akshay
    • Metric Clustering and MST with Strong and Weak Distance Oracles
      Jayaram, Rajesh; Wang, Chen; Dharangutte, Prathamesh; Bateni, MohammadHossein
    • The role of shared randomness in quantum state certification with unentangled measurements
      Liu, Yuhan; Acharya, Jayadev
    • Refined Sample Complexity for Markov Games with Independent Linear Function Approximation
      Dai, Yan; Cui, Qiwen; Du, Simon
    • Non-Clashing Teaching Maps for Balls in Graphs
      Chalopin, Jérémie; Chepoi, Victor; Mc Inerney, Fionn; Ratel, Sébastien
    • Follow-the-Perturbed-Leader with Fréchet-type Tail Distributions: Optimality in Adversarial Bandits and Best-of-Both-Worlds
      Lee, Jongyeong; Honda, Junya; Ito, Shinji; Oh, Min-hwan
    • Physics-informed machine learning as a kernel method
      Doumèche, Nathan; Bach, Francis; Biau, Gérard; Boyer, Claire
    • Minimax-Optimal Reward-Agnostic Exploration in Reinforcement Learning
      Li, Gen; Yan, Yuling; Chen, Yuxin; Fan, Jianqing
    • Efficient Algorithms for Attributed Graph Alignment with Vanishing Edge Correlation
      Wang, Ziao; Wang, Weina; Wang, Lele
    • On the Distance from Calibration in Sequential Prediction
      Qiao, Mingda; Zheng, Letian
    • Majority-of-Three: The Simplest Optimal Learner?
      Aden-Ali, Ishaq; Høgsgaard, Mikael Møller; Green Larsen, Kasper; Zhivotovskiy, Nikita
    • Provable Advantage in Quantum PAC Learning
      Salmon, Wilfred A; Strelchuk, Sergii; Gur, Tom
    • Simple online learning with consistent oracle
      Kozachinskiy, Alexander; Steifer, Tomasz
    • Smooth Lower Bounds for Differentially Private Algorithms via Padding-and-Permuting Fingerprinting Codes
      Tsfadia, Eliad; Peter, Naty; Ullman, Jonathan
    • Omnipredictors for regression and the approximate rank of convex functions
      Gopalan, Parikshit; Okoroafor, Princewill; Raghavendra, Prasad; Shetty, Abhishek; Singhal, Mihir
    • Faster Sampling without Isoperimetry via Diffusion-based Monte Carlo
      Huang, Xunpeng; Zou, Difan; Dong, Hanze; Ma, Yian; Zhang, Tong
    • Nearly Optimal Regret for Decentralized Online Convex Optimization
      Wan, Yuanyu; Wei, Tong; Song, Mingli; Zhang, Lijun
    • Correlated Binomial Process
      Blanchard, Moise; Cohen, Doron; Kontorovich, Aryeh
    • Spectral Estimators for Structured Generalized Linear Models via Approximate Message Passing
      Zhang, Yihan; Ji, Hong Chang; Venkataramanan, Ramji; Mondelli, Marco
    • Mirror Descent Algorithms with Nearly Dimension-Independent Rates for Differentially-Private Stochastic Saddle-Point Problems
      Gonzalez Lara, Tomas C; Guzman, Cristobal; Paquette, Courtney
    • Accelerated Parameter-Free Stochastic Optimization
      Kreisler, Itai; Ivgi, Maor; Hinder, Oliver; Carmon, Yair
    • The Price of Adaptivity in Stochastic Convex Optimization
      Carmon, Yair; Hinder, Oliver
    • Statistical curriculum learning: An elimination algorithm achieving an oracle risk
      Cohen, Omer; Meir, Ron; Weinberger, Nir
    • The Power of an Adversary in Glauber Dynamics
      Chin, Byron; Moitra, Ankur; Mossel, Elchanan; Sandon, Colin P
    • Apple Tasting: Combinatorial Dimensions and Minimax Rates
      Raman, Vinod; Subedi, Unique; Raman, Ananth S; Tewari, Ambuj
    • Optimistic Rates for Learning from Label Proportions
      Li, Gene; Chen, Lin; Javanmard, Adel; Mirrokni, Vahab
    • Online Learning with Set-valued Feedback
      Subedi, Unique; Raman, Vinod; Tewari, Ambuj
    • Online Policy Optimization in Unknown Nonlinear Systems
      Lin, Yiheng; Preiss, James A; Xie, Fengze; Anand, Emile T; Chung, Soon-Jo; Yue, Yisong; Wierman, Adam
    • Community detection in the hypergraph stochastic block model and reconstruction on hypertrees
      Gu, Yuzhou; Pandey, Aaradhya
    • Autobidders with Budget and ROI Constraints: Efficiency, Regret, and Pacing Dynamics
      Lucier, Brendan; Pattathil, Sarath; Slivkins, Alex; Zhang, Mengxiao
    • On Finding Small Hyper-Gradients in Bilevel Optimization: Hardness Results and Improved Analysis
      Chen, Lesi; Xu, Jing; Zhang, Jingzhao
    • A Theory of Interpretable Approximations
      Bressan, Marco; Cesa-Bianchi, Nicolò; Esposito, Emmanuel; Mansour, Yishay; Moran, Shay; Thiessen, Maximilian
    • Information-theoretic generalization bounds for learning from quantum data
      Caro, Matthias C; Gur, Tom; Rouzé, Cambyse; Stilck França, Daniel; Subramanian, Sathyawageeswar
    • On the sample complexity of parameter estimation in logistic regression with normal design
      Hsu, Daniel J; Mazumdar, Arya
    • Undetectable Watermarks for Language Models
      Christ, Miranda; Gunn, Sam; Zamir, Or
    • Efficient Algorithms for Learning Monophonic Halfspaces in Graphs
      Bressan, Marco; Esposito, Emmanuel; Thiessen, Maximilian
    • A faster and simpler algorithm for learning shallow networks
      Chen, Sitan; Narayanan, Shyam
    • Finite-Sample Analysis of the Temporal Difference Learning
      Samsonov, Sergey; Tiapkin, Daniil; Naumov, Alexey; Moulines, Eric
    • Training Dynamics of Multi-Head Softmax Attention for In-Context Learning: Emergence, Convergence, and Optimality
      Chen, Siyu; Sheen, Heejune; Wang, Tianhao; Yang, Zhuoran
    • Computational-Statistical Gaps for Improper Learning in Sparse Linear Regression
      Buhai, Rares-Darius; Ding, Jingqiu; Tiegel, Stefan
    • Detection of $L_\infty$ Geometry in Random Geometric Graphs: Suboptimality of Triangles and Cluster Expansion
      Bangachev, Kiril; Bresler, Guy
    • Regularization and Optimal Multiclass Learning
      Asilis, Julian; Sharan, Vatsal; Devic, Siddartha; Dughmi, Shaddin; Teng, Shanghua
    • The Real Price of Bandit Information in Multiclass Classification
      Erez, Liad; Mansour, Yishay; Moran, Shay; Koren, Tomer; Cohen, Alon
    • Second Order Methods for Bandit Optimization and Control
      Sun, Y. Jennifer; Suggala, Arun Sai; Netrapalli, Praneeth; Hazan, Elad
    • Fast Two-Time-Scale Stochastic Gradient Method with Applications in Reinforcement Learning
      Zeng, Sihan; Doan, Thinh T
    • Bridging the Gap: Rademacher Complexity in Robust and Standard Generalization
      Xiao, Jiancong; Sun, Ruoyu; Long, Qi; Su, Weijie
    • Convergence of Gradient Descent with Small Initialization for Unregularized Matrix Completion
      Ma, Jianhao; Fattahi, Salar
    • Risk-Sensitive Online Algorithms
      Christianson, Nicolas; Sun, Bo; Low, Steven; Wierman, Adam
    • Algorithms for mean-field variational inference via polyhedral optimization in the Wasserstein space
      Jiang, Yiheng; Chewi, Sinho; Pooladian, Aram-Alexandre
    • Testable Learning of General Halfspaces with Adversarial Label Noise
      Diakonikolas, Ilias; Kane, Daniel M; Liu, Sihan; Zarifis, Nikos
    • Information-Theoretic Thresholds for the Alignments of Partially Correlated Graphs
      Huang, Dong; Song, Xianwen; Yang, Pengkun
    • The complexity of approximate (coarse) correlated equilibrium for incomplete information games
      Peng, Binghui; Rubinstein, Aviad
    • Active Learning with Simple Questions
      Kontonis, Vasilis; Ma, Mingchen; Tzamos, Christos
    • On the Performance of Empirical Risk Minimization with Smoothed Data
      Block, Adam; Rakhlin, Alexander; Shetty, Abhishek
    • Lasso with Latents: Efficient Estimation, Covariate Rescaling, and Computational-Statistical Gaps
      Kelner, Jonathan; Koehler, Frederic; Meka, Raghu; Rohatgi, Dhruv
    • A non-backtracking method for long matrix and tensor completion
      Stephan, Ludovic; Zhu, Yizhe
    • Choosing the p in Lp loss: adaptive rates for symmetric mean estimation
      Kao, Yu-Chun; Xu, Min; Zhang, Cun-Hui
    • Superconstant Inapproximability of Decision Tree Learning
      Koch, Caleb; Strassle, Carmen; Tan, Li-Yang
    • Scale-free Adversarial Reinforcement Learning
      Chen, Mingyu; Zhang, Xuezhou
    • Faster Spectral Density Estimation and Sparsification in the Nuclear Norm
      Jin, Yujia; Karmarkar, Ishani; Musco, Christopher; Sidford, Aaron; Singh, Apoorv Vikram
    • Improved Hardness Results for Learning Intersections of Halfspaces
      Tiegel, Stefan
    • On Computationally Efficient Multi-Class Calibration
      Gopalan, Parikshit; Hu, Lunjia; Rothblum, Guy N
    • Black-Box k-to-1-PCA Reductions: Theory and Applications
      Jambulapati, Arun; Kumar, Syamantak; Li, Jerry; Pandey, Shourya; Pensia, Ankit; Tian, Kevin
    • Optimal score estimation via empirical Bayes smoothing
      Wibisono, Andre; Wu, Yihong; Yang, Kaylee Yingxi
    • Sample-Optimal Locally Private Hypothesis Selection and the Provable Benefits of Interactivity
      Pour, Alireza; Ashtiani, Hassan; Asoodeh, Shahab
    • On Convex Optimization with Semi-Sensitive Features
      Ghazi, Badih; Kamath, Pritish; Kumar, Ravi; Manurangsi, Pasin; Meka, Raghu; Zhang, Chiyuan
    • Gap-Free Clustering: Sensitivity and Robustness of SDP
      Zurek, Matthew; Chen, Yudong
    • Statistical Query Lower Bounds for Learning Truncated Gaussians
      Diakonikolas, Ilias; Kane, Daniel M; Pittas, Thanasis; Zarifis, Nikos
    • Universal Rates for Real-Valued Regression: Separations between Cut-Off and Absolute Loss
      Attias, Idan; Hanneke, Steve; Kalavasis, Alkis; Karbasi, Amin; Velegkas, Grigoris
    • Linear Bellman Completeness Suffices for Efficient Online Reinforcement Learning with Few Actions
      Golowich, Noah; Moitra, Ankur
    • Is Efficient PAC Learning Possible with an Oracle That Responds “Yes” or “No”?
      Daskalakis, Constantinos; Golowich, Noah
    • Low-degree phase transitions for detecting a planted clique in sublinear time
      Verchand, Kabir A; Wein, Alexander S; Mardia, Jay
    • Closing the Computational-Query Depth Gap in Parallel Stochastic Convex Optimization
      Jambulapati, Arun; Sidford, Aaron; Tian, Kevin
    • Dual VC Dimension Obstructs Sample Compression by Embeddings
      Chase, Zachary; Chornomaz, Bogdan; Hanneke, Steve; Moran, Shay; Yehudayoff, Amir
    • A Non-Adaptive Algorithm for the Quantitative Group Testing Problem
      Soleymani, Mahdi; Javidi, Tara
    • Safe Linear Bandits over Unknown Polytopes
      Gangrade, Aditya; Chen, Tianrui; Saligrama, Venkatesh
    • List Sample Compression and Uniform Convergence
      Waknine, Tom; Moran, Shay; Hanneke, Steve
    • Dimension-free Structured Covariance Estimation
      Puchkin, Nikita; Rakhuba, Maxim
    • Fundamental limits of Non-Linear Low-Rank Matrix Estimation
      Mergny, Pierre; Ko, Justin P; Krzakala, Florent; Zdeborova, Lenka
    • Universal Lower Bounds and Optimal Rates: Achieving Minimax Clustering Error in Sub-Exponential Mixture Models
      Dreveton, Maximilien; Gözeten, Alperen; Grossglauser, Matthias; Thiran, Patrick
    • Linear bandits with polylogarithmic minimax regret
      Lumbreras Zarapico, Josep; Tomamichel, Marco
    • Contraction of Markovian Operators in Orlicz Spaces and Error Bounds for Markov Chain Monte Carlo
      Esposito, Amedeo Roberto; Mondelli, Marco
    • $(\epsilon, u)$-Adaptive Regret Minimization in Heavy-Tailed Bandits
      Genalti, Gianmarco; Marsigli, Lupo; Gatti, Nicola; Metelli, Alberto Maria
    • Harmonics of Learning: Universal Fourier Features Emerge in Invariant Networks
      Marchetti, Giovanni Luca; Hillar, Christopher; Kragic, Danica; Sanborn, Sophia
    • Fit Like You Sample: Sample-Efficient Generalized Score Matching from Fast Mixing Diffusions
      Qin, Yilong; Risteski, Andrej
    • New Lower Bounds for Testing Monotonicity and Log Concavity of Distributions
      Cheng, Yuqian; Kane, Daniel M; Zheng, Zhicheng
    • Multiple-output composite quantile regression through an optimal transport lens
      Yang, Xuzhi; Wang, Tengyao
    • On sampling diluted Spin-Glasses using Glauber Dynamics
      Efthymiou, Charilaos; Zampetakis, Kostas
    • The Sample Complexity of Simple Binary Hypothesis Testing
      Pensia, Ankit; Jog, Varun; Loh, Po-Ling
    • Counting Stars is Constant-Degree Optimal For Detecting Any Planted Subgraph
      Yu, Xifan; Zadik, Ilias; Zhang, Peiyuan
    • Large Stepsize Gradient Descent for Logistic Loss: Non-Monotonicity of the Loss Improves Optimization Efficiency
      Wu, Jingfeng; Bartlett, Peter; Telgarsky, Matus; Yu, Bin
    • Reconstructing the Geometry of Random Geometric Graphs
      Huang, Han; Mossel, Elchanan; Jiradilok, Pakawut
    • Errors are Robustly Tamed in Cumulative Knowledge Processes
      Brandenberger, Anna; Marcussen, Cassandra; Mossel, Elchanan; Sudan, Madhu
    • Better-than-KL PAC-Bayes Bounds
      Kuzborskij, Ilja; Jun, Kwang-Sung; Wu, Yulian; Jang, Kyoungseok; Orabona, Francesco
    • Lower Bounds for Differential Privacy Under Continual Observation and Online Threshold Queries
      Cohen, Edith; Lyu, Xin; Nelson, Jelani; Sarlos, Tamas; Stemmer, Uri
    • Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials
      Diakonikolas, Ilias; Kane, Daniel M
    • Learnability Gaps of Strategic Classification
      Cohen, Lee; Mansour, Yishay; Moran, Shay; Shao, Han
    • Agnostic Active Learning of Single Index Models with Linear Sample Complexity
      Gajjar, Aarshvi; Xu, Xingyu; Hegde, Chinmay; Musco, Christopher; Tai, Wai Ming; Li, Yi
    • Metalearning with Very Few Samples Per Task
      Aliakbarpour, Maryam; Bairaktari, Konstantina; Brown, Gavin; Smith, Adam; Srebro, Nathan; Ullman, Jonathan
    • Near-Optimal Learning and Planning in Separated Latent MDPs
      Chen, Fan; Daskalakis, Constantinos; Golowich, Noah; Rakhlin, Alexander
    • Projection by Convolution: Optimal Sample Complexity for RL in Continuous-Space MDPs
      Maran, Davide; Metelli, Alberto Maria; Papini, Matteo; Restelli, Marcello
    • Testable Learning with Distribution Shift
      Klivans, Adam; Stavropoulos, Konstantinos; Vasilyan, Arsen
    • Optimistic Information Directed Sampling
      Neu, Gergely; Papini, Matteo; Schwartz, Ludovic
    • Online Newton Method for Bandit Convex Optimisation
      Fokkema, Hidde; van der Hoeven, Dirk; Lattimore, Tor; Mayo, Jack J.
    • Two fundamental limits for uncertainty quantification in predictive inference
      Areces, Felipe P; Cheng, Chen; Duchi, John; Kuditipudi, Rohith
    • Testably Learning Intersections of Halfspaces with Distribution Shift: Improved Algorithms and SQ Lower Bounds
      Klivans, Adam; Stavropoulos, Konstantinos; Vasilyan, Arsen
    • Spatial properties of Bayesian unsupervised trees
      Liu, Linxi; Ma, Li
    • Offline Reinforcement Learning: Role of State Aggregation and Trajectory Data
      Jia, Zeyu; Rakhlin, Alexander; Sekhari, Ayush; Wei, Chen-Yu
    • The Statistical Query Complexity of Gaussian Single-Index Models
      Damian, Alexandru; Pillaud-Vivien, Loucas; Lee, Jason; Bruna, Joan
    • Learning sum of ridge functions: Efficient gradient-based training and computational hardness
      Oko, Kazusato; Song, Yujin; Suzuki, Taiji; Wu, Denny
    • The SMART Approach to Instance-Optimal Online Learning
      Banerjee, Siddhartha; Bhatt, Alankrita; Yu, Christina Lee
    • Smaller Confidence Intervals From IPW Estimators via Data-Dependent Coarsening
      Kalavasis, Alkis; Mehrotra, Anay; Zampetakis, Emmanouil
    • The Predicted-Updates Dynamic Model: Offline, Incremental, and Decremental to Fully Dynamic Transformations
      Liu, Quanquan C.; Srinivas, Vaidehi
    • Adversarially-Robust Inference on Trees via Belief Propagation
      Li, Anqi; Hopkins, Samuel
    • Adversarial Online Learning with Temporal Feedback Graphs
      Gatmiry, Khashayar; Schneider, Jon
    • Top-$K$ ranking with a monotone adversary
      Yang, Yuepeng; Chen, Antares; Orecchia, Lorenzo; Ma, Cong
    • Prediction from compression for models with infinite memory, with applications to hidden Markov and renewal processes
      Han, Yanjun; Jiang, Tianze; Wu, Yihong
    • Learning Neural Networks with Sparse Activations
      Awasthi, Pranjal; Dikkala, Nishanth; Kamath, Pritish; Meka, Raghu
    • Depth Separation in Norm-Bounded Infinite-Width Neural Networks
      Parkinson, Suzanna J; Ongie, Greg; Willett, Rebecca; Shamir, Ohad; Srebro, Nathan
    • Gaussian Cooling and Dikin Walks: The Interior-Point Method for Logconcave Sampling
      Kook, Yunbum; Vempala, Santosh
    • On the Computability of Robust PAC Learning
      Gourdeau, Pascale; Lechner, Tosca; Urner, Ruth
    • Stochastic Constrained Contextual Bandits via Lyapunov Optimization Based Estimation to Decision Framework
      Guo, Hengquan; Liu, Xin
    • Adaptive Learning Rate for Follow-the-Regularized-Leader: Competitive Ratio Analysis and Best-of-Both-Worlds
      Ito, Shinji; Tsuchiya, Taira; Honda, Junya
    • Insufficient Statistics Perturbation: Stable Estimators for Private Least Squares
      Brown, Gavin R; Hayase, Jonathan; Hopkins, Samuel; Kong, Weihao; Liu, Xiyang; Oh, Sewoong; Perdomo, Juan C; Smith, Adam
    • Beyond Catoni: Sharper Rates for Heavy-Tailed and Robust Mean Estimation
      Gupta, Shivam; Hopkins, Samuel; Price, Eric
    • Nonlinear spiked covariance matrices and signal propagation in deep neural networks
      Wang, Zhichao; Wu, Denny; Fan, Zhou
    • The Limits and Potentials of Local SGD for Distributed Heterogeneous Learning with Intermittent Communication
      Patel, Kumar Kshitij; Glasgow, Margalit R; Zindari, Ali; Wang, Lingxiao; Stich, Sebastian U; Cheng, Ziheng; Joshi, Nirmit; Srebro, Nathan
    • Identification of Mixtures of Discrete Product Distributions in Near-Optimal Sample and Time Complexity
      Gordon, Spencer; Jahn, Erik L; Mazaheri, Bijan H; Rabani, Yuval; Schulman, Leonard J
    • Online Stackelberg Optimization via Nonlinear Control
      Brown, William; Papadimitriou, Christos; Roughgarden, Tim
    • An information-theoretic lower bound in time-uniform estimation
      Haque, Saminul; Duchi, John
    • On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective
      Dmitriev, Daniil; Szabó, Kristóf; Sanyal, Amartya
    • Smoothed Analysis for Learning Concepts with Low Intrinsic Dimension
      Chandrasekaran, Gautam; Klivans, Adam; Kontonis, Vasilis; Meka, Raghu; Stavropoulos, Konstantinos
    • Inherent limitations of dimensions for characterizing learnability of distribution classes
      Lechner, Tosca; Ben-David, Shai
    • Finding Super-spreaders in Network Cascades
      Mossel, Elchanan; Sridhar, Anirudh
    • Convergence of Kinetic Langevin MCMC on Lie groups
      Kong, Lingkai; Tao, Molei
    • Sampling Polytopes with Riemannian HMC: Faster Mixing via the Lewis Weights Barrier
      Gatmiry, Khashayar; Vempala, Santosh S; Kelner, Jonathan
    • Universally Instance-Optimal Mechanisms for Private Statistical Estimation
      Asi, Hilal; Haque, Saminul; Duchi, John; Li, Zewei; Ruan, Feng
    • Principal eigenstate classical shadows
      Grier, Daniel; Schaeffer, Luke; Pashayan, Hakop
    • Limits of Approximating the Median Treatment Effect
      Bhandari, Siddharth; Addanki, Raghavendra
    • Robust Distribution Learning with Local and Global Adversarial Corruptions
      Nietert, Sloan; Goldfeld, Ziv; Shafiee, Soroosh
    • Fast, blind, and accurate: Tuning-free sparse regression with global linear convergence
      Mayrink Verdun, Claudio; Melnyk, Oleh; Krahmer, Felix; Jung, Peter
    • Elementary Observations About the Dimensions of Disagreement: The Star Number and Eluder Dimension
      Hanneke, Steve
    • Thresholds for Reconstruction of Random Hypergraphs From Graph Projections
      Bresler, Guy; Guo, Chenghao; Polyanskiy, Yury
    • The Best Arm Evades: Near-optimal Multi-pass Streaming Lower Bounds for Pure Exploration in Multi-armed Bandits
      Wang, Chen; Assadi, Sepehr