- Time: Tue, Jul 1 11:30–12:30
- Title: Optimization in Machine Learning: From Convexity to Non-Convexity
- Abstract: Optimization algorithms, such as gradient descent and its stochastic variants, are fundamental tools in modern machine learning. Over the past fifteen years, the research landscape has evolved significantly: the early emphasis on convex optimization with strong quantitative theoretical guarantees (particularly for linear models) has gradually shifted toward the challenges of non-convex optimization, which underpins more complex models like neural networks and often lacks such guarantees. In this talk, I will survey key theoretical insights and empirical findings from both domains, highlighting the role of convexity, whether explicit or implicit, in shaping our current understanding of optimization for machine learning. I will also discuss emerging directions for future research.
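The contrast the abstract draws between gradient descent and its stochastic variants can be sketched in a few lines on a convex least-squares objective. This is a minimal illustrative sketch, not material from the talk: the data, step sizes, and function names below are all assumptions chosen for the example.

```python
# Sketch: full-batch gradient descent vs. one-sample SGD on the convex
# objective f(w) = (1/2n) * sum_i (w*x_i - y_i)^2. Illustrative only.
import random

def make_data(n=200, seed=0):
    rng = random.Random(seed)
    true_w = 3.0
    X = [rng.uniform(-1, 1) for _ in range(n)]
    y = [true_w * x + rng.gauss(0, 0.1) for x in X]
    return X, y, true_w

def full_gradient(w, X, y):
    # Exact gradient of f at w, averaged over all n samples.
    n = len(X)
    return sum((w * x - yi) * x for x, yi in zip(X, y)) / n

def gradient_descent(X, y, lr=0.5, steps=200):
    w = 0.0
    for _ in range(steps):
        w -= lr * full_gradient(w, X, y)
    return w

def sgd(X, y, lr=0.5, steps=2000, seed=1):
    # Stochastic variant: one random sample per step, decaying step size.
    rng = random.Random(seed)
    w = 0.0
    for t in range(steps):
        i = rng.randrange(len(X))
        g = (w * X[i] - y[i]) * X[i]  # unbiased estimate of the gradient
        w -= (lr / (1 + 0.01 * t)) * g
    return w

X, y, true_w = make_data()
print(gradient_descent(X, y))  # both should land near true_w = 3.0
print(sgd(X, y))
```

On this convex problem both methods converge to (a neighborhood of) the global minimizer; the point of the talk, as the abstract notes, is that such quantitative guarantees largely disappear in the non-convex setting of neural networks.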
- Bio: Francis Bach is a researcher at Inria, where he has led, since 2011, the machine learning team that is part of the Computer Science department at Ecole Normale Supérieure. He graduated from Ecole Polytechnique in 1997 and completed his Ph.D. in Computer Science at U.C. Berkeley in 2005, working with Professor Michael Jordan. He spent two years in the Mathematical Morphology group at Ecole des Mines de Paris, then joined the computer vision project-team at Inria/Ecole Normale Supérieure from 2007 to 2010.
Francis Bach is primarily interested in machine learning, and especially in sparse methods, kernel-based learning, neural networks, and large-scale optimization. He published the book "Learning Theory from First Principles" through MIT Press in 2024.
He obtained a Starting Grant (2009) and a Consolidator Grant (2016) from the European Research Council, and received the Inria young researcher prize in 2012, the ICML test-of-time award in 2014 and 2019, the NeurIPS test-of-time award in 2021, the Lagrange prize in continuous optimization in 2018, and the Jean-Jacques Moreau prize in 2019. He was elected to the French Academy of Sciences in 2020. He was program co-chair of the International Conference on Machine Learning (ICML) in 2015, general chair in 2018, and president of its board from 2021 to 2023; he was co-editor-in-chief of the Journal of Machine Learning Research from 2018 to 2023.
- Time: Thu, Jul 3 11:30–12:30
- Title: Actually, data is a rival good
- Abstract: There is a tendency in many fields, including computer science, economics, and industry, to model data as a non-rival good, meaning that one entity using a particular piece of data doesn't impinge on its use by others. Food is a classic rival good (if I eat the apple, you cannot); digital music is a classic non-rival good (my listening to the song has no effect on your listening experience). Data might, at first blush, seem more like digital music than like an apple. In this talk, I will give arguments from three fields (economics, privacy, and statistics) for why modeling data as non-rival is problematic, and will argue that we need a new paradigm. One of the implications is a need for new infrastructure (both technical and legal) for handling data.
- Bio: Katrina Ligett is a professor in the School of Computer Science and Engineering at Hebrew University, where she is also the academic director of the interdisciplinary Federmann Center for the Study of Rationality, an affiliated faculty member and former head of the program on the Interfaces of Technology, Society, and Networks (formerly known as Internet & Society), and an affiliate of the Federmann Cyber Security Research Center. Before joining Hebrew University, she was a faculty member in computer science and economics at Caltech. Katrina's primary research interests are in data privacy, algorithmic fairness, machine learning theory, and algorithmic game theory. She received her PhD in Computer Science from Carnegie Mellon University in 2009 and did her postdoc at Cornell University. She is a recipient of an ERC Consolidator grant, the NSF CAREER award, and a Microsoft Faculty Fellowship. Katrina was a co-chair of the 2021 International Conference on Algorithmic Learning Theory (ALT), the chair of the 2021 Symposium on Foundations of Responsible Computing (FORC), and the general chair of the 2025 ACM Symposium on Computer Science and Law.
- Time: Fri, Jul 4 11:30–12:30
- Title: Mathematical and sociological questions in deep learning and large language models
- Abstract: This talk will cover open problems in deep learning and large language models. These problems are both mathematical (e.g., the analysis of first-order methods and characterizing the power of chain-of-thought methods) and sociological (e.g., career dilemmas facing junior researchers, difficulties in teaching, and difficulties in finding tractable research paths).
- Bio: Matus Telgarsky is an assistant professor at the Courant Institute of Mathematical Sciences, NYU, specializing in deep learning theory. Previously, he was gratefully employed and tolerated at UIUC. Before that, he was fortunate to receive a PhD at UCSD under Sanjoy Dasgupta. Other highlights include: co-founding and co-chairing, in 2017, the Midwest ML Symposium (MMLS) with Po-Ling Loh; receiving a 2018 NSF CAREER award; and organizing two Simons Institute programs, one on deep learning theory (summer 2019) and one on generalization (fall 2024).