Keynote Speakers
David Blei
Title: Scaling and Generalizing Approximate Bayesian Inference
Abstract:
A core problem in statistics and machine learning is to approximate difficult-to-compute probability distributions. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation about a conditional distribution. In this talk I review and discuss innovations in variational inference (VI), a method that approximates probability distributions through optimization. VI has been used in myriad applications in machine learning and Bayesian statistics. It tends to be faster than more traditional methods, such as Markov chain Monte Carlo sampling.
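As background (a standard formulation of VI, not specific to this talk): given observations x, latent variables z, and a variational family q(z; ν), VI maximizes the evidence lower bound (ELBO),

\mathrm{ELBO}(\nu) = \mathbb{E}_{q(z;\nu)}\!\left[\log p(x, z) - \log q(z;\nu)\right],

which is equivalent to minimizing \mathrm{KL}\big(q(z;\nu)\,\|\,p(z \mid x)\big), since \log p(x) = \mathrm{ELBO}(\nu) + \mathrm{KL}\big(q(z;\nu)\,\|\,p(z \mid x)\big).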
After quickly reviewing the basics, I will discuss some recent research on VI. I first describe stochastic variational inference, an approximate inference algorithm for handling massive data sets, and demonstrate its application to probabilistic topic models of millions of articles. Then I discuss black box variational inference, a generic algorithm for approximating the posterior. Black box inference applies easily to many models and requires only minimal mathematical work to implement. I will demonstrate black box inference on deep exponential families, a method for Bayesian deep learning, and describe how it enables powerful tools for probabilistic programming.
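A sketch of the idea behind black box variational inference (the standard score-function estimator; the symbols ν and S are notation introduced here): the ELBO gradient can be written as an expectation under q and estimated with Monte Carlo samples, so only log-density evaluations of the model are needed:

\nabla_\nu \mathrm{ELBO}(\nu) = \mathbb{E}_{q(z;\nu)}\!\left[\nabla_\nu \log q(z;\nu)\,\big(\log p(x, z) - \log q(z;\nu)\big)\right] \approx \frac{1}{S}\sum_{s=1}^{S} \nabla_\nu \log q(z_s;\nu)\,\big(\log p(x, z_s) - \log q(z_s;\nu)\big), \quad z_s \sim q(z;\nu).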
Bio: David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. He studies probabilistic machine learning, including its theory, algorithms, and applications. David has received several awards for his research, including a Sloan Fellowship (2010), an Office of Naval Research Young Investigator Award (2011), the Presidential Early Career Award for Scientists and Engineers (2011), a Blavatnik Faculty Award (2013), the ACM-Infosys Foundation Award (2013), a Guggenheim Fellowship (2017), and a Simons Investigator Award (2019). He is the co-editor-in-chief of the Journal of Machine Learning Research and a fellow of the ACM and the IMS.
Salil Vadhan
Title: The Theory and Practice of Differential Privacy
Abstract:
Since it was introduced in 2006 by Dwork, McSherry, Nissim, and Smith, differential privacy has become accepted as a gold standard for ensuring that individual-level information is not leaked through statistical analyses or machine learning on sensitive datasets. It has proved to be extremely rich for theoretical investigation, with deep connections to many other topics in theoretical computer science and mathematics, and has also made the transition to practice, with large-scale deployments by the US Census Bureau and technology companies like Google, Apple, and Microsoft.
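For reference, the standard definition from that 2006 work: a randomized algorithm M is ε-differentially private if, for every pair of datasets x and x' differing in one individual's record and every set S of outputs,

\Pr[M(x) \in S] \le e^{\varepsilon} \cdot \Pr[M(x') \in S],

with the common relaxation, (ε, δ)-differential privacy, adding an additive slack δ on the right-hand side.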
In this talk, I will survey some of the recent theoretical advances and challenges in differential privacy, highlighting connections to learning theory. I will also discuss efforts toward wider practical adoption, such as OpenDP, a new community effort to build a suite of trusted, open-source tools for deploying differential privacy.
Bio: Salil Vadhan is the Vicky Joseph Professor of Computer Science and Applied Mathematics at the Harvard John A. Paulson School of Engineering & Applied Sciences, and Lead PI on the Harvard Privacy Tools Project. Vadhan's research in theoretical computer science spans computational complexity, cryptography, and data privacy. He is a Simons Investigator, a Harvard College Professor, and an ACM Fellow, and his past honors include a Gödel Prize and a Guggenheim Fellowship.
Rebecca Willett
Title: Learning to Solve Inverse Problems in Imaging
Abstract: Many challenging image processing tasks can be described by an ill-posed linear inverse problem: deblurring, deconvolution, inpainting, compressed sensing, and super-resolution all lie in this framework. Traditional inverse problem solvers minimize a cost function consisting of a data-fit term, which measures how well an image matches the observations, and a regularizer, which reflects prior knowledge and promotes images with desirable properties like smoothness. Recent advances in machine learning and image processing have illustrated that it is often possible to learn a regularizer from training data that can outperform more traditional regularizers. However, some popular approaches are highly suboptimal in terms of sample complexity, which can be seen from the perspective of conditional density estimation. I will describe an end-to-end, data-driven method of solving inverse problems inspired by the Neumann series, called a Neumann network. The Neumann network architecture outperforms traditional inverse problem solution methods, model-free deep learning approaches, and state-of-the-art unrolled iterative methods on standard datasets. Finally, when the images belong to a union of subspaces and under appropriate assumptions on the forward model, we prove there exists a Neumann network configuration that well approximates the optimal oracle estimator for the inverse problem, and we demonstrate empirically that the trained Neumann network has the form predicted by theory. This is joint work with Davis Gilton and Greg Ongie.
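For context, the classical identity behind the architecture's name (a sketch of standard background; the exact learned parameterization is in the paper): for a forward operator X and a step size η with \|I - \eta X^{\top}X\| < 1,

(X^{\top}X)^{-1}X^{\top}y = \sum_{j=0}^{\infty} (I - \eta X^{\top}X)^{j}\, \eta X^{\top}y,

and a Neumann network truncates this series to finitely many terms while incorporating a learned regularization component into each term.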
Bio: Rebecca Willett is a Professor of Statistics and Computer Science at the University of Chicago. Her research is focused on machine learning, signal processing, and large-scale data science. She completed her PhD in Electrical and Computer Engineering at Rice University in 2005 and was an Assistant Professor and then a tenured Associate Professor of Electrical and Computer Engineering at Duke University from 2005 to 2013. She was an Associate Professor of Electrical and Computer Engineering, Harvey D. Spangler Faculty Scholar, and Fellow of the Wisconsin Institutes for Discovery at the University of Wisconsin-Madison from 2013 to 2018. Willett received the National Science Foundation CAREER Award in 2007, was a member of the DARPA Computer Science Study Group, and received an Air Force Office of Scientific Research Young Investigator Program award in 2010.