AI Horizons


[Online] The Promise and Pitfalls of Public Pre-training in Differentially Private Machine Learning

Distinguished Speaker Seminar

Speaker: Dr Gautam Kamath, University of Waterloo

Date: Oct 4, 2024
Time: 9:00AM – 10:30AM SGT

Please register for the talk here.

Abstract:

Modern machine learning systems are trained on massive amounts of data. Without special care, machine learning models are prone to regurgitating or otherwise revealing information about individual data points. I will discuss differential privacy, a rigorous notion of data privacy, and how it can be used to provably protect against such inadvertent data disclosures by machine learning models. I will focus primarily on recent approaches that employ public data for pre-training the model, granting significant improvements in the utility of privately fine-tuned models. I will also discuss pitfalls of these methods and potential paths forward for the field.
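
For context on the mechanics behind such guarantees, the sketch below illustrates the core step of DP-SGD (per-example gradient clipping followed by Gaussian noise), the standard mechanism by which a publicly pre-trained model is privately fine-tuned. This is a minimal sketch on a toy linear model, not code from the talk; all names and hyperparameters are hypothetical.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0):
    """One DP-SGD step for a toy linear model on a batch (X, y)."""
    grads = []
    for xi, yi in zip(X, y):                     # per-example gradients
        g = 2.0 * (xi @ w - yi) * xi             # gradient of squared loss
        norm = np.linalg.norm(g)
        grads.append(g / max(1.0, norm / clip))  # clip each L2 norm to <= clip
    g_sum = np.sum(grads, axis=0)
    noise = np.random.normal(0.0, noise_mult * clip, size=w.shape)
    return w - lr * (g_sum + noise) / len(X)     # noisy average gradient step

# Toy usage: weights w (a stand-in for a publicly pre-trained initialization)
# are fine-tuned privately on a small synthetic dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=32)
w = np.zeros(5)
for _ in range(200):
    w = dp_sgd_step(w, X, y)
print("fine-tuned weights:", np.round(w, 2))
```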

Biography:

Gautam Kamath is an Assistant Professor at the David R. Cheriton School of Computer Science at the University of Waterloo, and a Canada CIFAR AI Chair and Faculty Member at the Vector Institute. He has a B.S. in Computer Science and Electrical and Computer Engineering from Cornell University, and an M.S. and Ph.D. in Computer Science from the Massachusetts Institute of Technology. He is interested in reliable and trustworthy statistics and machine learning, including considerations such as data privacy and robustness. He was a Microsoft Research Fellow as part of the Simons-Berkeley Research Fellowship Program at the Simons Institute for the Theory of Computing. He serves as an Editor-in-Chief of Transactions on Machine Learning Research, and is the program committee co-chair of the 36th International Conference on Algorithmic Learning Theory (ALT 2025). He is the recipient of several awards, including the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, a best paper award at the Forty-first International Conference on Machine Learning (ICML 2024), and the Faculty of Math Golden Jubilee Research Excellence Award.

[Online] Transfer Learning for Bayesian Optimization on Homogeneous and Heterogeneous Search Spaces

Speaker: Dr Zi Wang, Google DeepMind 

Date: Sep 6, 2024
Time: 9:00AM – 10:30AM SGT

Please register for the talk here.

Abstract:

Bayesian Optimization (BO) is a powerful approach for optimizing expensive black-box functions. However, its success depends heavily on expert-specified Gaussian Process (GP) priors, which can be difficult to construct in practice. This talk explores the use of pre-training to specify these functional priors in two settings: 1) homogeneous, where the “training” functions share the same domain as the “test” function (the black-box function to be optimized), and 2) heterogeneous, where the domains of the training and test functions can differ. In the homogeneous setting, we introduce HyperBO, which pre-trains a GP and uses it to optimize the test function. This approach yields bounded posterior predictions and near-zero regrets without requiring knowledge of the ground-truth GP prior. For the heterogeneous case, we introduce a method for model pre-training on heterogeneous domains (MPHD). MPHD enjoys the benefits of HyperBO and further enhances it with domain adaptability. Using a neural net pre-trained on existing domains, MPHD generates customized hierarchical GP specifications based on domain descriptions. We discuss the theoretical and practical advantages of both methods and demonstrate their superior performance on a wide range of real-world tuning tasks.
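
As a rough illustration of the homogeneous setting, the sketch below fits a GP length-scale by maximizing the summed marginal likelihood over several training functions on a shared domain, then uses the fitted prior for one expected-improvement query. It is a simplified stand-in for the idea behind HyperBO, not the actual method; the kernel, data, and optimizer choices are all hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def rbf(X1, X2, ls):
    """RBF kernel with unit signal variance and length-scale ls."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def neg_mll(log_ls, datasets, noise=1e-3):
    """Negative GP marginal log-likelihood summed over training functions."""
    ls, total = np.exp(log_ls[0]), 0.0
    for X, y in datasets:
        K = rbf(X, X, ls) + noise * np.eye(len(X))
        L = np.linalg.cholesky(K)
        a = np.linalg.solve(L.T, np.linalg.solve(L, y))
        total += 0.5 * y @ a + np.log(np.diag(L)).sum()
    return total

# "Pre-train" the prior's length-scale on a few related functions.
rng = np.random.default_rng(1)
datasets = []
for _ in range(3):
    X = rng.uniform(0, 1, (20, 1))
    datasets.append((X, np.sin(6 * X[:, 0]) + 0.05 * rng.normal(size=20)))
ls = np.exp(minimize(neg_mll, x0=[0.0], args=(datasets,)).x[0])

# One expected-improvement query on the test function using the fitted prior.
X, y = datasets[0]                          # stand-in for test-function data
Xs = np.linspace(0, 1, 200)[:, None]
K = rbf(X, X, ls) + 1e-3 * np.eye(len(X))
Ks = rbf(Xs, X, ls)
mu = Ks @ np.linalg.solve(K, y)
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
s = np.sqrt(np.maximum(var, 1e-12))
z = (mu - y.max()) / s
ei = (mu - y.max()) * norm.cdf(z) + s * norm.pdf(z)  # EI for maximization
print("next query point:", Xs[np.argmax(ei)])
```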

Biography:

Zi Wang is a senior research scientist at Google DeepMind. She works on intelligent decision making under uncertainty, with a focus on understanding and improving the alignment between human judgements and Bayesian beliefs in AI systems. Zi obtained her Ph.D. in computer science from MIT in 2020 and was a lecturer at Harvard University in 2022.

AI Horizons: Reliable Data Use – Synthesis, Retrieval, and Interaction

Speaker: Dr Pang Wei Koh, University of Washington

Date: Aug 16, 2024
Time: 10:00AM – 11:30AM SGT
Venue: COM1 Seminar Room 1 #02-06

Abstract:

How can we better use our data to build more reliable and responsible models? Dr Koh will first discuss when it might be useful to train on synthetic image data derived, in turn, from a generative model trained on the available real data. Next, he will describe how scaling up the datastore for retrieval-based language models can significantly improve performance, indicating that the amount of data used at inference time—and not just at training time—should be considered as a new dimension of scaling language models. Finally, he will discuss how the static nature of most of our training data leads to language model failures in interactive settings, and present an adaptive conformal inference method to accurately quantify uncertainty in such dynamic environments.
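
As a pointer to the last topic, the sketch below implements the basic adaptive conformal inference update, which steers the miscoverage level online as errors occur and is the style of method the abstract refers to for dynamic environments. The score stream and parameters here are toy assumptions, not the speaker's setup.

```python
import numpy as np

def adaptive_conformal(scores, alpha=0.1, gamma=0.05):
    """Run the adaptive conformal update online; return empirical coverage."""
    alpha_t, history, covered = alpha, [], []
    for s_t in scores:
        if history:                               # quantile at the adapted level
            q = np.quantile(history, min(max(1.0 - alpha_t, 0.0), 1.0))
        else:
            q = np.inf                            # no calibration data yet
        err = float(s_t > q)                      # 1 if outcome fell outside set
        alpha_t += gamma * (alpha - err)          # steer toward target miscoverage
        history.append(s_t)
        covered.append(1.0 - err)
    return float(np.mean(covered))

# Toy nonconformity-score stream with a distribution shift halfway through.
rng = np.random.default_rng(2)
stream = np.abs(np.concatenate([rng.normal(0, 1, 500),
                                rng.normal(2, 1, 500)]))
print(f"empirical coverage: {adaptive_conformal(stream):.3f}")  # targets 0.90
```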

Biography:

Pang Wei Koh is an assistant professor in the Allen School of Computer Science and Engineering at the University of Washington, a visiting research scientist at AI2, and a Singapore AI Visiting Professor. His research interests are in the theory and practice of building reliable machine learning systems. His research has been published in Nature and Cell, featured in media outlets such as The New York Times and The Washington Post, and recognized by the MIT Technology Review Innovators Under 35 Asia Pacific award and best paper awards at ICML and KDD. He received his PhD and BS in Computer Science from Stanford University. Prior to his PhD, he was the 3rd employee and Director of Partnerships at Coursera.

Towards Robotic Caregiving – Building Robots That Work Alongside Human Stakeholders

Speaker: Prof. Tapomayukh Bhattacharjee, Cornell University (Dept. of Computer Science)

Date: Jul 5, 2024
Time: 2:00PM – 3:30PM SGT
Venue: Meeting Room 20, COM3 #02-59 

Abstract:

How do we build robots that can assist people with mobility limitations in activities of daily living? To successfully perform these activities, a robot needs to be able to physically interact with humans and objects in unstructured human environments. In this talk, I will cover various projects in my lab that showcase fundamental advances in the field of physical robotic caregiving involving complex and uncertain physical human-robot interaction. Specifically, I will show how we can build caregiving robots that perform activities of daily living such as feeding, meal preparation, and bed-bathing, using our newly developed caregiving simulation tools and algorithms that leverage multimodal perception and user feedback, and how we have deployed these systems in the real world with real users.

Biography:

Tapomayukh “Tapo” Bhattacharjee is an Assistant Professor in the Department of Computer Science at Cornell University, where he directs the EmPRISE Lab (https://emprise.cs.cornell.edu/). He completed his Ph.D. in Robotics at the Georgia Institute of Technology and was an NIH Ruth L. Kirschstein NRSA postdoctoral research associate in Computer Science & Engineering at the University of Washington. He wants to enable robots to assist people with mobility limitations in activities of daily living. His work spans the fields of human-robot interaction, haptic perception, and robot manipulation, and focuses on the fundamental research question of how to leverage robot-world physical interactions in unstructured human environments to perform relevant activities of daily living. He is the recipient of the TRI Young Faculty Researcher Award ('24) and the NSF CAREER Award ('23). His work has won the Best Demo Award at HRI'24, the Best RoboCup Paper Award at IROS'22, the Best Technical Advances Paper Award at HRI'19, and the Best Demonstration Award at NeurIPS'18, and was a Best Paper Award finalist at HRI'24 and a Best Paper Award and Best Student Paper Award finalist at IROS'22. His work has also been featured in many media outlets, including the BBC, Reuters, The New York Times, IEEE Spectrum, and GeekWire, and his robot-assisted feeding work was selected as one of the best interactive designs of 2019 by Fast Company.

Distributed Language Models – Isolating Data Risks

Distinguished Speaker Seminar

Speaker: Dr Sewon Min, UC Berkeley (Dept. of Electrical Engineering and Computer Sciences)

Date: Jul 3, 2024
Time: 2:00PM – 3:30PM SGT
Venue: Meeting Room 20, COM3 #02-59 

A recording of the talk is available here.

Abstract:

The impressive capabilities of large language models (LMs) are derived from their training data, which is often sourced from the web. However, these datasets also present issues concerning copyright and the lack of consent from data owners. Understanding the degree to which LMs depend on this data, and developing models that adhere to these restrictions, remain significant challenges. In this talk, I will first introduce the Open License Corpus (OLC), a new corpus comprising 228 billion tokens of public domain and permissively licensed text. Despite its vast size, models trained solely on the OLC experience performance degradation due to limited domain coverage. We then propose SILO, a new language model that combines a parametric LM with a modifiable nonparametric datastore. This datastore, containing copyrighted books and news, is accessed only during inference. This methodology allows for the use of copyrighted data without directly training on it, enables sentence-level attribution of model outputs to data sources, and provides an opt-out mechanism for data owners. Finally, I will outline our roadmap towards “distributed models”, a set of model components that use data with varying levels of restrictions—such as public domain, copyrighted, or unreleasable data—in different capacities, such as for training, as a datastore, or in other ways.
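
To make the parametric-plus-nonparametric design concrete, the sketch below follows the kNN-LM-style interpolation that this family of models builds on: next-token probabilities from the parametric LM are mixed with a distribution over tokens retrieved from a datastore at inference time. The embeddings, vocabulary, and mixing weight here are toy stand-ins for illustration, not the SILO implementation.

```python
import numpy as np

def knn_lm_probs(query, p_lm, keys, values, vocab_size, k=4, lam=0.3):
    """Interpolate parametric LM probs with a kNN distribution over the store."""
    dists = np.linalg.norm(keys - query, axis=1)       # search the datastore
    nn = np.argsort(dists)[:k]                         # k nearest contexts
    w = np.exp(-dists[nn]); w /= w.sum()               # softmax over -distance
    p_knn = np.zeros(vocab_size)
    for idx, weight in zip(nn, w):
        p_knn[values[idx]] += weight                   # mass on retrieved tokens
    # Opting out a data owner amounts to deleting their (key, value) entries.
    return lam * p_knn + (1 - lam) * p_lm

# Toy datastore of (context embedding, next-token id) pairs.
rng = np.random.default_rng(3)
V, D, N = 10, 8, 100
keys = rng.normal(size=(N, D))
values = rng.integers(0, V, size=N)
p_lm = np.full(V, 1.0 / V)                             # uniform parametric LM
p = knn_lm_probs(rng.normal(size=D), p_lm, keys, values, V)
assert np.isclose(p.sum(), 1.0)
print("interpolated next-token distribution:", np.round(p, 3))
```

Because the retrieved indices are known at inference time, this design also makes attribution straightforward: each unit of probability mass added by the datastore can be traced back to a specific stored entry.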

Biography:

Sewon Min is an incoming assistant professor at UC Berkeley EECS. She recently received her Ph.D. in Computer Science & Engineering from the University of Washington. Her research focuses on language models (LMs): studying the science of LMs, and designing new model classes and learning methods that make LMs more performant and flexible. She also studies LMs in information-seeking, legal, and privacy contexts. She won a paper award at ACL 2023, received a J.P. Morgan Fellowship, and was named an EECS rising star in 2022. Previously, she was a part-time visiting researcher at Meta AI, interned at Google and Salesforce, and earned her B.S. in Computer Science & Engineering from Seoul National University.