How can we better use our data to build more reliable and responsible models? Dr Koh will first discuss when it can be useful to train on synthetic image data generated by a model that was itself trained on the available real data. Next, he will describe how scaling up the datastore of retrieval-based language models can significantly improve performance, indicating that the amount of data used at inference time, and not just at training time, should be considered a new dimension along which to scale language models. Finally, he will discuss how the static nature of most training data leads language models to fail in interactive settings, and present an adaptive conformal inference method for accurately quantifying uncertainty in such dynamic environments.
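As background on the last point: adaptive conformal inference methods maintain an online miscoverage level that is nudged up or down depending on whether each new observation fell inside the current prediction set, so that long-run coverage tracks the target even as the data distribution shifts. The sketch below illustrates the generic online update from this line of work; it is not the specific method of the talk, and the function name, the stand-in nonconformity scores, and the step size `gamma` are illustrative assumptions.

```python
import numpy as np

def adaptive_conformal_thresholds(scores, alpha=0.1, gamma=0.01):
    """Minimal sketch of an adaptive conformal inference loop.

    `scores` are nonconformity scores observed one at a time. At each
    step we set a threshold from the empirical quantile of past scores
    at the current level alpha_t, then update alpha_t based on whether
    the new point was covered (threshold exceeded = miss).
    """
    alpha_t = alpha
    history, thresholds, covered = [], [], []
    for s in scores:
        if history:
            # Quantile of past scores at level 1 - alpha_t (clamped to [0, 1])
            q = np.quantile(history, min(max(1 - alpha_t, 0.0), 1.0))
        else:
            q = np.inf  # no data yet: cover trivially
        thresholds.append(q)
        err = float(s > q)  # 1 if the new observation was not covered
        covered.append(1.0 - err)
        # Key update: shrink alpha_t after a miss (widening future sets),
        # grow it after coverage, so average coverage tracks 1 - alpha
        alpha_t = alpha_t + gamma * (alpha - err)
        history.append(s)
    return np.array(thresholds), np.array(covered)

# Illustrative usage on synthetic scores
rng = np.random.default_rng(0)
scores = np.abs(rng.normal(size=1000))
_, covered = adaptive_conformal_thresholds(scores, alpha=0.1)
print(f"empirical coverage: {covered.mean():.3f}")  # should hover near 0.90
```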
Biography:
Pang Wei Koh is an assistant professor in the Allen School of Computer Science and Engineering at the University of Washington, a visiting research scientist at AI2, and a Singapore AI Visiting Professor. His research interests are in the theory and practice of building reliable machine learning systems. His research has been published in Nature and Cell, featured in media outlets such as The New York Times and The Washington Post, and recognized by the MIT Technology Review Innovators Under 35 Asia Pacific award and best paper awards at ICML and KDD. He received his PhD and BS in Computer Science from Stanford University. Prior to his PhD, he was the third employee and Director of Partnerships at Coursera.