PLDI 2024
Mon 24 - Fri 28 June 2024 Copenhagen, Denmark

Deep neural networks (DNNs) are becoming increasingly important components of software, and are considered the state-of-the-art solution for a number of problems, such as image recognition. However, DNNs are far from infallible, and incorrect behavior of DNNs can have disastrous real-world consequences. In this tutorial, we discuss recent advances in the problem of provable repair of DNNs. Given a trained DNN and a repair specification, provable repair modifies the parameters of the DNN to guarantee that the repaired DNN satisfies the given specification while still ensuring high accuracy. The tutorial will describe algorithms for provable repair that support different DNN architectures as well as various types of repair specifications (pointwise, V-polytope, and H-polytope). The tutorial will demonstrate the utility of provable repair using examples from a variety of application domains, including image recognition, natural language processing, and autonomous drone controllers. Attendees will get hands-on experience with provable-repair tools built using PyTorch, a popular machine-learning library.
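To make the problem setup concrete, the following is a minimal sketch of a pointwise repair specification in PyTorch: a finite set of inputs, each paired with the label the repaired DNN must produce. The names here (PointwiseSpec, satisfies, the toy model) are illustrative assumptions and do not correspond to the API of any particular provable-repair tool covered in the tutorial.

```python
# Sketch of a pointwise repair specification for a PyTorch classifier.
# Hypothetical names; not the API of any specific provable-repair tool.
import torch
import torch.nn as nn

class PointwiseSpec:
    """A finite set of inputs, each with the label the DNN must output."""
    def __init__(self, points, labels):
        self.points = points    # tensor of shape (k, d): the k repair inputs
        self.labels = labels    # tensor of shape (k,): required output labels

    def satisfies(self, model):
        """True iff the model classifies every specified point as required."""
        with torch.no_grad():
            preds = model(self.points).argmax(dim=1)
        return bool((preds == self.labels).all())

# A toy network standing in for the trained DNN to be repaired.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))

# Two inputs the original model handles incorrectly, with their correct labels.
spec = PointwiseSpec(points=torch.randn(2, 4), labels=torch.tensor([1, 2]))

# A provable-repair algorithm modifies the model's parameters so that this
# check is guaranteed to hold, while preserving accuracy on other inputs.
print("spec satisfied:", spec.satisfies(model))
```

V-polytope and H-polytope specifications generalize this idea from finitely many points to entire regions of the input space (given by vertices or by linear constraints, respectively), over which the repaired DNN must satisfy the required output property.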

Organizer: Aditya Thakur, https://thakur.cs.ucdavis.edu/

Brief Bio: Aditya Thakur is an Associate Professor in the Department of Computer Science and the Vice Chair of the Graduate Group in Computer Science at the University of California, Davis. He received his Ph.D. from the University of Wisconsin–Madison, and has held positions at Google, Microsoft Research, and the University of California, Berkeley. His research interests include programming languages, machine learning, formal methods, and software engineering. He is the recipient of an NSF CAREER Award (2021), a DOE Early Career Award (2021), Facebook Probability and Programming Research Awards (2019 and 2020), and a Facebook Testing and Verification Research Award (2018).