Thursday Morning Talk: Leon Sixt (Biorobotics Lab, FU Berlin): Opportunities and Challenges in Interpretable ML
Abstract: Deep neural networks underlie many state-of-the-art solutions to hard problems in computer vision, natural language processing, and playing Go. Yet their power comes at a price. Deep networks transform inputs gradually into outputs, using many parameters and intermediate activations. Understanding what a network has learned, and how inputs are mapped to outputs, is inherently difficult. In my talk I will focus on attribution methods: algorithms that explain which input variables were relevant for the network’s decision. I will present some of my recent work in this field, first showing how attribution methods may fail and then presenting a new method based on information bottlenecks. In the remainder of my talk I will discuss the challenges we face in interpretable ML and how it may provide the opportunity to gain insight into the datasets themselves.
Want to attend this lecture? Subscribe to our mailing list here or by sending an empty email to email@example.com
The Zoom link will be sent the day before the lecture. (Contact firstname.lastname@example.org with specific questions.)