Thursday Morning Talk with Yuejiang Liu (EPFL), “Learning Beyond the IID Setting with Robust and Adaptive Representations”
Machine learning models have achieved stunning successes in the IID setting. Beyond this setting, however, existing models still suffer from two grand challenges: brittleness under covariate shift and inefficiency in knowledge transfer. In this talk, I will introduce three approaches to tackling these challenges: self-supervised learning, causal representation learning, and test-time training. More specifically, I will share our recent findings on (i) incorporating prior knowledge of negative examples into representation learning, (ii) promoting causal invariance and structure by making use of data from multiple domains, and (iii) exploiting extra information beyond model parameters for effective test-time adaptation. I will show how these techniques enable deep neural networks to generalize more robustly and adapt more efficiently to new environments in motion and vision contexts. Finally, I will discuss the implications of these results for the design, training, and deployment of deep models for domain generalization and adaptation. Comments and feedback are more than welcome.
Yuejiang Liu is a PhD student at EPFL, advised by Alexandre Alahi. His research interests center around representation learning and its applications to autonomous agents. He is particularly excited about unsupervised learning for robust generalization and efficient adaptation.