BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//scienceofintelligence.de - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:scienceofintelligence.de
X-ORIGINAL-URL:https://www.scienceofintelligence.de
X-WR-CALDESC:Events for scienceofintelligence.de
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20210328T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20211031T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T030000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20220210T100000
DTEND;TZID=Europe/Berlin:20220210T110000
DTSTAMP:20260410T060728Z
CREATED:20220131T105742Z
LAST-MODIFIED:20250604T092633Z
UID:11655-1644487200-1644490800@www.scienceofintelligence.de
SUMMARY:Mathilde Caron\, “Self-Supervised Learning: How To Learn From Images Without Human Annotations”
DESCRIPTION:Abstract:\nSelf-supervised learning (SSL) consists in training neural network systems without using any human annotations. Typically\, neural networks require large amounts of annotated data\, which have limited their applications in fields where accessing these annotations is expensive or difficult. Moreover\, manual annotations are biased towards a specific task and towards the annotator’s own biases\, which can result in noisy and unreliable signals. Training systems without annotations could lead to better\, more generic and robust representations. In this talk\, I will present different contributions to the fast-growing field of SSL conducted during my PhD. I will finish by discussing open questions and challenges for the future of SSL. \n  \nThe Zoom Link will be sent the day before the lecture.
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-with-mathilde-caron/
LOCATION:On Zoom
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/01/carol.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20220217T100000
DTEND;TZID=Europe/Berlin:20220217T110000
DTSTAMP:20260410T060728Z
CREATED:20220117T152745Z
LAST-MODIFIED:20240813T100442Z
UID:11627-1645092000-1645095600@www.scienceofintelligence.de
SUMMARY:Yuejiang Liu (EPFL University)\, "Learning Beyond the IID Setting with Robust and Adaptive Representations"
DESCRIPTION:Abstract\nMachine learning models have achieved stunning successes in the IID setting. Yet\, beyond this setting\, existing models still suffer from two grand challenges: brittleness under covariate shift and inefficiency in knowledge transfer. In this talk\, I will introduce three approaches to tackling these challenges\, namely self-supervised learning\, causal representation learning\, and test-time training. More specifically\, I will share our recent findings on (i) incorporating prior knowledge of negative examples into representation learning\, (ii) promoting causal invariance and structure by making use of data from multiple domains\, and (iii) exploiting extra information besides model parameters for effective test-time adaptation. I will show how these techniques enable deep neural networks to generalize more robustly and adapt more efficiently to new environments in motion and vision contexts. Finally\, I will discuss the implications of these results for the design\, training\, and deployment of deep models for domain generalization and adaptation. Comments and feedback are more than welcome.\n\nPaper Links\nSocial NCE: Contrastive Learning of Socially-Aware Motion Representations\, ICCV’21\nTTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?\, NeurIPS’21\nCollaborative Sampling in Generative Adversarial Networks\, AAAI’20\nTowards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective\, Preprint’21 (under review)\n\nBio\nYuejiang Liu is a PhD student at EPFL\, advised by Alexandre Alahi. His research interests center around representation learning and its applications to autonomous agents. He is particularly excited about unsupervised learning for robust generalization and efficient adaptation.\n\nThe Zoom link will be sent the day before the lecture.
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-with-yuejiang-liu-epfl-university-learning-beyond-the-iid-setting-with-robust-and-adaptive-representations/
LOCATION:On Zoom
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/01/photo_yuejiang-e1642433180790.jpg
END:VEVENT
END:VCALENDAR