PI Lecture with Alan Akbik: Few-Shot Learning for NLP with TARS (Task-Specific Representations)
In order to train good models for Natural Language Processing (NLP) tasks such as text classification and sequence labeling, we typically require very large amounts of labeled training data. However, such data is often unavailable and very expensive to produce. With Few-Shot Learning, we instead research methods that allow us to train NLP models with only a handful of training examples – or none at all. This is inspired by the human ability to learn new tasks and new concepts (often) from only a few examples. In this talk, I present TARS, a novel approach for Few-Shot (and Zero-Shot) Learning in text classification. The main idea is to learn a generic model that can produce arbitrary task-specific representations of semantics. I present how TARS is able to (1) learn new tasks with few training examples, (2) learn new tasks with no training examples at all, and (3) operate in a “Continual Learning” setting in which it sequentially learns more and more tasks – without forgetting the old tasks. Time permitting, I’ll discuss further research ideas as well.
TARS is fully integrated into the Flair framework (https://github.com/flairNLP/flair) for use and reproduction by the research community.
***Want to attend this lecture? Subscribe to our mailing list here or by sending an empty email to firstname.lastname@example.org
The Zoom Link will be sent the day before the lecture. (Contact email@example.com for specific questions)