Bridging the gap between human and synthetic face processing
Research Unit 1, SCIoI Project 08
Face perception and categorization are fundamental to social interactions. In humans, input from facial features is integrated with top-down influences from other cognitive domains, such as expectations, memories, and contextual knowledge. For instance, whether a face is perceived as depicting an angry expression may depend on prior knowledge about the context (Aviezer et al., 2007) or the person (Abdel Rahman, 2011; Suess, Rabovsky, & Abdel Rahman, 2014). Furthermore, humans have a strong tendency to infer traits such as trustworthiness directly from faces.
In contrast to human perception, automatic face-processing systems are typically based purely on bottom-up information, without considering factors such as prior knowledge. Even in modern deep learning approaches, where system performance depends on massive amounts of training data, the combination of visual input with given knowledge is rarely considered. This principal difference from human face perception limits mutual understanding and successful interaction between artificial agents and humans.
The aim of the project is therefore to bridge the gap between human and synthetic face processing by integrating top-down components typical of human perception into synthetic systems. This will be done by linking empirical observations with computational modelling and state-of-the-art image analysis methods. An intermediate result of these investigations may be an improved understanding of the meaning and impact of prior knowledge, relative to visual feature representations, in the process of interpreting a facial expression. In the long term, insights on human-like face perception may be integrated into humanoid robots to support social perception and face-to-face communication.
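To make the idea of combining bottom-up visual evidence with top-down prior knowledge concrete, the following is a minimal sketch of one possible formulation: bottom-up classifier scores over expression classes are reweighted by a contextual prior in a Bayesian fashion. The function names, class labels, and the specific prior values are illustrative assumptions, not the project's actual model.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of scores."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def interpret_expression(feature_logits, context_prior):
    """Combine bottom-up evidence with a top-down prior (illustrative).

    feature_logits: raw scores per expression class, as a purely
                    bottom-up classifier might produce them.
    context_prior:  prior probabilities over the same classes, derived
                    from contextual or person knowledge (hypothetical).
    Returns a normalized posterior over the expression classes.
    """
    likelihood = softmax(np.asarray(feature_logits, dtype=float))
    posterior = likelihood * np.asarray(context_prior, dtype=float)
    return posterior / posterior.sum()

# Ambiguous bottom-up evidence between the first two classes
# (e.g. "angry" vs. "disgusted") ...
logits = [2.0, 1.9, 0.1]   # hypothetical classes: angry, disgusted, neutral
# ... can be disambiguated by prior knowledge about context or person.
prior = [0.1, 0.7, 0.2]
posterior = interpret_expression(logits, prior)
```

In this toy setting, the nearly tied bottom-up scores are resolved by the prior, mirroring the empirical finding that contextual knowledge can shift how an ambiguous expression is perceived.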