Knowledge-augmented face perception

Principal Investigators:

Olaf Hellwich
Rasha Abdel Rahman

Team members:

Martin Maier (Postdoctoral researcher)
Pia Bideau (Postdoctoral researcher)
Florian Blume (Doctoral researcher)

Bridging the gap between human and synthetic face processing

Research Unit 1, SCIoI Project 08

Face perception and categorization are fundamental to social interactions. In humans, input from facial features is integrated with top-down influences from other cognitive domains, such as expectations, memories, and contextual knowledge. For instance, whether a face is perceived as depicting an angry expression may depend on prior knowledge about the context (Aviezer et al., 2007) or the person (Abdel Rahman, 2011; Suess, Rabovsky, & Abdel Rahman, 2014). Furthermore, humans have a strong tendency to infer traits such as trustworthiness directly from faces.

In contrast to human perception, automatic face-processing systems are typically based purely on bottom-up information, without considering factors such as prior knowledge. Even modern deep learning approaches, whose performance depends on massive amounts of training data, rarely combine the visual input with given knowledge. This fundamental difference from human face perception limits the scope of mutual understanding and successful interaction between artificial agents and humans.

The aim of the project is therefore to bridge the gap between human and synthetic face processing by integrating top-down components typical of human perception into synthetic systems. This will be done by linking empirical observations with computational modelling and state-of-the-art image analysis methods. An intermediate result of these investigations may be an improved understanding of how prior knowledge shapes visual feature representations when an interpretation of a facial expression is computed. In the longer term, such insights into human-like face perception may be integrated into humanoid robots to support social perception and face-to-face communication.
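As a rough illustration of this idea (not the project's actual model), the sketch below combines the output of a purely bottom-up expression classifier with a prior derived from knowledge about the depicted person, in the spirit of the Bayesian brain framework discussed in Maier et al. (2022). All function names, the dummy feature vector, and the prior values are illustrative assumptions.

```python
import numpy as np

EXPRESSIONS = ["neutral", "happy", "angry"]

def bottom_up_likelihood(image_features: np.ndarray) -> np.ndarray:
    """Stand-in for a purely bottom-up classifier (e.g. a CNN softmax output).

    A fixed random linear map is used so the sketch runs without a trained
    model; in practice this would be a deep network applied to the face image.
    """
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(len(EXPRESSIONS), image_features.shape[0]))
    logits = weights @ image_features
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def knowledge_prior(person_is_hostile: bool) -> np.ndarray:
    """Illustrative prior over expressions derived from person knowledge."""
    if person_is_hostile:
        return np.array([0.2, 0.1, 0.7])   # anger is expected
    return np.array([0.4, 0.5, 0.1])       # neutrality or happiness is expected

def posterior(image_features: np.ndarray, person_is_hostile: bool) -> np.ndarray:
    """Combine bottom-up evidence with the top-down prior via Bayes' rule."""
    unnormalised = bottom_up_likelihood(image_features) * knowledge_prior(person_is_hostile)
    return unnormalised / unnormalised.sum()

if __name__ == "__main__":
    features = np.ones(8)  # dummy image features
    for hostile in (False, True):
        print("hostile prior:", hostile,
              dict(zip(EXPRESSIONS, posterior(features, hostile).round(3))))
```

Running the sketch with the same dummy features but different person knowledge yields different posterior interpretations of the expression, mirroring the knowledge-dependent percepts observed in the human studies cited above.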

Related Publications

Maier, M., Blume, F., Bideau, P., Hellwich, O., & Abdel Rahman, R. (2022). Knowledge-augmented face perception: Prospects for the Bayesian brain-framework to align AI and human vision. Consciousness and Cognition, 101, 103301. https://doi.org/10.1016/j.concog.2022.103301
Maier, M., Frömer, R., Rost, J., Sommer, W., & Abdel Rahman, R. (2022). Linguistic and semantic influences on early vision: evidence from object perception and mental imagery. Cognitive Neuroscience of Language Embodiment and Relativity.
Maier, M., Leonhardt, A., & Abdel Rahman, R. (2022). Bad robots? Humans rapidly attribute mental states during the perception of robot faces. KogWis 2022.
Leonhardt, A., Maier, M., & Abdel Rahman, R. (2021). The impact of affective knowledge on the perception and evaluation of robot faces. 5th Virtual Social Interactions (VSI) Conference. https://www.so-bots.com/s/VSI_5_VIRTUAL_2021_UPDATE_29June.pdf
Enge, A., Süß, F., & Abdel Rahman, R. (2023). Instant Effects of Semantic Information on Visual Perception. Journal of Neuroscience.