Friendly or hostile? What our brains see in robot faces


How do we decide whether to trust a robot? What makes us see intention in a mechanical face? Researchers from Science of Intelligence have found that our impressions of social robots are shaped not only by their actual appearance but also by what we believe about their behavior. Their study “Neural dynamics of mental state attribution to social robot faces”, published in Social Cognitive and Affective Neuroscience, shows that when people are told a robot behaves helpfully or harmfully, they quickly begin attributing mental states and emotional expressions, even if the robot’s face remains unchanged.

Two experiments: Judging the same face differently

Researchers Martin Maier, Alexander Leonhardt, Florian Blume, Pia Bideau, Olaf Hellwich, and Rasha Abdel Rahman from SCIoI conducted two complementary experiments. In the first, an online study with 60 participants (30 German and 30 English speakers), robot faces were shown alongside short audio stories that described their actions as either positive, neutral, or negative. Afterwards, participants rated the robot’s facial expression and trustworthiness. The same neutral robot face was perceived as friendlier or more hostile depending on what the participant had previously learned about its behavior.
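To make that within-participant contrast concrete, here is a minimal sketch of how such rating data could be analyzed, using simulated values; the rating scale, the condition means, and the choice of a paired t-test are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_participants = 60  # as in the online study

# Hypothetical trustworthiness ratings (1-7 scale) of the SAME neutral
# robot face, after hearing a positive vs. a negative behavioral story.
# The means and spread here are invented for illustration only.
ratings_positive = np.clip(rng.normal(5.0, 1.0, n_participants), 1, 7)
ratings_negative = np.clip(rng.normal(3.2, 1.0, n_participants), 1, 7)

# Within-participant comparison: did the story shift ratings of the face?
t, p = stats.ttest_rel(ratings_positive, ratings_negative)
print(f"mean positive = {ratings_positive.mean():.2f}, "
      f"mean negative = {ratings_negative.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.4f}")
```

The key point of the design is that the face itself never changes: only the participant's prior knowledge differs between conditions.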

In the second experiment, EEG recordings were used to track how these impressions were reflected in the brain. After a learning phase in which participants memorized the same behavioral stories, they viewed the robot faces again while their brain activity was recorded. This method allowed the researchers to observe the timing and stages of mental state attribution at the neural level.
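For readers curious about the method, the following sketch shows how stimulus-locked EEG epochs are typically cut from a continuous recording and averaged into event-related potentials, here using the open-source MNE-Python library on simulated data; the channel count, sampling rate, and event codes are placeholders rather than the study's actual recording parameters.

```python
import numpy as np
import mne

sfreq = 500  # sampling rate in Hz (placeholder)
info = mne.create_info([f"EEG{i:02d}" for i in range(8)], sfreq, ch_types="eeg")

# Simulated continuous EEG: 8 channels, 60 seconds of noise (in volts).
rng = np.random.default_rng(0)
raw = mne.io.RawArray(rng.normal(scale=1e-6, size=(8, 60 * sfreq)), info)

# Stimulus onsets every 2 s, each tagged with the story condition the
# participant had learned for that robot face (codes are placeholders).
onsets = (np.arange(2, 58, 2) * sfreq).astype(int)
codes = rng.integers(1, 4, size=onsets.size)  # 1=positive, 2=neutral, 3=negative
events = np.column_stack([onsets, np.zeros_like(onsets), codes])

# Cut epochs around each face onset and baseline-correct them.
epochs = mne.Epochs(raw, events,
                    event_id={"positive": 1, "neutral": 2, "negative": 3},
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

# Averaging per condition yields the event-related potentials (ERPs)
# whose components are then compared across conditions.
evoked_negative = epochs["negative"].average()
print(evoked_negative)
```

Averaging many such epochs per condition is what makes millisecond-scale components like the N170 visible above the background noise.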

From perception to evaluation: brain dynamics of mind attribution

The EEG data revealed a shift in neural activity triggered by the affective information. Specifically, participants showed stronger responses in the N170 component, a marker of early face perception, when robot faces were paired with negative information. Later in the process, the late positive potential (LPP), a component associated with evaluative and emotional processing, also increased. These effects mirror how the human brain processes other people’s faces. However, a key difference emerged: the early posterior negativity (EPN), typically associated with fast emotional responses, was not modulated by the information about the robots. In other words, robots triggered less immediate emotional engagement.
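One common way such component effects are quantified is to average the ERP amplitude within a fixed time window for each condition and compare the means. The sketch below illustrates this on simulated data; the window boundaries are typical ranges from the ERP literature, not necessarily the exact windows used in the study.

```python
import numpy as np

sfreq = 500
times = np.arange(-0.2, 0.8, 1 / sfreq)  # epoch time axis in seconds

# Simulated condition-average ERPs (microvolts) at one electrode site.
rng = np.random.default_rng(1)
erp_negative = rng.normal(size=times.size)
erp_neutral = rng.normal(size=times.size)

# Typical analysis windows (placeholders): N170 ~130-200 ms,
# EPN ~200-300 ms, LPP ~400-800 ms after face onset.
windows = {"N170": (0.130, 0.200), "EPN": (0.200, 0.300), "LPP": (0.400, 0.800)}

for name, (t0, t1) in windows.items():
    mask = (times >= t0) & (times <= t1)
    diff = erp_negative[mask].mean() - erp_neutral[mask].mean()
    print(f"{name}: negative-minus-neutral mean amplitude = {diff:+.2f} µV")
```

In the study's logic, a reliable condition difference in the N170 and LPP windows, but not in the EPN window, is what distinguishes robot faces from human faces.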

“Although the robot faces displayed objectively neutral expressions, after learning affective information participants projected emotional expressions onto them, perceiving them as friendly or hostile depending on their beliefs about the robot’s past behavior,” said Martin Maier, lead author of the study. “EEG recordings revealed that this information influenced both early perceptual and later evaluative stages of brain activity, similar to human face perception, except in fast emotional responses, where robots were processed differently.”

Perception shaped by context and expectation, even in art

This study aligns with prior work by Rasha Abdel Rahman and colleagues showing that even our perception of art can vary depending on what we know about the artist. In both cases, perception is not fixed but constructed in context. If we believe that a person, or a robot, has done something harmful, we “just see it” in their face. If we believe the opposite, the same features can appear benign or even warm.

Such findings illustrate a basic principle of human cognition: expectations and prior knowledge actively shape perception. Whether the subject is a painting or a machine, what we think we know influences what we see.

Why this matters for social robots

As robots become more common in society, taking on roles in caregiving, education, service work, and public spaces, the way people perceive and judge them will become increasingly important. The study suggests that robots do not need to have emotions or minds for us to treat them as if they did. Merely describing their past behavior in moral or emotional terms is enough to shift the way we perceive and evaluate their faces.

This has consequences for design, communication, and ethics. If people instinctively assign intentionality and emotion based on stories or associations, designers and developers must consider not just the appearance of the robot, but also how contextual cues shape human interpretation.

A step toward understanding adaptive representations

The findings also support one of the core principles of intelligence formulated at Science of Intelligence and actively explored in ongoing research: the principle of Adaptive Representations. This principle holds that the ability to adjust internal representations flexibly in response to goals, tasks, and context is a hallmark of intelligent behavior. In this case, participants changed how they perceived the exact same robot face based on prior knowledge, a clear case of representation adapting to memory and expectation.

Whether the agent is human, animal, or artificial, this kind of flexibility reflects intelligent behavior. The study contributes a concrete example of how even nonhuman agents are perceived through these adaptable cognitive frameworks.

Conclusion: robots seen, minds assigned

In summary, humans quickly and automatically attribute mental states to robot faces when told something about the robot’s behavior. These attributions appear in brain activity within a few hundred milliseconds. But unlike with human faces, robot faces do not seem to trigger the same immediate emotional responses. This cognitive-emotional gap could be key to understanding the limits of how “human” we perceive robots to be.

As social robots take on more roles in our lives, understanding how and why people assign them intentions and emotions will be central to building systems we can interact with transparently, fairly, and safely.
