Between gestures and robots: Jonas Frenkel on nonverbal cues and social intelligence

What makes a student lean forward in interest, or tune out completely? Cognitive scientist Jonas Frenkel believes the answer lies not only in words but also in the choreography of gestures, gaze, and posture. At Science of Intelligence (SCIoI), he studies these invisible cues, using computational tools to find out how they shape social learning and what they could reveal about the principles of human intelligence.

By framing these subtle exchanges within a larger effort to understand social intelligence, SCIoI’s researchers connect the classroom to a broader question: how do agents, whether human or artificial, share knowledge in ways that go beyond words? The same signals that guide a child through a puzzle or a student through a lesson also point to the deeper principles of how intelligence itself is transferred, accumulated, and scaled in social contexts.

In research terms, these gestures, gazes, and postures are forms of nonverbal communication, the focus of Jonas’ work. “Nonverbal communication is everywhere in teaching,” he says. “A glance, a nod, even how close a teacher stands, these things shape how students feel and how much they learn. Yet they’re so automatic we barely notice them, until they’re missing.”

Spotting the signals behind social learning

Humans, much like other animals, don’t have to discover everything from scratch. Much of our intelligence comes from social learning. We watch, imitate, and interpret others’ behaviors, and nonverbal communication plays a central role in this process. In his project, Jonas investigates how algorithms can help us better understand the nonverbal signals that drive social learning. With his colleagues, he analyzes hours of classroom footage, breaking down interactions into fragments of gestures, gazes, and expressions, and trains computer vision models to spot them.

“In psychology, we usually code behavior manually, often in 20-minute stretches, and it’s incredibly time-consuming,” he explains. “An algorithm, on the other hand, has to learn frame by frame what matters. Humans make these judgments instantly. Machines don’t.”
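To make the frame-by-frame idea concrete, here is a rough sketch of what such an analysis can look like, not Jonas’ actual pipeline: it uses the open-source MediaPipe pose estimator to track a teacher’s wrist across video frames and flags fast hand movement as candidate “expressive gesture” moments. The file name and the movement threshold are placeholders.

```python
# Illustrative sketch only: track a wrist keypoint across video frames and flag
# fast hand movement as a candidate "expressive gesture". Not the actual SCIoI pipeline.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
GESTURE_THRESHOLD = 0.02          # normalized movement per frame (placeholder value)

cap = cv2.VideoCapture("classroom_clip.mp4")   # placeholder file name
gesture_frames = []

with mp_pose.Pose(static_image_mode=False) as pose:
    prev, frame_idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        # MediaPipe expects RGB images; OpenCV reads frames as BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks is None:
            continue
        wrist = results.pose_landmarks.landmark[mp_pose.PoseLandmark.RIGHT_WRIST]
        if prev is not None:
            movement = abs(wrist.x - prev.x) + abs(wrist.y - prev.y)
            if movement > GESTURE_THRESHOLD:
                gesture_frames.append(frame_idx)
        prev = wrist

cap.release()
print(f"{len(gesture_frames)} frames flagged as possible gesture activity")
```

A human coder makes this judgment at a glance; the algorithm has to reconstruct it from thousands of such per-frame decisions.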

The lab at the University of Potsdam where Jonas works, led by SCIoI principal investigator Rebecca Lazarides, uses methods such as eye tracking and electrodermal activity (EDA) to measure how students respond to different teaching styles, for instance when a teacher maintains eye contact instead of avoiding it, uses expressive gestures instead of standing still, or moves closer to the students instead of keeping their distance. But the point isn’t simply to build a machine that “sees” like a human. Rather, it’s a way of testing theories: Are our assumptions about how people pick up on these subtle signals actually backed up? If the model succeeds, it supports the theory. If it fails, it shows that our understanding is incomplete. “In this way, the model becomes our scientific probe, helping us refine our theories and, ultimately, understand the principles of social intelligence itself,” says Jonas.
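The “scientific probe” logic can be illustrated with a small, purely hypothetical comparison: if the model’s predictions agree with a human coder’s annotations well above chance, the assumptions built into the model gain support; if agreement stays near chance, something in the theory, or in how it was operationalized, is missing. The labels below are invented for illustration.

```python
# Hypothetical comparison of human-coded behavior and model predictions (invented labels).
from sklearn.metrics import cohen_kappa_score, classification_report

# 1 = "expressive gesture present" in this segment, 0 = "no gesture".
human_codes       = [0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0]
model_predictions = [0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1]

# Cohen's kappa corrects raw agreement for chance; values near 0 mean the model
# (and the theory behind it) is not capturing what the human coder sees.
print("kappa:", round(cohen_kappa_score(human_codes, model_predictions), 2))
print(classification_report(human_codes, model_predictions,
                            target_names=["no gesture", "gesture"]))
```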

Why nonverbal matters

Psychologists have long debated how much of communication is nonverbal. Jonas is cautious about putting it into percentages: “70 percent, 80 percent, those numbers don’t tell us much.” But he does emphasize that the effect of nonverbal communication is profound. “Strip away gesture, tone, and presence, and communication becomes less effective.”

“Think back to online schooling during the pandemic,” he says. “Many nonverbal signals fell away as lessons were held through video calls. That didn’t stop communication of course, but it led to more misunderstandings. It showed us how much learning can depend on a teacher’s enthusiastic gestures, eye contact, and even the smallest shift in body language.”

And while some evidence suggests teachers should minimize gestures to reduce distraction, other studies find that movement energizes learners. Jonas suspects both are true, and that only fine-grained analysis will reveal how the two effects balance.

From classrooms to robots

Studying classroom videos can reveal patterns, but it has limits: no two teachers act the same way twice, and no two student groups are ever identical. To push further, Jonas turns to robots, not as stand-ins for teachers, but as scientific instruments. “With a robot, I can keep everything constant and vary just one signal, like eye contact, gesture, or distance, and see how learners respond,” he explains. “That level of consistency is impossible with humans.”

For his experiments, Jonas works with Pepper, the humanoid robot often used in educational studies. By embedding algorithms trained on classroom data, Jonas can program Pepper to adjust its behavior, for example by gesturing more or less, or by maintaining or avoiding eye contact, and then measure the effects on student engagement. In this way, the robot provides a controlled setting in which to examine whether theories about nonverbal communication truly hold.
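As a rough illustration of what “varying just one signal” might look like in code, here is a minimal sketch assuming Pepper’s standard NAOqi Python SDK: face tracking stands in for eye contact, and animated-speech annotations switch gestures on or off while the spoken script stays identical. The IP address, lesson text, and animation choice are placeholders, not details from Jonas’ experiments.

```python
# Minimal sketch of a one-signal manipulation on Pepper (NAOqi Python SDK assumed).
# The IP address, lesson text, and animation name are placeholders.
from naoqi import ALProxy

PEPPER_IP, PORT = "192.168.1.10", 9559

def run_lesson(eye_contact=True, expressive_gestures=True):
    speech  = ALProxy("ALAnimatedSpeech", PEPPER_IP, PORT)
    tracker = ALProxy("ALTracker", PEPPER_IP, PORT)

    # Signal 1: "eye contact" on or off, implemented as face tracking.
    if eye_contact:
        tracker.registerTarget("Face", 0.15)   # expected face width in meters
        tracker.track("Face")

    # Signal 2: identical spoken script, with or without accompanying gestures.
    script = "Today we will look at how plants turn light into energy."
    if expressive_gestures:
        speech.say("^start(animations/Stand/Gestures/Explain_1) " + script)
    else:
        speech.say("^mode(disabled) " + script)   # suppress body movements while speaking

    tracker.stopTracker()

# Example condition: eye contact, but no expressive gestures.
run_lesson(eye_contact=True, expressive_gestures=False)
```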

In practice, however, Pepper doesn’t always behave like the tidy scientific instrument it is meant to be. “One day Pepper turned toward a window and seemed fascinated by its own reflection, completely ignoring the students,” Jonas recalls. “Another time it stopped mid-lesson to announce, very politely: ‘I’m sorry, my motors are overheating.’ And then it shut down.”

He laughs. “When a laptop crashes, it’s annoying. But when Pepper looks at you, smiles, and then suddenly refuses to cooperate, it feels almost intentional. You can’t help but feel it’s being stubborn. That reaction shows how we project intention onto machines. Even the failures tell us something about the social nature of intelligence.”

Research, risk, and responsibility

At the same time, algorithms that can read engagement or emotion come with their share of challenges. If applied without care, they might drift into areas like monitoring or unwanted evaluation of people’s behavior. Jonas is well aware of these concerns, and they shape how he works in his studies.

“At SCIoI, our mission is not to produce quick commercial tools, but rather to ask what intelligence actually is, in humans, in animals, and in machines,” he explains. “That means reflecting on consequences from the very beginning. We discuss projects across psychology, computer science, philosophy, sociology, and robotics, and we deliberately avoid directions that could turn engagement detection into surveillance tools.”

Reflection, however, does not stay within the lab. A central part of Jonas’ work is to bring these questions into the public sphere, where awareness itself can act as a safeguard. He participates in forums such as the Berlin University Alliance’s Open Space on AI and Ethics, where citizens and developers learn about fairness and responsibility in AI, or Berlin Science Week’s panel “Artificial Intelligence: Examples of AI gone wrong and Ethical Questions.” He also joined the SCIoI Fair for a talk on social intelligence and the application of robots and regularly engages with visiting groups at SCIoI — from teachers and students to international guests from science and industry. These encounters, often more conversational than technical, allow him to surface the ethical dilemmas of his field in front of those most likely to use or be affected by the technologies later on.

In his view, communication is a responsibility: a way of ensuring that the research community, potential developers, and the wider public share an understanding of both the possibilities and the pitfalls. Creating that awareness early, he argues, is one of the most effective ways to guide new technologies toward safe and socially meaningful applications.

Principles of social intelligence

These different strands of research, from coding gestures in classrooms to experimenting with robots and engaging in public debate, all point back to one idea: intelligence is deeply social. It is not something isolated in an individual mind but something that takes shape between people, through signals and responses. By studying how we read gestures and gazes, testing those insights with machines, and discussing their consequences openly, Jonas aims to uncover the principles that make social learning possible in the first place.

“In the end, intelligence is not about abstract problem-solving alone,” he says. “An important part of it is finding your way in the social world: reading signals, building trust and understanding others. If we can understand how that works, even in part, we can come closer to the principles of social intelligence itself, and we can shape technologies that reflect these principles responsibly.”

