Making AI understandable, fair, and responsible: SCIoI at Berlin University Alliance’s Open Space #2 on AI & Ethics

Artificial intelligence has slipped quietly into daily life. It sorts our emails, recommends what we watch, plans production lines, and responds to our voice commands. Yet as AI becomes more powerful and more present, the public’s questions grow louder: How do we ensure AI is fair? Who is responsible when it fails? How do we prevent it from reinforcing bias or fueling public mistrust?

These were the central questions at the second edition of BUA Open Space: “AI & Ethics”, held on 25 September 2024 at the Merantix AI Campus. Around 80 participants joined researchers, entrepreneurs, and civil society to debate how we might shape AI responsibly. Representing Science of Intelligence (SCIoI) were two researchers whose work sits precisely at this intersection: Dafna Burema and Jonas Frenkel. Their perspectives highlighted both the scientific and the societal dimensions of intelligent systems.

“AI is a social product, and its failures are not only technical.”

As a sociologist, Dafna Burema studies how and why humans create artificial intelligence, and which values, intended or not, become embedded within it. During the panel, she explained to the audience that algorithmic mistakes rarely come out of nowhere:

“When AI fails, the reasons can be technical, but often they are social.”

If an algorithm wrongly flags individuals based on skin tone or misinterprets behavior, it could be because the system was trained on biased or incomplete data. In other words: human prejudice becomes machine output. A fundamental problem, Dafna emphasized, is a lack of human oversight:

“We need better documentation processes for the data we use, to foster transparency about where the data comes from.”

Dafna also studies how deeply AI is woven into everyday routines, even when we don’t notice it. From navigation apps and streaming recommendations to spam filters, opting out of AI is becoming almost impossible. Generative AI goes one step further: it creates new text, images, music, and video, enabling rapid translation, simplification, and visualization of scientific ideas.

This can make research more accessible, but it raises new questions about manipulation, ownership, and trust.

“Transparency is one of the most important criteria,” Dafna argues. “Many problems arise because people do not know they are interacting with AI or consuming AI-generated content.”

For her, the field of AI ethics is about examining these gray zones, defining guidelines, and asking fundamental questions: Do we want AI? Do we need it? And under what conditions?

“AI can support social learning, but it can also be misused.”

While Dafna examines AI as a social phenomenon, Jonas Frenkel approaches it through the lens of social interaction. At SCIoI, he investigates how nonverbal cues—a glance, a gesture, a shift in posture—shape learning and guide the subtle dynamics of communication between people.
His current research focuses on understanding how these signals work. By analyzing classroom interactions and using controlled experiments with the robot Pepper, Jonas studies how eye contact, movement, and attention influence engagement.

“Nonverbal communication is everywhere in learning,” Jonas explained. “These tiny signals are essential for interaction. They help us understand each other and create the kind of support that makes learning possible.”

But insight into these mechanisms can also be misused. AI systems capable of interpreting such signals could be repurposed to monitor, rate, or influence people without their awareness, raising ethical questions about how “social intelligence” should be implemented in machines.

For Jonas, ethical safety depends on active awareness:

“We need transparency, clear boundaries, and public dialogue,” he emphasized. “It’s not enough that a few experts understand these systems. We need to explain how they work—and where they fail—so society can decide how they should and shouldn’t be used.”

Who is responsible for AI’s decisions? And how much regulation do we need?

Alongside Dafna and Jonas, Laura Möller (K.I.E.Z.) provided insights from the startup ecosystem, where ethical choices can have immediate public impact. Her stance was clear:

“If founders don’t take responsibility, they won’t survive for long.”

The evening quickly turned toward bigger structural questions:
– Should companies be regulated more strictly?
– Does the market self-correct?
– Do we need AI systems to supervise other AI systems?
– And what skills must citizens learn to navigate an AI-driven world?

Moderator Mads Pankow highlighted one statistic that set the tone: nearly two-thirds of people in Germany believe AI will make their lives worse. The panel, however, offered a more nuanced view.

Dafna noted:

“AI is like the internet: there are good and bad sides. You can also use it to create beautiful things.”

Jonas added:

“AI can take over tasks I don’t want to do, like searching for a single error in pages of computer code or summarizing long texts.”

And Laura concluded, half-jokingly:

“By the way, there are now AI robots that can fold laundry.”

Why SCIoI’s perspective matters

The conversation at BUA Open Space underscored what SCIoI stands for: understanding intelligence—biological, artificial, and social—in all its complexity. Dafna and Jonas demonstrated how essential it is to connect cutting-edge research with public, political, and ethical debate.

Making AI understandable is no longer optional, and making it fair and responsible is no longer theoretical. Through contributions like those at the Open Space, SCIoI researchers bring exactly this interdisciplinary perspective into public dialogue, showing that the future of AI is not only a technical project, but a societal one.

