Modern AI systems such as LLMs are pervasive and helpful, but do they really have the social intelligence to engage seamlessly and safely in interactions with humans? In this talk, Maarten Sap will delve into the limits of LLMs' social intelligence and how we can measure and anticipate their risks. He will introduce Sotopia, a new social simulation environment for evaluating the interaction abilities of LLMs as social AI agents, and show how even today's most powerful models struggle in social interactions because they cannot handle information asymmetry. He will then shift to the new ethical challenges LLMs pose in their interactions with users. Specifically, his work shows that through their language modality and expressions of uncertainty, LLMs tend to convey overconfidence in their answers even when they are incorrect, and users in turn tend to over-rely on those answers. Finally, Maarten Sap will introduce ParticipAI, a new framework for anticipating future AI use cases and dilemmas. Through this framework, his work shows that lay users can help anticipate the benefits and harms of allowing or disallowing an AI use case, paving the way for more democratic approaches to AI design, development, and governance. He will conclude with some thoughts on future directions toward socially aware and ethically informed AI.
This talk will take place as part of SCIoI member Jonas Frenkel's seminar "Artificial Social Intelligence" (ASI). The seminar aims to provide a comprehensive exploration of ASI, which involves the observation, analysis, and synthesis of social phenomena. It integrates synthetic sciences such as machine learning, computer vision, and robotics with cognitive science, psychology, neuroscience, and the humanities to focus on the perception, cognitive components, and behaviors linked to social intelligence.
This talk will take place in person at SCIoI.
Image created with DALL-E by Maria Ott