BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//scienceofintelligence.de - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.scienceofintelligence.de
X-WR-CALDESC:Events for scienceofintelligence.de
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T030000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20241031T100000
DTEND;TZID=Europe/Berlin:20241031T110000
DTSTAMP:20260408T063222Z
CREATED:20241002T101830Z
LAST-MODIFIED:20250603T124704Z
UID:22386-1730368800-1730372400@www.scienceofintelligence.de
SUMMARY:POSTPONED: Anita Keshmirian (Forward College\, Berlin)\, “Many Minds\, Diverging Morals: Human Groups vs. AI in Moral Decision-Making”
DESCRIPTION:Abstract \n“Moral judgments are inherently social\, shaped by interactions with others in everyday life. Despite this\, psychological research has rarely examined the impact of social interactions on these judgments. In our study\, we explored the role of group dynamics in moral decision-making by having small groups (4-5 participants) evaluate moral dilemmas first individually\, then collectively\, and finally individually a second time. Participants judged real-life and sacrificial moral dilemmas involving actions or inactions violating moral principles to benefit the greater good. Experiment 1 found that collective judgments were more utilitarian than individual judgments\, supporting the hypothesis that group deliberation temporarily reduces the emotional burden of violating moral norms. Experiment 2 measured participants’ state anxiety and moral judgments before\, during\, and after online interactions. Results again showed that collectives were more utilitarian\, reducing state anxiety during and after social interaction\, suggesting that stress reduction may explain the shift toward utilitarianism in group settings. \nWe replicated this experiment using multi-agent large language models (LLMs) to test how artificial agents make moral decisions. Preliminary findings revealed that\, unlike humans\, groups of LLM agents were less utilitarian than individual agents. Analysis of the agents’ interactions showed a consistent pattern of virtue-signaling\, with LLMs emphasizing deontological reasoning (focusing on moral rules) rather than utilitarian principles. This divergence from human behavior suggests that collective reasoning in AI systems is shaped by different dynamics\, likely due to how LLMs are trained to prioritize socially accepted norms. These results highlight important differences in moral decision-making between human and artificial intelligence\, offering new insights into the development of AI systems that more closely mirror human ethical reasoning\, particularly in complex\, real-world collective decision-making scenarios.” \nImage credit: ©SCIoI/ generated with DALL-E
URL:https://www.scienceofintelligence.de/event/anita-keshmirian-forward-college-berlin-many-minds-diverging-morals-human-groups-vs-ai-in-moral-decision-making/
LOCATION:Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/webp:https://www.scienceofintelligence.de/wp-content/uploads/2024/10/TMT_Image_creativity_artificial2-e1727864193669.webp
END:VEVENT
END:VCALENDAR