BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//scienceofintelligence.de - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:scienceofintelligence.de
X-ORIGINAL-URL:https://www.scienceofintelligence.de
X-WR-CALDESC:Events for scienceofintelligence.de
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T030000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250116T100000
DTEND;TZID=Europe/Berlin:20250116T110000
DTSTAMP:20260408T014409Z
CREATED:20250106T095531Z
LAST-MODIFIED:20250107T153234Z
UID:22991-1737021600-1737025200@www.scienceofintelligence.de
SUMMARY:Anita Keshmirian (Forward College\, Berlin): "Many Minds\, Diverging Morals: Human Groups vs. AI in Moral Decision-Making"
DESCRIPTION:Moral judgments are inherently social\, shaped by interactions with others in everyday life. Despite this\, psychological research has rarely examined the impact of social interactions on these judgments. In our study\, we explored the role of group dynamics in moral decision-making by having small groups (4-5 participants) evaluate moral dilemmas first individually\, then collectively\, and finally individually a second time. Participants judged real-life and sacrificial moral dilemmas involving actions or inactions violating moral principles to benefit the greater good. Experiment 1 found that collective judgments were more utilitarian than individual judgments\, supporting the hypothesis that group deliberation temporarily reduces the emotional burden of violating moral norms. \nExperiment 2 measured participants’ state anxiety and moral judgments before\, during\, and after online interactions. Results again showed that collectives were more utilitarian\, reducing state anxiety during and after social interaction\, suggesting that stress reduction may explain the shift toward utilitarianism in group settings. We replicated this experiment using multi-agent large language models (LLMs) to test how artificial agents make moral decisions. Preliminary findings revealed that\, unlike humans\, groups of LLM agents were less utilitarian than individual agents. Analysis of the agents’ interactions showed a consistent pattern of virtue-signaling\, with LLMs emphasizing deontological reasoning (focusing on moral rules) rather than utilitarian principles. \nThis divergence from human behavior suggests that collective reasoning in AI systems is shaped by different dynamics\, likely due to how LLMs are trained to prioritize socially accepted norms. These results highlight important differences in moral decision-making between human and artificial intelligence\, offering new insights into the development of AI systems that more closely mirror human ethical reasoning\, particularly in complex\, real-world collective decision-making scenarios. \nImage created with DALL-E by Maria Ott
URL:https://www.scienceofintelligence.de/event/anita-keshmirian-many-minds-diverging-morals-human-groups-vs-ai-in-moral-decision-making/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/webp:https://www.scienceofintelligence.de/wp-content/uploads/2025/01/TMT_Anita_Keshmirian-2-e1736256383948.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250123T100000
DTEND;TZID=Europe/Berlin:20250123T110000
DTSTAMP:20260408T014409Z
CREATED:20250106T100435Z
LAST-MODIFIED:20250603T124327Z
UID:22994-1737626400-1737630000@www.scienceofintelligence.de
SUMMARY:Wannes Ooms (KU Leuven Centre for IT & IP Law - Imec): A General Introduction to the EU AI Act
DESCRIPTION:The EU AI Act introduces new obligations for providers and deployers of AI systems. In this presentation\, we will discuss the scope of the AI Act\, the different qualifications of AI systems under the Act\, and the related obligations and requirements. We also provide a look ahead at key deadlines\, the status of standards and conformity assessments\, and other responsibilities along the AI value chain. \nThis event will take place in person and will be streamed via Zoom. \nPhoto by Alex Knight on Unsplash
URL:https://www.scienceofintelligence.de/event/wannes-ooms-ku-leuven-centre-for-it-ip-law-imec-a-general-introduction-to-the-eu-ai-act/
LOCATION:Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/01/alex-knight-2EJCSULRwC8-unsplash-scaled.jpg
END:VEVENT
END:VCALENDAR