BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//scienceofintelligence.de - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:scienceofintelligence.de
X-ORIGINAL-URL:https://www.scienceofintelligence.de
X-WR-CALDESC:Events for scienceofintelligence.de
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250917T133000
DTEND;TZID=Europe/Berlin:20250917T143000
DTSTAMP:20260429T182304Z
CREATED:20250823T143642Z
LAST-MODIFIED:20250902T105200Z
UID:26648-1758115800-1758119400@www.scienceofintelligence.de
SUMMARY:Sören Auer (TIB – Leibniz Information Centre for Science and Technology and University Library)\, "Neuro-symbolic AI for Open Science"
DESCRIPTION:Abstract: We explore how neuro-symbolic AI\, i.e.\, combining neural networks with symbolic knowledge representation\, can drive the next generation of open\, transparent\, and responsible scientific research. By combining the adaptability of machine learning with the interpretability of structured knowledge\, neuro-symbolic approaches offer powerful tools for enhancing reproducibility\, semantic interoperability\, and trust in AI-driven science. With examples such as the Open Research Knowledge Graph and TIB’s AI research assistant\, we highlight how these methods support machine-readable research outputs\, facilitate cross-disciplinary collaboration\, and align with the core values of open science\, ultimately shaping a more inclusive and accountable research ecosystem. \nThe talk is part of the 2025 Berlin Summer School on Artificial Intelligence and Society\, jointly organized by the Berlin Institute for the Foundations of Learning and Data (BIFOLD)\, the Weizenbaum Institute for the Networked Society\, and Science of Intelligence (SCIoI)\, which will focus on a timely and critical topic: “Open Science and AI – Shaping the Future of Responsible Research.” \nTaking place from 15 to 18 September 2025\, this year’s Summer School invites early-career researchers and advanced Master’s students to explore how open data\, open collaboration\, and responsible practices can help build more transparent\, fair\, and trustworthy AI systems. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/soren-auer-tib-leibniz-information-centre-for-science-and-technology-and-university-library-neuro-symbolic-ai-for-open-science/
LOCATION:TU-Campus EUREF\, EUREF-Campus 9\, 10829 Berlin-Schöneberg\, Germany
CATEGORIES:2025 Berlin Summer School on Artificial Intelligence and Society
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/ChatGPT-Image-May-30-2025-01_17_03-PM.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250917T100000
DTEND;TZID=Europe/Berlin:20250917T110000
DTSTAMP:20260429T182304Z
CREATED:20250825T115632Z
LAST-MODIFIED:20250902T105136Z
UID:26664-1758103200-1758106800@www.scienceofintelligence.de
SUMMARY:Katrin Frisch (Ombuds Committee for Research Integrity in Germany)\, "Maintaining Integrity when Using AI in Your Research"
DESCRIPTION:Abstract: Since the release of ChatGPT and other generative AI applications\, research institutions as well as various stakeholders such as research funders and publishers have been discussing how the use of AI in research should be dealt with from the perspective of research integrity. While a consensus on some matters was quickly reached\, other issues are still being debated. This has led to a rather heterogeneous policy landscape\, which will likely remain in flux due to the fast-paced nature of AI. Additionally\, a recent survey by Wiley found that nearly two-thirds of respondents indicated that the lack of clear guidelines prevents them from using generative AI to the extent that they would like. \nIn this presentation\, participants will get an overview of the status quo of the current debate on AI and research integrity. The presentation has a practical bent\, covering how to disclose different use cases of AI in publications and what to keep in mind when it comes to working with AI in research\, for example regarding peer review\, writing grants\, or using AI-generated images. The presentation will also raise awareness about further issues that are still under debate\, such as recommendations on which tools to use\, questions of access\, and ethical aspects. \nThe talk is part of the 2025 Berlin Summer School on Artificial Intelligence and Society\, jointly organized by the Berlin Institute for the Foundations of Learning and Data (BIFOLD)\, the Weizenbaum Institute for the Networked Society\, and Science of Intelligence (SCIoI)\, which will focus on a timely and critical topic: “Open Science and AI – Shaping the Future of Responsible Research.” \nTaking place from 15 to 18 September 2025\, this year’s Summer School invites early-career researchers and advanced Master’s students to explore how open data\, open collaboration\, and responsible practices can help build more transparent\, fair\, and trustworthy AI systems. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/katrin-frisch-ombuds-committee-for-research-integrity-in-germany-maintaining-integrity-when-using-ai-in-your-research/
LOCATION:TU-Campus EUREF\, EUREF-Campus 9\, 10829 Berlin-Schöneberg\, Germany
CATEGORIES:2025 Berlin Summer School on Artificial Intelligence and Society
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/chatgtp7.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250915T103000
DTEND;TZID=Europe/Berlin:20250915T113000
DTSTAMP:20260429T182304Z
CREATED:20250822T144435Z
LAST-MODIFIED:20250912T140019Z
UID:26633-1757932200-1757935800@www.scienceofintelligence.de
SUMMARY:Elena Simperl (King's College London)\, "Open Data Infrastructure in the Age of Generative AI"
DESCRIPTION:This event is for registered attendees. \nOpen data infrastructure refers to the systems\, frameworks\, and processes put in place to collect\, store\, manage\, and share data generated or held by government\, science\, and other public institutions. It is meant to ensure that public data is accessible\, high-quality\, secure\, and usable by a wide range of stakeholders\, including the public. \nFor more than a decade\, we have witnessed millions of datasets made available via such infrastructure\, advancing research\, policymaking\, and innovation. However\, open data infrastructure is still far from realising its potential; non-technical users face significant barriers in navigating complex datasets and extracting meaningful information to support their decisions. \nFurthermore\, the global AI race has put substantial strains on this infrastructure\, with data holders forced to re-examine their ability to sustain critical public services. \nIn this talk I will walk through some of my recent research into addressing these challenges. I will start with a series of user studies\, which explore how professionals in various data-related roles engage with chatbots to find\, make sense of\, and use open data. \nDiving deeper into the accuracy issues suggested by these studies\, I will then describe two experiments\, which use machine unlearning and information leakage methods to understand if existing public authoritative sources of data are used by widely accessible generative AI tools. \nInformed by the findings\, my team developed PortalGPT\, a series of AI prototypes leveraging knowledge graphs\, large language models\, and retrieval-augmented generation to make open data more accessible and actionable for people with varying levels of data literacy. \nPortalGPT enhances dataset discovery by bridging the gap between user information needs and structured data queries and enables dataset exploration through interactive analysis tools. Through conversational natural language interactions\, users can seamlessly search\, analyse\, and explore knowledge from open data portals\, redefining the traditional methods of navigating and utilizing open datasets. \nThe talk is part of the 2025 Berlin Summer School on Artificial Intelligence and Society\, jointly organized by the Berlin Institute for the Foundations of Learning and Data (BIFOLD)\, the Weizenbaum Institute for the Networked Society\, and Science of Intelligence (SCIoI)\, which will focus on a timely and critical topic: “Open Science and AI – Shaping the Future of Responsible Research.” \nTaking place from 15 to 18 September 2025\, this year’s Summer School invites early-career researchers and advanced Master’s students to explore how open data\, open collaboration\, and responsible practices can help build more transparent\, fair\, and trustworthy AI systems. \nImage generated with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/elena-simperl-kings-college-london-open-data-infrastructure-in-the-age-of-generative-ai/
LOCATION:TU-Campus EUREF\, EUREF-Campus 9\, 10829 Berlin-Schöneberg\, Germany
CATEGORIES:2025 Berlin Summer School on Artificial Intelligence and Society
ATTACH;FMTTYPE=image/webp:https://www.scienceofintelligence.de/wp-content/uploads/2025/08/Sepp_Hochreiter.webp
END:VEVENT
END:VCALENDAR