Abstract: We explore how neuro-symbolic AI, i.e., combining neural networks with symbolic knowledge representation, can drive the next generation of open, transparent, and responsible scientific research. By pairing the adaptability of machine learning with the interpretability of structured knowledge, neuro-symbolic approaches offer powerful tools for enhancing reproducibility, semantic interoperability, and trust in AI-driven science. With examples such as the Open Research Knowledge Graph and TIB’s AI research assistant, we highlight how these methods support machine-readable research outputs, facilitate cross-disciplinary collaboration, and align with the core values of open science, ultimately shaping a more inclusive and accountable research ecosystem.
This talk is part of the 2025 Berlin Summer School on Artificial Intelligence and Society; check out the website for more info.
Image created with DALL-E by Maria Ott.