Abstract
The size and complexity of current Deep Artificial Neural Networks pose remarkable challenges to our attempts to explain and understand their workings. In this talk, I put forward a proposal for complementing existing efforts toward that aim, inspired by research on cognitive ontology in the philosophy of cognitive science. In particular, I suggest that, just as cognitive science needs theoretically and empirically grounded categories for cognitive tasks, cognitive capacities, and cognitive mechanisms, the study of AI systems needs theoretically and empirically grounded categories for functional tasks, capacities, and mechanisms. The resulting functional ontologies, I argue, can play a crucial role in informing further research in explainable and interpretable AI, and in refining our understanding of such systems. I illustrate this proposal by examining recent research on the computational mechanisms underlying specific capacities in Large Language Models, showing how appealing to functional ontologies can further enrich such research.