Distinguished Speaker Series: Cameron Buckner (Univ. of Houston), Imagination and the Prospects for Empiricist Artificial Intelligence
Abstract: In current debates over deep-neural-network-based AI, deep learning researchers have adopted the mantle of philosophical empiricism and associationism, while critics of deep learning have taken up the side of philosophical rationalism and nativism. These rationalist critics, however, often interpret associationism and empiricism in a way too caricatured to fit the views of any significant thinker in the empiricist tradition. In particular, most empiricists were faculty theorists; while they generally eschewed innate knowledge, they appealed to a variety of domain-general innate faculties, such as memory, imagination, and attention, to explain how the mind abstracts knowledge from experience. This dynamic is vividly illustrated in a centuries-displaced debate between David Hume and Jerry Fodor over the role of imagination in cognitive architecture. Fodor famously claimed that the ability to synthesize novel ideas and create new compositional representations is required for cognition. Fodor applauds Hume for agreeing on these points, but criticizes Hume’s use of the imagination to discharge these burdens. Fodor claims that such an appeal is “cheating” for an associationist, and notes that Hume never explains how the empiricist imagination actually works, merely assigning it a variety of essential functions to perform “as if by magic”.
More recently, deep learning researchers have claimed to create generative deep neural network models that perform one or more of the roles ascribed to the imagination by cognitive psychology and neuroscience. In this talk, I canvass these models and their achievements (especially Generative Adversarial Networks, Variational Autoencoders, and Generative Transformers) to arbitrate this dispute between Humean empiricism and Fodorian rationalism. Of particular interest will be various methods of latent space vector interpolation that appear to allow these models to create novel compositional representations, whether these methods still count as associationist in nature, and whether the purportedly crucial distinction between interpolation and extrapolation remains viable in the high-dimensional spaces over which these models operate.
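To give a concrete sense of what latent space vector interpolation involves, the following is a minimal sketch, not from the talk itself: it assumes a VAE- or GAN-style model whose encoder maps inputs to latent vectors, and shows how a straight-line path between two such vectors yields intermediate codes that a trained decoder could render as blends of the endpoints. The names `interpolate_latents`, `z_cat`, and `z_dog` are illustrative placeholders, not part of any real model.

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps=8):
    """Linearly interpolate between two latent vectors.

    Returns an array of shape (steps, dim): a straight-line path
    through latent space. Decoding the intermediate points is what
    lets generative models produce representations that mix the
    features of the two endpoints.
    """
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - alphas) * z_a + alphas * z_b

# Two hypothetical latent codes, standing in for encoder outputs.
rng = np.random.default_rng(0)
z_cat = rng.standard_normal(64)
z_dog = rng.standard_normal(64)

path = interpolate_latents(z_cat, z_dog, steps=8)
# path[0] recovers z_cat and path[-1] recovers z_dog; the rows in
# between are novel points no encoder ever produced directly.
```

Whether decoding such interpolated points counts as genuine compositionality, or merely as association between stored exemplars, is one of the questions the talk takes up.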