Abstract
This article explores the consequences that Fodor and Pylyshyn's 1988 criticism of connectionist representations could have for recent developments in Machine Learning. The challenge these authors raise is usually called the challenge of systematicity. In particular, we analyze the so-called compositionality principle, which holds that the meaning of a sentence is determined by the meaning of its parts, a principle that, according to these authors, cannot be accounted for through distributed (subsymbolic) representations. To address this point, we present an overview of the state of Cognitive Science and Artificial Intelligence (AI) at the end of the 20th century, where the distinction between symbolic and subsymbolic approaches arose, and analyze the development of both disciplines during the first decades of the 21st century. We conclude that the statistical tools adopted by the field of Machine Learning retain characteristics that allow these techniques to be classified under the symbolic/subsymbolic distinction, so that Fodor and Pylyshyn's arguments can in principle be applied to them. The consequences of this are related to the so-called problem of epistemic opacity.
Comments
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 Unported License (CC BY-NC-ND 3.0).
Copyright (c) 2024 Revista de Filosofía de la Universidad de Costa Rica