Connectionism is a theory that has grown dramatically in importance in recent years. It refers to the study of the design and functioning of ‘neural nets’. Neural nets are computers in the sense that they accept an input of information, process it, and then provide an output. However, they are not computers in the conventional sense. The computers that proliferate in modern society all work on the principle of serial processing. This means that they do one operation at a time, and whilst modern computers are able to do hundreds of thousands of calculations every second, this system still has some fundamental weaknesses.
For example, serial processing computers depend entirely upon what the programmer instructs them to do; they are not capable of effective learning without extremely complex software. Secondly, serial processing computers are quite slow when they have to search for something in their memory, because they must compare the target item with every other item stored there. Serial processing computers are therefore very accurate, but rely on brute computing power to overcome the inefficiencies inherent in the serial processing method.
These weaknesses indicate that serial processing computers are fundamentally different to the most complex computer on the planet: the human brain. We know that the brain is vastly slower than a modern serial computer in terms of the rate at which it can perform calculations, yet when faced with a stimulus, for example our grandmother, we can almost instantly recall a vast amount of stored information about that individual. The human brain is also notoriously inaccurate, with all of us experiencing memory lapses on a daily basis. Clearly, the human brain does not process information in a serial fashion.
The brain is made up of a huge net of cells called neurones. These neurones receive a continuous input from those neurones that precede them in the net. Depending on whether that input is excitatory or inhibitory, a neurone will either be resting or firing. When a neurone is firing it influences those neurones it is connected to further along the chain. As such the neurone is, in itself, a very simple unit that sums its input and automatically produces an output when that input reaches a certain point. Connectionism is an attempt to replicate this form of processing through the use of neural nets.
Neural nets are made up of a grid of nodes, each node being the equivalent of one neurone. Normally there is an input layer of nodes, one or more intermediate layers, and an output layer. All the nodes are connected to the nodes in the preceding and following layers. As this form of processing follows a number of paths through the net simultaneously, it is termed ‘Parallel Distributed Processing’ (PDP). By altering the amount of excitatory input (the connection weight) required to make a node ‘fire’, it is possible to program a neural net to respond to a certain input with a certain output. The ‘memory’ of the input is stored in the connection weights.
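The behaviour of a single node described above can be sketched in a few lines of code. The weights and threshold below are purely illustrative values, not taken from any particular connectionist model:

```python
# A minimal sketch of one node in a neural net: it sums its weighted
# inputs and 'fires' (outputs 1) when the sum reaches its threshold.
# Excitatory connections have positive weights, inhibitory ones negative.

def node_output(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0  # firing or resting

# Two excitatory inputs and one inhibitory input (illustrative values):
print(node_output([1, 1, 1], [0.6, 0.7, -0.4], 0.5))  # → 1 (fires)
print(node_output([1, 0, 1], [0.6, 0.7, -0.4], 0.5))  # → 0 (rests)
```

The ‘memory’ here lives entirely in the weight values: changing them changes which input patterns make the node fire.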
By using a learning algorithm it is possible for a neural net to alter its own connection weights, and consequently learn, independently of a programmer. As such, connectionism theoretically offers a whole new understanding of human learning and cognition, but up until now it has failed to live up to its early promise. There are a number of reasons why connectionism has failed to deliver. Firstly, there are some fundamental differences between a neural net and the brain. Secondly, no algorithm has yet been devised that is able to learn or process information of sufficient complexity for connectionism to be considered as a genuine means of explaining human cognition.
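One simple example of such a learning algorithm is the classic perceptron rule, sketched below: the net adjusts its own connection weights whenever its output disagrees with a target, with no programmer setting the weights by hand. The training data and learning rate are illustrative:

```python
# Sketch of a simple learning algorithm (the perceptron rule).
# The net learns its connection weights from examples alone.

def predict(inputs, weights, threshold):
    # The node fires (1) when the weighted sum reaches the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def train(examples, n_inputs, rate=0.5, epochs=20):
    weights = [0.0] * n_inputs
    threshold = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            # Nudge the weights and threshold whenever the
            # output disagrees with the target.
            error = target - predict(inputs, weights, threshold)
            weights = [w + rate * error * i for w, i in zip(weights, inputs)]
            threshold -= rate * error
    return weights, threshold

# Learn the logical AND of two inputs from examples alone:
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, threshold = train(examples, n_inputs=2)
print(predict([1, 1], weights, threshold))  # → 1
print(predict([1, 0], weights, threshold))  # → 0
```

The resulting ‘knowledge’ of AND is stored nowhere except in the learned connection weights and threshold.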
Thirdly, a number of more fundamental arguments have been put forward criticising connectionism’s claim to offer an alternative to classical cognition. It is these arguments that I will examine in this essay, as they focus solely on connectionism and cognition. The theory of connectionism is advancing at a considerable pace, and whether connectionism has the potential to create a cognitive network has been the subject of a fierce debate. This debate is important because much of connectionism’s significance lies in its claim to represent a Kuhnian shift in our understanding of learning and cognition.
If it could be demonstrated that connectionism, as it is now understood, does not have the capacity to cognitize, then as a concept it would be robbed of much of its allure. The debate about whether connectionism has the capacity to provide a new explanation for cognition began with a paper by Fodor and Pylyshyn (1988)(1). In their paper Fodor and Pylyshyn (hereafter F + P) introduced the ‘systematicity’ argument. This states that, almost by definition, a functioning cognitive system must demonstrate systematicity. Systematicity, argue F + P, is demonstrated by both humans and lesser organisms. For example, a pigeon that can be conditioned to respond to a blue circle must have the capacity to be conditioned to respond to a red square. Likewise in humans it is inconceivable that someone would be able to think the thought, “John loves the girl”, without being able to think, “the girl loves John”. Such systematicity, argued F + P, is only possible through syntactic structure and effective syntax.

1 – Fodor, J. A. & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3-71.
A syntactic structure is one that employs representations that are used in structure-sensitive ways, i.e. it is a structure that is based on some form of language. As F + P put it, “an empirically adequate cognitive theory must recognise … relations of syntactic and semantic constituency”. Early connectionist systems did not attempt to utilise syntactically structured representations. Rather, each node represented some part of the input. Again using F + P’s example, the input ‘John loves the girl’ would result in the ‘John’, ‘the girl’ and ‘love’ nodes becoming active, but the system would not be able to differentiate between John loving the girl and the girl loving John.
As such the system has no concept of context, of object and subject. Such systems are described by F + P as merely associative. The classical artificial intelligence (AI) concept of mentality and cognition, a concept referred to as classicism, is an excellent example of systematic cognition. Classicism asserts that cognition is a form of rule-governed symbol manipulation. In a classical system each input is tokened with a symbol, which can then be manipulated according to the rules, or program, which the system is using.
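The contrast between a merely associative representation and a classically structured one can be made concrete with a small sketch. The encoding below is illustrative, not F + P’s own:

```python
# Illustrative contrast for "John loves the girl".

# Associative: only which concepts are active is recorded, so the two
# sentences activate the same nodes and are indistinguishable.
associative_1 = {"John", "loves", "the girl"}
associative_2 = {"the girl", "loves", "John"}
print(associative_1 == associative_2)  # → True: role information is lost

# Classical: each constituent is tokened with an explicit role, so the
# representation is structure-sensitive and the two thoughts differ.
classical_1 = {"agent": "John", "relation": "loves", "patient": "the girl"}
classical_2 = {"agent": "the girl", "relation": "loves", "patient": "John"}
print(classical_1 == classical_2)  # → False: subject and object differ
```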
Whilst classicism adequately demonstrates systematicity, it does not offer an adequate explanation for human cognition. This is because, according to Smolensky, classicism’s commitment to programmable, representation-level rules contradicts the evidence that human cognition conforms to a ‘representation without rules’ concept. Smolensky goes so far as to describe classicism as being in a Kuhnian crisis. F + P do not claim that neural nets are incapable of systematicity through syntactic structure and effective syntax.
Rather, they claim that the only way in which syntactic structure is possible in a neural net is through an implementation of the classical concept of cognition. As such, all connectionist systems are doomed to fail because they will either be merely associative or a mere implementation of classicism. The charge that a connectionist system is a mere implementation of classical theory is a serious one. Connectionism’s claim that it marks a Kuhnian shift and offers a viable new way to understand cognition relies on its status as something novel.
If it can be proven that connectionism is really nothing but a rehash of the theory it is supposed to replace, then connectionism will be relegated to a place in the shadow of classicism. Those who criticise F + P’s analysis accept the systematicity aspect of their argument, and as such the only way for the critics to challenge F + P’s conclusions is to show that an effective syntactic structure is possible in a connectionist system. Acceptance of the systematicity argument also has considerable implications for the role of language in connectionism.
It raises language above being a mere example of a higher cognitive function. It makes language, in the loosest sense of the word, a prerequisite for genuine cognitive thought. Implementing language in a connectionist system is not simple. Language is, almost by definition, structured, whilst a connectionist system, if it is to avoid the charge of being an implementation of classicism, has to remain largely unstructured. Smolensky has written widely on the implementation of language in connectionist systems, and he was the first to respond to F + P’s analysis.
Smolensky answered F + P’s analysis on two grounds, which he termed the distributed (weakly compositional) case and the distributed (strongly compositional) case. The weakly compositional case is a response to F + P’s description of connectionist inputs as represented in the activity of an individual neurone. Smolensky says this is a naive observation that does not relate to modern connectionist systems. Smolensky’s (1) alternative form of representation utilises micro-features of the original input, creating a distributed representation.
Smolensky described it as “a family of distributed activity patterns”. The example that Smolensky gives is a full cup of coffee. This would be represented by the activity in a number of nodes, each of which would represent a microfeature of the cup of coffee. Relevant microfeatures could include such things as ‘hot liquid’, ‘porcelain curved surface’, and ‘finger-sized handle’. Smolensky argues that this offers an alternative to classical representation because an input is not tokened in the classical way.

1 – Smolensky, P. Connectionism, constituency, and the language of thought.
The representation of coffee is context-dependent: there is no single representation of coffee. Also, the microfeatures that are activated following the input of a cup of coffee might also be active when another input, such as a teapot, is presented. This theory correlates with what we know about human perception, in which a stimulus is broken down into its constituent elements in the brain before being reconstructed into what we perceive. However, the theory has been widely criticised; Quinlan (1) describes it as “both unworkable and untenable”.
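A distributed representation of this kind can be sketched as a pattern of activity over microfeature nodes, so that related inputs share part of their pattern. The microfeatures below extend the essay’s own examples with two illustrative additions (‘spout’ and ‘brown-surface’):

```python
# Sketch of a distributed representation: a concept is a pattern of
# activity over microfeature nodes, so related inputs (a cup of coffee
# and a teapot) share some active microfeatures.

MICROFEATURES = ["hot-liquid", "porcelain-curved-surface",
                 "finger-sized-handle", "spout", "brown-surface"]

def representation(active):
    # 1.0 where the microfeature is active for this input, else 0.0.
    return [1.0 if f in active else 0.0 for f in MICROFEATURES]

cup_of_coffee = representation({"hot-liquid", "porcelain-curved-surface",
                                "finger-sized-handle", "brown-surface"})
teapot = representation({"hot-liquid", "porcelain-curved-surface",
                         "finger-sized-handle", "spout"})

# Overlap between the two activity patterns:
shared = sum(a * b for a, b in zip(cup_of_coffee, teapot))
print(shared)  # → 3.0 shared microfeatures
```

Neither concept is tokened by a single node; each exists only as a pattern, and the patterns partially overlap.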
The weakly compositional case is an explanation of within-level processing, which means that it does not have to address the problem of a lack of role, of subject or object. However, there must be a form of between-level processing to provide for context if the theory is to answer F + P’s criticisms. Smolensky’s answer to this is tensor-product representations. In a tensor-product representation one vector represents the ‘filler’ (the constituent), and a second vector represents the role. The vectors are tensor-multiplied, and the result is a representation of both the filler and its role.
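The tensor-multiplication step can be sketched as an outer product of a filler vector and a role vector, with the resulting patterns summed to represent a whole proposition. The vectors below are illustrative choices, not Smolensky’s own:

```python
# Sketch of a tensor-product representation: a filler vector and a role
# vector are combined by the outer (tensor) product, giving one pattern
# that encodes which filler occupies which role.

def tensor_product(filler, role):
    # Outer product: every filler element times every role element.
    return [[f * r for r in role] for f in filler]

def add(a, b):
    # Superimpose two patterns by element-wise addition.
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

john, girl = [1.0, 0.0], [0.0, 1.0]        # filler vectors (illustrative)
agent, patient = [1.0, 1.0], [1.0, -1.0]   # role vectors (illustrative)

# "John loves the girl": John bound to the agent role,
# the girl bound to the patient role.
john_loves_girl = add(tensor_product(john, agent),
                      tensor_product(girl, patient))
# "The girl loves John" binds the same fillers to opposite roles
# and yields a different pattern:
girl_loves_john = add(tensor_product(girl, agent),
                      tensor_product(john, patient))
print(john_loves_girl == girl_loves_john)  # → False: the bindings differ
```

Unlike the purely associative scheme, the two sentences now produce distinct activity patterns, which is exactly the kind of structure sensitivity F + P demand.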