How important is the concept of modularity to cognitive neuropsychology?

Modularity is a central idea in cognitive neuropsychology and refers to the claim that the brain (and, by isomorphism, the mind) is structured into cognitive modules. Such modules are considered to process information in functionally distinct ways. Barrett & Kurzban (2006, p. 1) describe modularity as “the notion that mental phenomena arise from the operation of multiple distinct processes rather than a single undifferentiated one”. How accurate is the idea of modularity, and how well can it be applied to cognitive neuropsychology?



Does the evidence from dissociations support modularity, or could it be that the brain and mind are organised in a more connectionist manner? Are the ‘multiple systems’ that Kolb & Whishaw talk of functionally (and anatomically) distinct, or is the architecture of the mind more inclined to interconnected spreading activation? To what extent does modularity apply; does it apply equally to basic and higher functions? The notion of modularity provides a useful theoretical tool for conceptualising the workings of the mind in a concrete way.

As such, it is useful for directing research into cognitive architecture and for refining models of cognition, as in the case of KF (Shallice & Warrington, 1970). KF’s symptoms (intact long-term memory, impaired short-term memory) were inconsistent with the prevailing model of memory (Atkinson & Shiffrin, 1968), leading to its revision. Converging operations (Garner, Hake & Eriksen, 1956) and the use of the dual-task paradigm in cognitive psychology have provided further evidence of functional dissociation.

However, in terms of disrupted cognition after acquired brain injury, the strictest concept of modularity has been reassessed in recent times. Historically, the Fodorian (1983) notion of innate, domain-specific processors, able to deal only with encapsulated information in narrow and automatic ways, may have limited theoretical progress through a ‘virtus dormitiva’: the overemphasis of an observed aspect of behaviour which is then regarded as the essence of the paradigm (Van der Heijden & Bem, 2002).

Fodor (2000) himself conceded that modularity may explain little about the central and higher processes of the mind (e.g., thinking, reasoning, reading, writing) and may apply only to peripheral systems. There has since been a shift towards modularity defined in terms of function, rather than strict automaticity and encapsulation. The concept of modularity has been hugely influential, so much so that it has led to the Massive Modularity Theory. The theory claims that not only peripheral functions but also central functions are modular.

It proposes that a modular mind was tightly enmeshed with our evolution and conferred an advantage on our ancestors. Marr (1976, p. 485) anticipates the massive-modularity idea in stating that “any large computation should be split up and implemented as a collection of small sub-parts. If a process is not designed this way a small change in one place will have consequences in another”. A modularised mind may confer an advantage by allowing organisms to continue to function reasonably well in the absence of any sub-part, and functional specialisation makes for effective information processing (Pinker, 1997).

This applies well to computer models of thought, but how well it applies to human thought is debatable; indeed, what Marr describes is exactly what happens in neurologically injured people. The consequences for behaviour tend to be diffuse and do not always fit with modular theory. Cognitive neuropsychology traditionally classified the effects of brain injuries into syndromes based on similar symptoms, for example Broca’s and Wernicke’s aphasias.

These were taken as some of the first evidence for modularity in the mind, as double dissociations (Teuber, 1955) provide strong evidence that speech production and comprehension are localised independently within the brain. Modularisation is reductionist and so is more precise than the classification of syndromes, as it fractionates brain function into distinct localisations, some of which are impaired after acquired brain injury while others are not. While perfect double dissociations may provide the most compelling evidence for modularity, they are relatively rare.

Double dissociations, dissociations and associations all stem from an ontological viewpoint that creates circularity between theory and observation. Taking the example of Broca’s and Wernicke’s aphasia, Van Orden, Pennington, & Stone (2001) point out that the dissociation of conceptual knowledge (in Broca’s aphasics) from syntactical knowledge (in Wernicke’s aphasics) stems from a theory of reading that assumes these functions are separate in the first place. Thus the theory drives the observations, which is unscientific.

Shallice (1988) pointed out that modularity is assumed a priori rather than established a posteriori, stating: “if modules exist, then … double dissociations are a relatively reliable way of uncovering them. Double dissociations do exist. Therefore modules exist” (p. 248). The conclusion is that the evidence for modularity from dissociations is conceptually flawed. Discussing transparency, the idea that pathological performance reveals which module has been damaged, Caramazza (1986) states that impaired cognitive performance after injury will reflect more than just the effect of modular disruption.

For example, how much of the neurological impairment is due to the true effect of damage to the modules involved in the disrupted behaviour? Some could be due to individual variations in processing performance, to compensatory efforts on the part of the patient, or to disruptions of processing pathways rather than of the modules themselves. As neuropsychological evidence comes from case studies, it is easy to see how individual variations play a large part, and it is almost impossible to disentangle them causally.

When it comes to finding dissociations in central functions, Ellis & Young (p. 16) use the metaphor of “trying to carve a meatloaf at its joints”: an impossible task. Similarly, much processing occurs below the level of consciousness. Consider the case of DB who, despite cortical blindness, was able to make accurate judgements about stimuli presented in the blind part of his visual field, the phenomenon of blindsight (Weiskrantz, 1986). Many cases do not fit the modular theory, and it is postulated that only the very rare pure cases truly do.

Marshall & Newcombe (1973) argued for a double dissociation between lexical and non-lexical reading in surface and deep dyslexia, yet Shallice (1988) suggests that deep dyslexia is no longer thought to be a pure dissociation. Funnell (1983) describes the case of WB, who had intact lexical reading (e.g., PINT) but impaired non-lexical reading (e.g., the pseudoword BINT). He could not pronounce pseudowords when reading (due to impaired non-lexical function), yet was able to do so when asked to repeat them.

Dickerson (1999) reported similar findings with YD, whose reading of multi-letter words was almost intact compared with that of single-letter units. Modular theory takes no account of this kind of data-driven information, as represented knowledge is arbitrarily related to the stimulus in a given environment. It is a theory based on individual modules performing specific operations on the input they are designed to receive; the stimulus is therefore of prime importance.

Clearly, the evidence for dual-process modularity in reading is open to alternative interpretation once one considers ‘top-down’ as well as ‘bottom-up’ processes (Van Orden et al., 2001). Dickerson & Johnson (2004) interpret the case of YAH from a connectionist rather than a modular perspective. YAH’s main area of difficulty was with stored semantic knowledge, with evidence that the problem arose from failures in activation patterns.

The case was therefore interpreted in terms of an interaction model rather than a traditional modular model, which would require many distinct deficits to explain YAH’s difficulties. Plaut & Shallice (1993), using a connectionist network program designed to pronounce words based on meaning (representative of the double dissociation of abstract and concrete words found in deep dyslexia), discovered that their non-modular system also gave rise to double dissociations between abstract and concrete words.
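The logic of that demonstration can be sketched in a few lines. This is a deliberately toy illustration, not Plaut & Shallice's actual network: the unit counts, item representations, readout weights and threshold are all invented for the example. The point it reproduces is that one distributed network with graded, overlapping specialisation (no encapsulated modules) can still yield a crossed double dissociation when "lesioned" in different places:

```python
import numpy as np

# Toy sketch: one shared pool of units encodes both concrete and abstract
# words, with overlapping (not modular) involvement. A "lesion" simply
# zeroes a subset of units.
n_units = 10

# Hypothetical hidden representations: 4 concrete and 4 abstract items.
# Concrete items load on units 0-5, abstract on units 4-9 (4-5 shared).
concrete = np.zeros((4, n_units))
concrete[:, :6] = 1.0
abstract = np.zeros((4, n_units))
abstract[:, 4:] = 1.0

readout = np.ones(n_units)  # one shared output pathway for both word types

def accuracy(items, lesion_mask, threshold=4.0):
    """Proportion of items still 'named' correctly after the lesion: an item
    counts as correct if its surviving activation clears the (arbitrary,
    toy) output threshold."""
    return float(np.mean((items * lesion_mask) @ readout >= threshold))

intact = np.ones(n_units)
lesion_a = intact.copy()
lesion_a[:4] = 0.0   # damage units 0-3: mainly carries concrete items
lesion_b = intact.copy()
lesion_b[6:] = 0.0   # damage units 6-9: mainly carries abstract items

# Two lesions to one overlapping network yield a crossed double
# dissociation, despite there being no "concrete module" or
# "abstract module" anywhere in the architecture.
print(accuracy(concrete, lesion_a), accuracy(abstract, lesion_a))
print(accuracy(concrete, lesion_b), accuracy(abstract, lesion_b))
```

Lesion A abolishes performance on concrete items while leaving abstract items intact, and lesion B does the reverse, which is exactly the inference pattern that, in patients, is usually taken as evidence for separate modules.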

Therefore, they conclude that double dissociations seen in patients do not automatically imply modularity. They also found that during trials with the connectionist program, two equivalent lesions occasionally gave rise to opposite symptoms, producing statistically significant double dissociations between the two tasks. They challenge functional modularity by questioning the inference that a particular process is specialised whenever it dissociates in two different ways. Shallice (1988, p. 250) describes “six types of system capable of producing double dissociations when damaged … modular, coupled, continuum of processing space, overlapping processing region, semi-module and micro-process dissociation”. Connectionist findings are damaging for the view that only modular systems can produce double dissociations.

A large body of pro-modularity evidence has come from fMRI studies demonstrating that localised areas of the brain are activated during particular functions. However, Sirotin & Das (2009) claim that changes in blood flow (as detected by fMRI) do not reflect parallel changes in neuronal activity.

Indeed, the two can be completely unconnected according to their study. Conclusions and inferences from fMRI scans about localised modules must be carefully reinterpreted in the light of these data. In order to evaluate the importance of modularity to cognitive neuropsychology, it is necessary to consider some implications of its usefulness. Plaut (1995) argues that in order for theoretical progress to be made, clinical observations from case studies must also be considered in context with empirical data and computational models.

Ellis (1987) exemplifies the view that pure cases are exceedingly rare and that the majority are mixed cases (of deep dyslexia in this instance, though with other deficits too), which are much harder to interpret in terms of resolving the modularity debate. Given the connectionist evidence, double dissociations alone cannot establish that modules exist. However, the conclusions from connectionism must also be interpreted with caution. For example, is a single lesion in a computer program a valid representation of human pathological function?

According to Plaut, such programs may not represent the overall function of the brain because of their small scale compared with human cognition. The lack of pure cases demonstrates very clearly that human processing may not be adequately represented by relatively simple computer models. As with so much of human behaviour, it is likely that general function (whether impaired or intact) occurs on a continuum or spectrum with great individual variance (Caramazza, 1986). Farah & McClelland (1991) describe micro-inferences and attractor networks rather than modules.

Similarly, Pinker (1997) prefers to describe modules as akin to “road kill sprawling messily over the bulges and crevasses of the brain” (p. 31), so perhaps the past dogmatism about modularity is partly a matter of terminology and concept formation. Hofstadter (2001) proposes an active-symbols computer model of high-level cognition which appears to link modularity and connectionism. He proposes that human high-level processes result from many individual computations running in parallel (like modules) that are activated and moderated by other, co-occurring networks of activation (connectionism).

Such an interactionist approach is likely to offer the most comprehensive explanation. However, the notorious difficulty of specifying exactly what constitutes a particular measurable concept or unit of cognition makes interpretation of case studies difficult, and makes comparisons between cases even more problematic when drawing meaningful conclusions about the architecture of the mind.

A final note is to consider what Sacks (1986) describes as cognitive neuropsychology’s constant search for deficits based on subtractivity. What tends to be ignored is that many cases of acquired brain injury result in the development of special compensatory skills. For example, he describes aphasics as having developed a heightened sensitivity to tone and intonation of voice, so much so that he concludes they are able to detect liars better than those without aphasia.

This is evidence of the plasticity of the brain and mind. How well the appearance of these new skills fits with modular theory is debatable: did the modules exist premorbidly as underused back-ups to a main system, or do they result from spreading activation and the creation of new neural networks to meet postmorbid functional requirements? This debate will probably continue, as the question is extremely hard to settle.