Computational Explanation and Mechanistic Explanation of Mind

Gualtiero Piccinini

8/4/2005

 

Department of Philosophy

University of Missouri – St. Louis
599 Lucas Hall (MC 73)
One University Blvd.
St. Louis, MO 63121-4499 USA

Email: piccininig@umsl.edu

 

 

Abstract

According to the computational theory of mind (CTM), mental capacities are explained by inner computations, which in biological organisms are realized in the brain.  Computational explanation is so popular and entrenched that it’s common for scientists and philosophers to assume CTM without argument.  But if we presuppose that neural processes are computations before investigating, we turn CTM into dogma.  If, instead, our theory is to be genuinely empirical and explanatory, it needs to be empirically testable.  To bring empirical evidence to bear on CTM, we need an appropriate notion of computational explanation. 

In order to ground an empirical theory of mind, as CTM was designed to be, a satisfactory notion of computational explanation should satisfy at least two requirements:  it should employ a robust notion of computation, such that there is a fact of the matter as to which computations are performed by which systems, and it should not be empirically vacuous, as it would be if CTM could be established a priori.  This paper explicates the notion of computational explanation and briefly discusses whether it plausibly applies to the mechanistic explanation of mind.


[A] psychological theory that attributes to an organism a state or process that the organism has no physiological mechanisms capable of realizing is ipso facto incorrect (Fodor 1968, p. 110).

 

Computational Explanation and Mental Capacities

When we explain the specific capacities of computing mechanisms, we appeal to the computations they perform.  For example, calculators—unlike, say, air conditioners—have the peculiar capacity of performing multiplications:  if we press appropriate buttons on a (well functioning) calculator in the appropriate order, the calculator yields an output that we interpret to be the product of the numbers represented by the input data.  Our most immediate explanation for this capacity is that under the relevant conditions, calculators perform an appropriate computation—a multiplication—on the input data.  This is a paradigmatic example of computational explanation.
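
To make the pattern concrete, here is a minimal sketch in Python (my own illustration, not a description of any actual calculator's circuitry):  the toy device's capacity to yield products is explained by an inner operation performed on the input data.

# A toy illustration of the explanatory pattern described above (not a model of
# any real calculator): the device's capacity to multiply is explained by an
# inner computation performed on the input data.

def toy_calculator(button_presses):
    """Simulate pressing buttons such as '12', '*', '34', '=' on a toy calculator."""
    digits_a, operator, digits_b, equals = button_presses
    assert operator == '*' and equals == '='
    a, b = int(digits_a), int(digits_b)   # the input data entered by the user
    result = a * b                        # the inner computation that explains the capacity
    return str(result)                    # the output, interpreted as the product

print(toy_calculator(['12', '*', '34', '=']))   # prints '408'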

Animals, and especially human beings, respond to their environments in extraordinarily subtle, specialized, and adaptive ways.  In explaining those capacities, we often appeal to mentalistic constructs such as perceptions, memories, intentions, etc.  We also recognize that the mechanisms that underlie mental capacities are neural mechanisms—no brains, no minds.[1]  But it is difficult to connect mentalistic constructs to their neural realizers—to see how perceptions, memories, intentions, and the like, could be realized by neural states and processes.  In various forms, this problem has haunted the sciences of mind and brain since their origin.

In the mid-20th century, Warren McCulloch and others devised an ingenious solution:  mental capacities are explained by computations realized in the brain (McCulloch and Pitts 1943, Wiener 1948, von Neumann 1958, see Piccinini 2004a for more on the origin of this view).  This is the computational theory of mind (CTM), which explains mental capacities more or less in the way we explain the capacities of computing mechanisms.  There are many versions of CTM:  “classical” versions, which tend to ignore the brain, and “connectionist” versions, which are ostensibly inspired by neural mechanisms.  According to all of them, the brain is a computing mechanism, and its capacities—including its mental capacities—are explained by its computations.

CTM has encountered resistance.  Some neuroscientists are skeptical that the brain may be adequately characterized as a computing mechanism in the relevant sense (e.g., Gerard 1951, Rubel 1985, Perkel 1990, Edelman 1992, Globus 1992).  Some psychologists think computational explanation of mental capacities is inadequate (e.g., Gibson 1979, Varela, Thompson, Rosch 1991, Thelen and Smith 1994, Port and van Gelder 1995, Johnson and Erneling 1997, Ó Nualláin, Mc Kevitt and Mac Aogáin 1997, Erneling and Johnson 2005).  And some philosophers find it implausible that certain mental capacities—especially consciousness—may be explained by computation (e.g., Taube 1961, Block 1978, Putnam 1988, Maudlin 1989, Mellor 1989, Bringsjord 1995, Dreyfus 1998, Harnad 1996, Searle 1992, Penrose 1994, van Gelder 1995, Wright 1995, Horst 1996, Lucas 1996, Copeland 2000, Fetzer 2001).  Whether CTM can explain every aspect of every mental capacity remains controversial.  But without a doubt, CTM is a compelling theory.

Digital computers are more similar to minds than anything else known to us.  Computers can process information, perform calculations and inferences, and exhibit a dazzling variety of capacities, including that of guiding sophisticated robots.  Computers and minds are sufficiently analogous that CTM appeals to most of those who are searching for a mechanistic explanation of mind.  As a consequence, CTM has become the mainstream explanatory framework in psychology, neuroscience, and naturalistically inclined philosophy of mind (e.g., Newell and Simon 1976, Fodor 1975, Pylyshyn 1984, Churchland and Sejnowski 1992).  In some quarters, computational explanation is now so entrenched that it seems commonsensical.

More than half a century after CTM’s introduction, it’s all too easy—and all too common—to take for granted that mental capacities are explained by neural computations.  A recent book, which purports to defend CTM, begins by asking “how the computational events that take place within the spatial boundaries of your brain can be accounted for by computer science” (Baum 2004, p. 1, emphasis added).  But if we presuppose that neural processes are computational before investigating, we turn CTM into dogma.  If, instead, our theory is to be genuinely empirical and explanatory, it needs to be empirically testable.  To bring empirical evidence to bear on CTM, we need an appropriate notion of computational explanation. 

In order to ground an empirical theory of mind, as CTM was designed to be, a satisfactory notion of computational explanation should satisfy at least two requirements.  First, it should employ a robust notion of computation, such that there is a fact of the matter as to which computations are performed by which systems.  This might be called the robustness requirement.  Second, it should not be empirically vacuous, as it would be if CTM could be established a priori.  This might be called the non-vacuity requirement.  This paper explicates the notion of computational explanation and briefly discusses whether it plausibly applies to the mechanistic explanation of mind.

 

Computational Explanation and Representations

According to a popular view, a computational explanation is one that postulates the existence of internal representations within a mechanism and the appropriate manipulation of representations by the mechanism.  According to this view, which I call the semantic view of computational explanation, computations are individuated at least in part by their semantic properties.  A weaker variant is that computations are processes defined over representations.  The semantic view is stated more or less explicitly in the writings of many supporters of CTM (e.g., Cummins 1983, Churchland and Sejnowski 1992, Fodor 1998, Shagrir 2001).  The semantic view is appealing because it fits well both with our practice of treating the internal states of computing mechanisms as representations and with the representational character of those mentalistic constructs, such as perceptions and intentions, that are traditionally employed in explaining mental capacities.  But the semantic view of computational explanation faces insuperable difficulties.  Here I have room only for a few quick remarks; I have discussed the semantic view in detail elsewhere (Piccinini 2004b).

            For present purposes, we need to distinguish between what may be called essential representations and accidental ones.  Essential representations are individuated, at least in part, by their content.  In this sense, if two items represent different things, then they are different kinds of representations.  For instance, at least in ordinary parlance, mentalistic constructs are typically individuated, at least in part, by their content:  the concept of smoke is individuated by the fact that it represents smoke, while the concept of fire is individuated by the fact that it represents fire.[2]  Representations in this sense of the term have their content essentially:  you can’t change their content without changing what they are.  By contrast, accidental representations are individuated independently of their content; they represent one thing or another (or nothing at all) depending on whether they are interpreted and how.  Strings of letters of the English alphabet are representations of this kind:  they are individuated by the letters that form them, regardless of what they mean or even whether they mean anything at all.  For instance, the word “bello” means (roughly) war to speakers of Latin, beautiful to speakers of Italian, and nothing in particular to speakers of most other languages.[3]

            The main problem with the semantic construal of computational explanation is that it requires essential representations, but all it has available is accidental ones.  If we try to individuate computations by appealing to the semantic properties of accidental representations, we obtain an inadequate notion of computation.  For the same accidental representation may represent different things to different interpreters.  By the same token, a process that is individuated by reference to the semantic properties of accidental representations may be taken by different interpreters to compute different things—to constitute different computations—without changing anything in the process itself.  Just as speakers of different languages can interpret the same string of letters in different ways, under the semantic view (plus the notion of accidental representation) different observers could look at the same activity of the same mechanism and interpret it as two different computations.  But a process that changes identity simply by changing its observer is not the kind of process that can support scientific generalizations and explanations.  Such a notion of computational explanation fails to satisfy the robustness requirement.  To obtain a genuinely explanatory notion of computation, the semantic view requires the first notion of representation—essential representation.  In fact, those who have explicitly endorsed the semantic view have done so on the basis of the notion of essential representation (e.g., Burge 1986, Segal 1991).

            But there is no reason to believe that computational states, inputs, and outputs have their semantic properties essentially (i.e., that they are essential representations; cf. Egan 1995).  On the contrary, a careful look at how computational explanation is deployed by computer scientists reveals that computations are individuated by the strings of symbols on which they are defined and by the operations performed on those symbols, regardless of which, if any, interpretation is applied to the strings.  Psychologists and neuroscientists rarely distinguish between explanation that appeals to computation and explanation that appeals to representation; this has convinced many—especially philosophers of mind—that computational explanation of mind is essentially representational.  But this is a mistake.  Computational explanation appeals to inner computations, and computations are individuated independently of their semantic properties.  Whether computational states represent anything, and what they represent, is an entirely independent matter. 

The point is not that mental states are not representations, or that if they are representations, they must be accidental ones.  The point is also not that representations play no explanatory role within a theory of mind and brain; they may well play such a role (see below).  The point is simply that if mental or neural states are computational, they are not so in virtue of their semantic properties.  For this reason among others (cf. Piccinini 2004b), the semantic view of computational explanation is inadequate.

 

Computational Explanation and Computational Modeling

Another popular view is that a computational explanation is one that employs a computational model to describe the capacities of a system.  I will refer to this as the modeling view of computational explanation.  According to the modeling view, roughly speaking, anything that is described by a computation is also a computing mechanism that performs that computation.  Although this view is not as popular as the semantic view, it is present in the works of many supporters of CTM (e.g., Putnam 1967, Churchland and Sejnowski 1992; cf. Piccinini 2004c for discussion).  The modeling view is tempting because it appears to gain support from the widespread use of computational models in the sciences of mind and brain.  Nevertheless, it is even less satisfactory than the semantic view.

The main difficulty with the modeling view is that it turns so many things into computing mechanisms that it fails the non-vacuity requirement.  Paradigmatic computational explanations are used to explain peculiar capacities of peculiar mechanisms.  We normally use them to explain what calculators and computers do, but not to explain the capacities of most other mechanisms around us.  When we explain the capacities of air conditioners, lungs, and other physical systems, we employ many concepts, but we normally do not appeal to computations.  This gives rise to the widespread intuition that among physical systems, only a few special ones are computing mechanisms in an interesting sense.  This intuition is an important motivation behind CTM.  The idea is that the mental capacities that organisms exhibit—as opposed to their capacities to breathe, digest, or circulate blood—may be explained by appeal to neural computations.

            And yet, it is perfectly possible to build computational models of many physical processes, including respiration, digestion, and even galaxy formation.  According to the modeling view, this is sufficient to turn lungs, stomachs, and galaxies into computing mechanisms, in the same sense in which calculators are computing mechanisms and brains may or may not be.  As a consequence, many things—perhaps all things—are turned into computing mechanisms.  Some authors have accepted this consequence; they maintain that everything is a computing mechanism (e.g., Putnam 1967, Churchland and Sejnowski 1992, Wolfram 2002).  As we have seen, this is counterintuitive, because it conflicts with our restricted use of computational explanation.  Still, intuitions are often overridden by strong arguments.  Why not accept that everything is a computing mechanism?

The problem is that if everything is a computing mechanism (in the relevant sense), computational descriptions lose their explanatory character to the point that CTM is trivialized.  To determine with some precision which class of systems can be described computationally to which degree of accuracy is difficult (for more on this, see Piccinini 2004d).  Nevertheless, it is obvious that everything, including many neural systems, can be described computationally with some accuracy.  This fact (under the modeling view of computational explanation) establishes the truth of CTM a priori, without requiring empirical investigation.  But CTM was intended to be an empirical theory, grounded on an empirical hypothesis about the kinds of mechanisms that explain mental capacities.  If the versatility of computational description is sufficient to establish that neural mechanisms perform computations, then this cannot be an empirical hypothesis about the explanation of mental capacities—it is merely the trivial application to brains of a general thesis that applies to everything.  In other words, the modeling view of computational explanation renders CTM empirically vacuous.
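
To make the point vivid (the example and the numbers are mine, not drawn from the literature cited here), a few lines of Python give a computational description, with some accuracy, of a plainly non-computational process, namely an object cooling according to Newton's law of cooling.  On the modeling view, being describable in this way would be enough to make the cooling object a computing mechanism.

# A computational model of a non-computational process: Newton's law of cooling,
# integrated by simple Euler steps. The modeling view would count the modeled
# object itself as a computing mechanism merely because this description exists.

def simulate_cooling(T0, T_ambient, k, dt, steps):
    """Return the temperature trajectory of an object cooling toward T_ambient."""
    T = T0
    trajectory = [round(T, 2)]
    for _ in range(steps):
        T += -k * (T - T_ambient) * dt   # Euler step for dT/dt = -k (T - T_ambient)
        trajectory.append(round(T, 2))
    return trajectory

# A cup of coffee cooling in a 20-degree room (all values made up).
print(simulate_cooling(T0=90.0, T_ambient=20.0, k=0.1, dt=1.0, steps=5))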

            A further complication facing the modeling view is that the same physical system may be given many computational descriptions that are different in nontrivial respects, for instance because they employ different computational formalisms, different assumptions about the system, or different amounts of computational resources.  This makes the answer to the question of which computation is performed by a system indeterminate.  In other words, not only is everything computational, but also, according to the modeling view, everything performs as many computations as it has computational descriptions.  As a consequence, this notion of computational explanation fails the robustness requirement.  The modeling view—even more than the semantic view—is inadequate for assessing CTM.

A variant of the modeling view is that a computational explanation is one that appeals to the generation of outputs on the grounds of inputs and internal states.  But without some constraints on what counts as inputs and outputs of the appropriate kind, this variant faces the same problem.  Every capacity and behavior of every system can be interpreted as the generation of an output from an input and internal states.  Our question may be reformulated in the following way:  which input-output processes, among the many exhibited by physical systems, deserve to be called computational in the relevant sense?

 

Mechanistic Explanation

To make progress on this question, we should pay closer attention to the explanatory strategies employed in physiology and engineering.  For brains and computing mechanisms are, respectively, biological and artificial mechanisms; it is plausible that they can be understood by the same strategies that have proven successful for other biological and artificial mechanisms.

            There is consensus that the capacities of biological and artificial mechanisms are to be explained mechanistically (Bechtel and Richardson 1993, Machamer, Darden, and Craver 2000, Craver 2001, Glennan 2002).  Although different accounts of mechanistic explanation vary in their details, the following sketch is quite uncontroversial.  A mechanistic explanation involves the partition of a mechanism into components, the assignment of functions or capacities to those components, and the identification of organizational relations between the components.  For any capacity of a mechanism, a mechanistic explanation invokes appropriate functions of appropriate components of the mechanism, which, when appropriately organized under normal conditions, generate the capacity to be explained.  The components’ capacities to fulfill their functions may be explained by the same strategy, namely, in terms of the components’ components, functions, and organization.

For example, the capacity of a car to run is mechanistically explained by the following:  the car contains an engine, wheels, etc.; under normal conditions, the engine generates motive power, the power is transmitted to the wheels by appropriate components, and the wheels are connected to the rest of the car so as to carry it for the ride.  Given that the capacities of mechanisms are explained mechanistically, it remains to be seen how mechanistic explanation relates to computational explanation.

 

Computational Explanation, Functional Explanation, and Mechanistic Explanation

Computational explanation is not always distinguished explicitly from mechanistic explanation.  If we abstract away from some aspects of the components and their specific functions, a mechanistic explanation may be seen as explaining the capacities of a mechanism by appealing only to its internal states and processes plus its inputs, without concern for how the internal states and processes are physically implemented.  This “abstract” version of mechanistic explanation is often called functional analysis or functional explanation (Fodor 1968, Cummins 2000).  As we have seen, a variant of the modeling view construes computational explanation as explanation in terms of inputs and internal states—that is, as functional explanation.  Under this construal, it becomes possible to identify these two explanatory strategies with one another.  In fact, many authors do not explicitly distinguish between computational explanation, functional explanation, and in some cases even mechanistic explanation (e.g., Fodor 1968, Dennett 1978, Marr 1982, Churchland and Sejnowski 1992, Eliasmith 2003).

Identifying computational explanation with functional or mechanistic explanation may seem advantageous, because it appears to reconcile computational explanation with the well-established explanatory strategy that is in place in biology and engineering.  To be sure, neuroscientists, psychologists, and computer scientists explain the capacities of brains and computers by appealing to internal states and processes.  Nevertheless, a simple identification of computational explanation and functional explanation is based on an impoverished understanding of both explanatory strategies.  We have already seen above that computational explanation must be more than the appeal to inputs and internal states and processes, on pain of losing its specificity to a special class of mechanisms and trivializing CTM.  For an explanation to be genuinely computational, some constraints need to be put on the nature of the inputs, outputs, and internal states and processes. 

A related point applies to functional explanation.  The strength of functional explanation derives from the possibility of appealing to different kinds of internal states and processes.  These processes are as different as digestion, refrigeration, and illumination.  Of course, we could abstract away from the differences between all these processes and lump them all together under some generic notion of computational process.  But then, all functional explanations would look very much alike, explaining every capacity in terms of inner computations.  Most of the explanatory force of functional explanations, which depends on the differences between processes like digestion, refrigeration, and illumination, would be lost.

In other words, if functional explanation is the same as computational explanation, then every artifact and biological organ is a computing mechanism.  The brain is a computing mechanism in the same sense in which a stomach, a freezer, and a light bulb are computing mechanisms.  This is not only a counterintuitive result.  As before, this result trivializes CTM, which was designed to invoke a special activity (computation) to explain some special capacities (mental ones) as opposed to others.  To avoid this consequence, we should conclude that computation may well be a process that deserves to be explained functionally (or even better, mechanistically), but it should not be identified with every process of every mechanism.  Computation is only one process among others.

 

Computational Explanation as a Kind of Mechanistic Explanation

Once again, we can make progress by paying closer attention to the explanatory strategies employed by the relevant community of scientists—specifically, computer scientists and engineers.  In understanding and explaining the capacities of calculators or computers, computer scientists do not limit themselves to functional explanation:  they employ full-blown mechanistic explanation.  They analyze computing mechanisms into processors, memory units, etc., and they explain the computations performed by the mechanisms in terms of the functions performed by appropriately organized components.  But computer scientists employ mechanistic explanations that are specific to their field:  the components, functions, and organizations employed in computer science are of a distinct kind.  If this is correct, then the appropriate answer to our initial question about the nature of computational explanation is that it is a distinct kind of mechanistic explanation.  Which kind?

There are many kinds of computing mechanisms, each of which comes with its specific components, functions, and functional organization.  If we want to characterize computational explanations in a general way, we ought to identify features that are common to the mechanistic explanation of all computing mechanisms.

            The modern, mathematical notion of computation, which goes back to work by Alan Turing (1936-7) and others, can be formulated in terms of the mathematical notion of strings of symbols.  For example, letters of the English alphabet are symbols, and concatenations of letters (words) are strings of symbols.  More generally, symbols are states or particulars that belong to finitely many distinguishable types, and strings are concatenations of symbols.  In concrete computing mechanisms, strings of symbols are realized by concrete concatenations of digits.  Digits and strings of digits are states or particulars that are individuated by the fact that they are unambiguously distinguishable by the mechanisms that manipulate them.  As I have argued elsewhere, to a first approximation, concrete computing mechanisms are mechanisms that manipulate strings of digits in accordance with rules that are general—namely, they apply to all strings from the relevant alphabet—and that depend on the input strings (and perhaps internal states) for their application.[4]  A computational explanation, then, is a mechanistic explanation in which the inputs, outputs, and perhaps internal states of the system are strings of digits, and the processing of the strings can be accurately captured by appropriate rules.
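
As a minimal sketch of this notion (my own illustration; the binary alphabet and the particular rule are arbitrary choices), consider a mechanism whose inputs and outputs are strings over the alphabet {'0', '1'} and whose behavior is fixed by a single rule, defined for every such string, that depends only on the input string:

# A minimal sketch of computation over strings of digits: a fixed rule, defined
# for every string over the alphabet {'0', '1'}, maps each input string to an
# output string (here, the rule is binary increment).

ALPHABET = {'0', '1'}

def binary_successor(s):
    """Map any binary string to the string denoting its successor."""
    assert s and set(s) <= ALPHABET   # the rule is general: it applies to every string over the alphabet
    digits = list(s)
    i = len(digits) - 1
    while i >= 0 and digits[i] == '1':   # propagate the carry from right to left
        digits[i] = '0'
        i -= 1
    if i >= 0:
        digits[i] = '1'
    else:
        digits.insert(0, '1')            # all digits were '1': prepend a new digit
    return ''.join(digits)

print(binary_successor('1011'))   # '1100'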

            This analysis of computing mechanisms applies both to so-called digital computing mechanisms and to those classes of connectionist networks whose input-output functions can be analyzed within the language of computability theory (e.g., McCulloch and Pitts 1943, Minsky and Papert 1969, Hopfield 1982, Rumelhart and McClelland 1986, Siegelmann 1999).  In other words, any connectionist network whose inputs and outputs can be characterized as strings of digits, and whose input-output function can be characterized by a fixed rule defined over the inputs (perhaps after a period of training), counts as a computing mechanism in the present sense.  The analysis does not apply to so-called analog computing mechanisms.  The reason for this omission is that analog computing mechanisms are a distinct class of mechanisms and deserve a separate analysis.  I will not discuss analog computing mechanisms here because they are not directly relevant to CTM.[5]
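
For instance (a sketch of my own, offered in the spirit of McCulloch-Pitts networks rather than as a reconstruction of any particular model in the cited literature), a tiny network of threshold units takes binary strings as inputs and yields binary strings as outputs according to a rule fixed by its weights and thresholds:

# A sketch in the spirit of McCulloch-Pitts networks: binary inputs and outputs,
# threshold units, and an input-output rule fixed by weights and thresholds.

def threshold_unit(inputs, weights, threshold):
    """Fire (output 1) just in case the weighted sum of the inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def tiny_network(input_string):
    """Map a 2-digit binary string to a 2-digit binary string: (OR, AND) of the input digits."""
    x = [int(c) for c in input_string]
    or_unit = threshold_unit(x, weights=[1, 1], threshold=1)
    and_unit = threshold_unit(x, weights=[1, 1], threshold=2)
    return '{}{}'.format(or_unit, and_unit)

for s in ['00', '01', '10', '11']:
    print(s, '->', tiny_network(s))   # 00->00, 01->10, 10->10, 11->11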

            With the present analysis of computational explanation in hand, we are ready to discuss whether and how computational explanation is relevant to the explanation of mental capacities in terms of computations realized in the brain.  Since we are interested in explaining mental capacities, we will primarily focus not on genetic or molecular neuroscience but on the levels that are most relevant to explaining mental capacities, that is, cellular and systems neuroscience.

 

Mechanistic Explanation in Neuroscience

Analogously to mechanistic explanation in other fields, mechanistic explanation in neuroscience is about how different components of the brain are organized together so as to exhibit the activities of the whole.  But mechanistic explanation in neuroscience is also different from mechanistic explanation in most other domains, due to the peculiar functions performed by the brain.

The functions of the brain (and more generally of the nervous system) may be approximately described as the feedback control of the organism and its parts.[6]  In other words, the brain is in charge of bringing about a wide range of activities performed by the organism, on the basis of its own internal states and of information it receives from the rest of the organism and from the environment.  The activities in question include, of course, not only interactions between the whole organism and the environment, as in walking, feeding, or sleeping, but also a wider range of activities, from breathing to digestion to the release of hormones into the bloodstream.

            Different functions come with different mechanistic explanations, and feedback control is no exception.  Given that the brain has this peculiar capacity, its mechanistic explanation requires the appeal to internal states that correlate with the rest of the body, the environment, and one another in appropriate ways.  In order for the brain to control the organism, the internal states must correlate with the activities of the organism in appropriate ways and must be connected with effectors (muscles and glands) in appropriate ways.  In order for the control to be based on feedback, the brain’s internal states must correlate with bodily and environmental variables in appropriate ways; in other words, there must be internal “representations” (this is the standard sense of the term in neuroscience).  In order for the control to take other internal states into account, different parts of the system must affect one another in appropriate ways.  Most importantly for present purposes, in order for the mutual influence between different parts of the system to be general and flexible, brains contain a medium-independent vehicle for transmitting signals.  By medium-independent vehicle, I mean a variable that can vary independently of the specific physical properties of any of the variables that must correlate with it (such as light waves, sound waves, muscle contractions, etc.). 
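
The structure just described can be illustrated with a deliberately simple sketch (my own; nothing about real neural control hangs on its details):  an internal state that correlates with an environmental variable drives an "effector" so as to bring that variable toward a target value.

# A deliberately simple feedback-control loop: an internal state correlates with
# an environmental variable (via a 'sensor') and drives an 'effector' so as to
# bring that variable toward a set point. This only illustrates the structure of
# the explanation sketched in the text, not how brains actually do it.

def control_loop(env_value, set_point, gain, steps):
    history = []
    for _ in range(steps):
        internal_state = env_value            # internal state correlating with the environment
        error = set_point - internal_state    # comparison with the target condition
        effector_output = gain * error        # command sent to the effector
        env_value += effector_output          # the effector changes the controlled variable
        history.append(round(env_value, 2))
    return history

print(control_loop(env_value=0.0, set_point=10.0, gain=0.5, steps=6))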

The medium-independent vehicles employed by brains are, of course, all-or-none events, known as action potentials or spikes, which are generated by neurons.  Spikes are organized in sequences called trains, whose properties vary from neuron to neuron and from condition to condition.  Spike trains from one neuron are often insufficient to produce a functionally relevant effect.  At least in large nervous systems, such as human nervous systems, spike trains from populations of several dozens of neurons are thought to be the minimal processing units (Shadlen and Newsome 1998).  Mechanistic explanation in neuroscience, at or above the levels that interest us here, consists in specifying how appropriately organized trains of spikes from different assemblies of neurons constitute the capacities of neural mechanisms, and how appropriately organized capacities of neural mechanisms constitute the brain’s capacities—including its mental capacities.

Undoubtedly, the term “computational explanation” may be used simply to refer to mechanistic explanations of brains’ capacities.  Perhaps this is even a good explication of the way some psychologists and neuroscientists employ this term today.  Given this usage, computational explanation in neuroscience need not be analogous to computational explanation in computer science, and it need not suggest that neural mechanisms perform computations in the sense in which computers and calculators do.

Nevertheless, the notion of computation was originally imported from computability theory into neuroscience, and from neuroscience into psychology and AI, precisely in order to state the strong hypothesis that brains perform computations in the sense in which computers and calculators do.  Furthermore, it is still common to see references in neuroscience books to literature and results from computability theory as if they were relevant to neural computation (e.g., Churchland and Sejnowski 1992, Koch 1999).  In light of this, it is appropriate to complete this discussion by asking whether neural mechanisms perform computations in the sense employed in computer science.  Given that we now have a characterization of both computational explanation in computer science and mechanistic explanation in neuroscience, we are in a position to reformulate the question about computational explanation of mind in a more explicit and precise way.  Are spike trains strings of digits, and is the neural generation and processing of spike trains computation?[7]

 

Symbols, Strings, and Spikes

We have seen that within current neuroscience, the variables that are employed to explain mental capacities are spike trains generated by neuronal populations.  The question of computational explanation in neuroscience should thus be reformulated in terms of the properties of spike trains.  Indeed, if we reformulate the original CTM in terms of modern terminology, CTM was initially proposed as the hypothesis that spike trains are strings of digits (McCulloch and Pitts 1943).  This hypothesis can now be tested by looking at what neuroscientists have empirically discovered by studying spike trains.

            A close look at current neuroscience reveals that within biologically plausible models, spike trains are far from being described as strings.  I believe it is possible to identify principled reasons for why this is so, and I have attempted to do so elsewhere (Piccinini unpublished).  Here, I only have space to briefly summarize the difficulties that I see in treating spike trains as strings.

Since strings are made of atomic digits, it must be possible to decompose spike trains into atomic digits, namely events that have unambiguous functional significance within the mechanism during a functionally relevant time interval.  But the current mathematical theories of spike generation leave no room for doing this (cf. Dayan and Abbott 2001).  The best candidates for atomic digits, namely the presence and absence of a spike, have no determinate functional significance on their own; they only acquire functional significance within a spike train by contributing to an average firing rate.  Moreover, spikes and their absence, unlike atomic digits, are not events that occur within well-defined time intervals of functionally significant duration.

Even if the presence or absence of individual spikes (or any other component of spike trains) could be usefully treated as an atomic digit, however, there would remain difficulties in concatenating them into strings.  In order to do so, it must be possible to determine unambiguously, at the very least, which digits belong to a string and which do not.  But again, within our current mathematical theories of spike trains, there is no non-arbitrary way to assign individual spikes (or groups thereof) to one string rather than another.
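
The worry can be illustrated with a toy calculation (my own, using made-up spike times rather than real data):  candidate ways of carving a spike train into digits and strings depend on an arbitrary choice of time bin, whereas the train's average firing rate requires no such segmentation.

# A toy illustration of the segmentation problem (made-up spike times, not real
# data): carving a spike train into 'digits' requires choosing a time bin, and
# different but equally arbitrary bin widths yield different candidate 'strings',
# whereas the mean firing rate needs no such segmentation.

spike_times_ms = [13, 21, 34, 58, 62, 90]   # hypothetical spike times, in milliseconds
duration_ms = 100                           # length of the recording window

def binarize(spikes, duration, bin_width):
    """One 'digit' per bin: '1' if at least one spike falls in the bin, else '0'."""
    n_bins = duration // bin_width
    return ''.join(
        '1' if any(i * bin_width <= t < (i + 1) * bin_width for t in spikes) else '0'
        for i in range(n_bins)
    )

def mean_rate(spikes, duration):
    """Average firing rate in spikes per second."""
    return 1000.0 * len(spikes) / duration

print(binarize(spike_times_ms, duration_ms, bin_width=10))   # one candidate 'string'
print(binarize(spike_times_ms, duration_ms, bin_width=20))   # a different candidate 'string'
print(mean_rate(spike_times_ms, duration_ms), 'spikes per second')

The two calls to binarize produce different strings from the very same train, and nothing in the train itself privileges one bin width over the other; by contrast, the rate estimate does not depend on any segmentation at all.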

            As a matter of current practice, mechanistic explanation in neuroscience does not appeal to the manipulation of strings of digits by the nervous system.  As I briefly indicated, I believe there are principled reasons why this is so.  Therefore, since computational explanation (in the sense employed in computer science) requires the appeal to strings of digits, explanation in neuroscience is not computational.

            It is important to stress that none of this eliminates the important peculiarities that mechanistic explanations in neuroscience possess relative to other explanations.  For example, they are aimed at explaining the specific function of controlling organisms on the basis of feedback, and as a consequence, they appeal to medium-independent vehicles and they rely heavily on correlations between internal states of the organism and environmental states (“neural representations”).

 

Mechanistic Explanation of Mind

I have argued that once the notion of computational explanation that is relevant to CTM is in place, we have reasons to conclude that current neuroscientific explanations—including, of course, neuroscientific explanations of mental capacities—are not computational.  As far as we can now tell, based on our best science of neural mechanisms, mental capacities are explained by the processing of spike trains by neuronal populations, and the processing of spike trains is interestingly different from the processes studied by computer scientists and engineers.  In a loose sense, many people—including many neuroscientists—say and will continue to say that mental capacities are explained by neural computations.  This may be interpreted to mean simply that mental capacities are explained by the peculiar processes present in the brain.  But strictly speaking, current neuroscience does not mesh with CTM—in either its classical or its connectionist incarnations.

Some philosophers may be tempted to reply that CTM is not threatened by the features of neuroscientific explanation, because CTM is “autonomous” from neuroscience (cf. Fodor 1997).  According to this line, CTM is a theory at a higher, more “abstract” level than the neural level(s); it is a psychological theory, not a neuroscientific one; it is unconstrained by the properties of the neural realizers.  Anyone who takes this autonomy line is liable to be criticized for expounding a reactionary theory—reactionary because immune to empirical revision (Churchland 1981).  I also believe that the autonomy reply presupposes an inadequate view of the relationship between psychology and neuroscience, but I do not have the space to argue this point here.  Finally, the autonomy reply goes against the spirit in which CTM was proposed, for CTM was originally proposed as a theory of how mental capacities are explained by neural mechanisms (McCulloch and Pitts 1943, Wiener 1948, von Neumann 1958).  This appeal to authority, of course, is unlikely to persuade those who are sympathetic to the autonomy reply.

For present purposes, the following counter-argument will have to suffice.  No matter how abstract or autonomous from neuroscience we construe CTM to be, CTM postulates computations that are realized in some physical substratum or another.  In ordinary biological organisms, the relevant physical substratum is the brain.  In general, not everything can realize everything else.  In other words, for any property A and putative realizers B1, …, Bn, any Bi realizes A if and only if having Bi constitutes having A (cf. Pereboom and Kornblith 1991).  For instance, something is a (well functioning) heater if and only if it possesses a property that constitutes the generation of heat—i.e., roughly speaking, a property that increases the average molecular kinetic energy of its surroundings.  By the same token, for something—such as a neural process—to realize a computation, there must be a level of mechanistic description at which neural activity constitutes the processing of strings of digits in accordance with appropriate rules.  This is the point behind the quote that opens this paper:  “[A] psychological theory that attributes to an organism a state or process that the organism has no physiological mechanisms capable of realizing is ipso facto incorrect” (Fodor 1968, p. 110).  As I have hinted, the empirical evidence is that neural mechanisms are incapable of realizing computations.[8]  Therefore, we should reject CTM.

At this point, some readers may feel at a loss.  Those of us who have been trained in the cognitive science tradition are so used to thinking of explanations of mental (or at least cognitive) capacities as computational that we might worry that without CTM, we lack any mechanistic explanation of mental capacities.  This worry is misplaced.  My argument is not based on some a priori objection to CTM, which would leave us without any alternative to it.  It is based on the existence of a sophisticated science of neural mechanisms, which did not exist when CTM was originally proposed but has developed since then.  A goal of this science is the mechanistic explanation of mental capacities.  If we agree that the realizers of mental states and processes are neural, neuroscience is where we ought to look for mechanisms in terms of which to formulate our theories of mind.  This point is neutral among reductionism, anti-reductionism, and eliminativism about mental properties.  It is a straightforward consequence of the fact that if mental capacities have mechanistic explanations at all, the mechanisms that explain them are realized in the brain.  And while neural mechanisms may not be computational in the sense of computer science, enough is known about them to support a rich mechanistic theory of mind.

 

Acknowledgements

Thanks to José Bermudez, Diego Marconi, and especially Carl Craver for helpful comments on previous versions of this paper.

 

References

Adrian, E. D. (1928). The Basis of Sensation: The Action of the Sense Organs. New York, Norton.

Baum, E. B. (2004). What is Thought? Cambridge, MA, MIT Press.

Bechtel, W. and R. C. Richardson (1993). Discovering Complexity:  Decomposition and Localization as Scientific Research Strategies. Princeton, Princeton University Press.

Block, N. (1978). Troubles with Functionalism. Perception and Cognition:  Issues in the Foundations of Psychology. C. W. Savage. Minneapolis, University of Minnesota Press. 6: 261-325.

Bringsjord, S. (1995). "Computation, among Other Things, is Beneath Us." Minds and Machines 4: 469-488.

Burge, T. (1986). "Individualism and Psychology." Philosophical Review 95: 3-45.

Churchland, P. M. (1981). "Eliminative Materialism and the Propositional Attitudes." Journal of Philosophy 78(2): 67-90.

Churchland, P. S. and T. J. Sejnowski (1992). The Computational Brain. Cambridge, MA, MIT Press.

Craver, C. (2001). "Role Functions, Mechanisms, and Hierarchy." Philosophy of Science 68(1): 53-74.

Copeland, B. J. (2000). "Narrow Versus Wide Mechanism: Including a Re-Examination of Turing's Views on the Mind-Machine Issue." The Journal of Philosophy XCVI(1): 5-32.

Cummins, R. (1983). The Nature of Psychological Explanation. Cambridge, MA, MIT Press.

Cummins, R. (2000). "'How does it work?' vs. 'What are the laws?': Two Conceptions of Psychological Explanation." Explanation and Cognition. F. C. Keil and R. A. Wilson, Eds. Cambridge, MA, MIT Press.

Dayan, P. and L. F. Abbott (2001). Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Cambridge, MA, MIT Press.

Dennett, D. C. (1978). Brainstorms. Cambridge, MA, MIT Press.

Dreyfus, H. L. (1998). "Response to My Critics." The Digital Phoenix: How Computers are Changing Philosophy. T. W. Bynum and J. H. Moor, Eds. Oxford, Blackwell: 193-212.

Edelman, G. M. (1992). Bright Air, Brilliant Fire: On the Matter of the Mind. New York, Basic Books.

Egan, F. (1995). "Computation and Content." Philosophical Review 104: 181-203.

Eliasmith, C. (2003). “Moving Beyond Metaphors: Understanding the Mind for What It Is.” Journal of Philosophy C(10): 493-520.

Erneling, C. E. and D. M. Johnson (2005). The Mind as a Scientific Object: Between Brain and Culture. Oxford, Oxford University Press.

Fetzer, J. H. (2001). Computers and Cognition: Why Minds are Not Machines. Dordrecht, Kluwer.

Fodor, J. A. (1968). Psychological Explanation. New York, Random House.

Fodor, J. A. (1975). The Language of Thought. New York, Crowell.

Fodor, J. A. (1997). "Special Sciences: Still Autonomous after All These Years." Philosophical Perspectives 11. Atascadero, CA, Ridgeview.

Fodor, J. A. (1998). Concepts. Oxford, Clarendon Press.

Garson, J. (2003). "The Introduction of Information into Neurobiology." Philosophy of Science 70: 926-936.

Gerard, R. W. (1951). "Some of the Problems Concerning Digital Notions in the Central Nervous System." Cybernetics:  Circular Causal and Feedback Mechanisms in Biological and Social Systems.  Transactions of the Seventh Conference. H. v. Foerster, M. Mead and H. L. Teuber, Eds. New York, Macy Foundation: 11-57.

Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Boston, Houghton Mifflin.

Glennan, S. S. (2002). "Rethinking Mechanistic Explanation." Philosophy of Science 69(S3): S342-S353.

Globus, G. G. (1992). "Towards a Noncomputational Cognitive Neuroscience." Journal of Cognitive Neuroscience 4(4): 299-310.

Grush, R. (2003). “In Defense of some ‘Cartesian’ Assumptions Concerning the Brain and Its Operation.” Biology and Philosophy 18: 53-93.

Harnad, S. (1996). "Computation Is Just Interpretable Symbol Manipulation; Cognition Isn't." Minds and Machines 4: 379-390.

Haugeland, J. (1997). "What is Mind Design?" Mind Design II. J. Haugeland, Ed. Cambridge, MA, MIT Press: 1-28.

Hopfield, J. J. (1982). "Neural Networks and Physical Systems with Emergent Collective Computational Abilities." Proceedings of the National Academy of Sciences 79: 2554-2558.

Horst, S. W. (1996). Symbols, Computation, and Intentionality: A Critique of the Computational Theory of Mind. Berkeley, University of California Press.

Johnson, D. M. and C. E. Erneling, Eds. (1997). The Future of the Cognitive Revolution. New York, Oxford University Press.

Koch, C. (1999). Biophysics of Computation: Information Processing in Single Neurons. New York, Oxford University Press.

Lucas, J. R. (1996). "Minds, Machines, and Gödel: A Retrospect." Machines and Thought: The Legacy of Alan Turing. P. J. R. Millican and A. Clark, Eds. Oxford, Clarendon.

Machamer, P. K., L. Darden, and C. F. Craver (2000). "Thinking About Mechanisms." Philosophy of Science 67: 1-25.

Marr, D. (1982). Vision. New York, Freeman.

Maudlin, T. (1989). "Computation and Consciousness." Journal of Philosophy 86(8): 407-432.

McCulloch, W. S. and W. H. Pitts (1943). "A Logical Calculus of the Ideas Immanent in Nervous Activity." Bulletin of Mathematical Biophysics 5: 115-133.

Mellor, D. H. (1989). "How Much of the Mind is a Computer?" Computers, Brains and Minds. P. Slezak and W. R. Albury, Eds. Dordrecht, Kluwer: 47-69.

Minsky, M. and S. Papert (1969). Perceptrons. Cambridge, MA, MIT Press.

Newell, A. and H. A. Simon (1976). "Computer Science as an Empirical Enquiry: Symbols and Search." Communications of the ACM 19: 113-126.

Ó Nualláin, S., P. Mc Kevitt, et al., Eds. (1997). Two Sciences of Mind: Readings in Cognitive Science and Consciousness. Philadelphia, John Benjamins.

Searle, J. R. (1992). The Rediscovery of the Mind. Cambridge, MA, MIT Press.

Segal, G. (1991). "Defence of a Reasonable Individualism." Mind 100: 485-493.

Penrose, R. (1994). Shadows of the Mind. Oxford, Oxford University Press.

Pereboom, D. and H. Kornblith (1991). "The Metaphysics of Irreducibility." Philosophical Studies 63.

Perkel, D. H. (1990). Computational Neuroscience:  Scope and Structure. Computational Neuroscience. E. L. Schwartz. Cambridge, MA, MIT Press: 38-45.

Putnam, H. (1967). Psychological Predicates. Art, Mind, and Religion. W. H. Capitan and D. D. Merrill, Eds. Pittsburgh, PA, University of Pittsburgh Press.

Putnam, H. (1988). Representation and Reality. Cambridge, MA, MIT Press.

Siegelmann, H. T. (1999). Neural Networks and Analog Computation: Beyond the Turing Limit. Boston, MA, Birkhäuser.

Piccinini, G. (2003). Computations and Computers in the Sciences of Mind and Brain. Ph.D. Dissertation.  Pittsburgh, PA, University of Pittsburgh.  URL = <http://etd.library.pitt.edu/ETD/available/etd-08132003-155121/>

Piccinini, G. (2004a). "The First Computational Theory of Mind and Brain: A Close Look at McCulloch and Pitts's 'Logical Calculus of Ideas Immanent in Nervous Activity'." Synthese 141(2): 175-215.

Piccinini, G. (2004b). "Functionalism, Computationalism, and Mental Contents." Canadian Journal of Philosophy 34(3): 375-410.

Piccinini, G. (2004c). "Functionalism, Computationalism, and Mental States." Studies in the History and Philosophy of Science 35(4): 811-833.

Piccinini, G. (2004d). "Computational Modeling vs. Computational Explanation: Is Everything a Turing Machine, and Does It Matter to the Philosophy of Mind?" URL = <http://philsci-archive.pitt.edu/archive/00002017/>

Piccinini, G. (2004e). “Computers.”  URL = <http://philsci-archive.pitt.edu/archive/00002016/>

Piccinini, G. (unpublished). “Symbols, Strings, and Spikes.”

Port, R. F. and T. van Gelder, Eds. (1995). Mind and Motion: Explorations in the Dynamics of Cognition. Cambridge, MA, MIT Press.

Pylyshyn, Z. W. (1984). Computation and Cognition. Cambridge, MA, MIT Press.

Rubel, L. A. (1985). "The Brain as an Analog Computer." Journal of Theoretical Neurobiology 4: 73-81.

Rumelhart, D. E. and J. M. McClelland (1986). Parallel Distributed Processing. Cambridge, MA, MIT Press.

Searle, J. R. (1983). Intentionality: An Essay in the Philosophy of Mind. Cambridge, Cambridge University Press.

Shadlen, M. N. and W. T. Newsome (1998). "The Variable Discharge of Cortical Neurons: Implications for Connectivity, Computation, and Information Coding." Journal of Neuroscience 18(10): 3870-3896.

Shagrir, O. (2001). "Content, Computation and Externalism." Mind 110(438): 369-400.

Taube, M. (1961). Computers and Common Sense: The Myth of Thinking Machines. New York, Columbia University Press.

Thelen, E. and L. Smith (1994). A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, MA, MIT Press.

Turing, A. M. (1936-7 [1965]). On Computable Numbers, with an Application to the Entscheidungsproblem. The Undecidable. M. Davis, Ed. Hewlett, NY, Raven.

van Gelder, T. (1995). "What Might Cognition Be, if not Computation?" The Journal of Philosophy XCII(7): 345-381.

von Neumann, J. (1958). The Computer and the Brain. New Haven, Yale University Press.

Varela, F. J., E. Thompson, and E. Rosch (1991). The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA, MIT Press.

Wiener, N. (1948). Cybernetics or Control and Communication in the Animal and the Machine. Cambridge, MA, MIT Press.

Wilson, R. A. (2004). Boundaries of the Mind: The Individual in the Fragile Sciences. Cambridge, Cambridge University Press.

Wolfram, S. (2002). A New Kind of Science. Champaign, IL, Wolfram Media.

Wright, C. (1995). "Intuitionists Are Not (Turing) Machines." Philosophia Mathematica 3(3): 86-102.



[1] Some philosophers have argued that the realizers of mental states and processes include not only the brain but also things outside it (e.g., Wilson 2004).  I will ignore this nuance, because this simplifies the exposition without affecting the points at issue in this paper in any fundamental way.

[2] I am using “concept” in a pre-theoretical sense.  Of course, there may be ways of individuating concepts independently of their content, ways that may be accessible to those who possess a scientific theory of concepts but not to ordinary speakers.  Furthermore, according to Diego Marconi, it may not be strictly correct to say that my concept smoke represents smoke, because my concept is not tokened in the presence of all and only instances of smoke.  This makes no difference to the present point:  what matters here is that concepts, at least pre-theoretically, are individuated by what they represent—whatever that may be.

[3] The distinction between essential and accidental representation should not be confused with the distinctions between original, intrinsic, and derived intentionality.  Derived intentionality is intentionality conferred on something by something that already has it; original intentionality is intentionality that is not derived (Haugeland 1997).  Something may have original intentionality without being an essential representation, because it may not have its content essentially.  Intrinsic intentionality is the intentionality of entities that are intentional regardless of their relations with anything else (Searle 1983).  Something may be an essential representation without having intrinsic intentionality, because its intentionality may be due to the relations it bears to other things.

[4] To be a bit more precise, for each computing mechanism, there is a finite alphabet out of which strings of digits can be formed and a fixed rule that specifies, for any input string on that alphabet (and for any internal state, if relevant), whether there is an output string defined for that input (internal state), and which output string that is.  If the rule defines no output for some inputs (internal states), the mechanism should produce no output for those inputs (internal states).  For more details, see Piccinini 2003.

[5] Pace suggestions to the contrary that are found in the literature (e.g., Churchland and Sejnowski 1992).  The main disanalogy is that while the vehicles of analog computations are continuous variables, the main vehicles of neural processes are spike trains, which are sequences of all-or-none events.  For an analysis of analog computers and their relation to digital computers, see Piccinini 2004e.

[6] The exact level of sophistication of this feedback control is irrelevant here.  See Grush 2003 for discussion of some options.

[7] Notice that the peculiarities of mechanistic explanations in neuroscience are not only logically independent of whether they constitute computational explanations in the sense of computer science; historically, their formulation preceded the invention of modern computability theory and computer design.  For example, the self-conscious use of the notion of information in neuroscience dates back to at least the 1920s (Adrian 1928, cited by Garson 2003, to which the present section is indebted).

[8] This way of putting the point may be a bit misleading.  The evidence is that computation is not what neural mechanisms do in general.  From this, it doesn’t follow that no neural mechanism can perform computations.  Human beings are certainly capable of performing computations, and this is presumably explained by their neural mechanisms.  So, human brains, at least, must possess a level of organization at which some specific neural activity does realize the performance of computations.