Gualtiero Piccinini’s Research

NB: Comments on my work are always welcome

Updated: February 2011

My research covers several areas, including computational theories of mind, cognitive neuroscience, intentionality, and consciousness.  The goal is to understand the mind, so all the pieces are connected in various ways.  My work can be roughly organized under the following categories.

·         Background of Computationalism

·         The Mechanistic Account of Computation

·         Reformulating Computationalism

·         From Cognitive Science to Cognitive Neuroscience

·         Intentionality

·         Consciousness and Introspection

Background of Computationalism

I have devoted considerable energy to understanding the original motivation of computational theories of mind.  The motivation is to find a mechanistic explanation of mental capacities.  It is important that we identify the relevant notion of mechanistic explanation.  I think the notion that serves our purposes is roughly the one worked out by Carl Craver and other philosophers of science.  But in order to show that mechanistic explanation in the relevant sense is what computationalists are after, we also need to dispel a number of confusions that have permeated the literature. 

In Alan Turing and the Mathematical Objection, I reconstruct Turing’s (usually misunderstood) reply to the mathematical objection.  (Jack Copeland invited me to present this paper at a workshop on hypercomputation in London; the proceedings were published in Minds and Machines, 2003.)  According to the mathematical objection, machines cannot think because they can only follow a finite set of rules, whereas minds can invent new rules.  This objection was initially raised by Kurt Gödel and was later made famous by John Lucas and Roger Penrose.  I argue that Turing’s views on intelligence were subtler than hitherto recognized.  He did not believe that a Turing machine could reproduce the human mind, but he did believe that a digital computer might, because a digital computer might have the ability to change its programs in ways that correspond to the invention of new rules by human beings.  Although previously unknown or unappreciated, Turing’s reply to the mathematical objection is the most effective that I know of. 
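The idea behind Turing’s reply can be illustrated with a toy sketch (my own illustration, not anything from Turing’s writings): in a stored-program machine, the rules are ordinary data, so the machine can install new rules at run time rather than being confined to a fixed rule set.

```python
# Toy illustration of a stored-program machine that modifies its own rules.
# The rule table is ordinary data, so the machine can extend it while running,
# loosely analogous to Turing's point that a digital computer can change its
# own program, unlike a machine with a fixed, unchangeable rule set.

def run(tape):
    # Each rule maps an input symbol to an action on (rules, output).
    rules = {
        "a": lambda rules, out: out.append("A"),
        # The "learn" action installs a brand-new rule for symbol "b".
        "learn": lambda rules, out: rules.update(
            {"b": lambda r, o: o.append("B")}
        ),
    }
    out = []
    for symbol in tape:
        if symbol in rules:
            rules[symbol](rules, out)
        else:
            out.append("?")  # no rule for this symbol yet
    return out

# Before "learn" runs, the machine has no rule for "b"; afterwards it does.
print(run(["a", "b", "learn", "b"]))  # -> ['A', '?', 'B']
```

The point of the sketch is only that the rule set consulted at the end of the run differs from the one the machine started with, and the change was effected by the machine itself.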

In Allen Newell (in the New Dictionary of Scientific Biography, Thomson Gale, 2007), I briefly reconstruct the main ideas and contributions of one of the greatest and most sophisticated champions of the computational theory of mind.

In The First Computational Theory of Mind and Brain: A Close Look at McCulloch and Pitts’s ‘Logical Calculus of Ideas Immanent in Nervous Activity’ (in Synthese, 2004), I argue that contrary to popular belief, the first computational theory of mind and brain is due not to Turing but to McCulloch and Pitts.  I investigate the enormous impact their theory had, both positively, in offering a mechanistic explanation of mental capacities, and negatively, in introducing the mistaken idea that appealing to computation automatically explains the intentionality of mental states.  The mixture of computationalism and semantic language that one finds in McCulloch and Pitts carried over into the philosophical literature, where most authors believe computational states are individuated by their semantic properties.  I call this the semantic view of computation.

I look at the sources of the semantic view of computation in Functionalism, Computationalism, and Mental Contents (in Canadian Journal of Philosophy, 2004), where I argue that the semantic view of computation is untenable and question-begging, and should be replaced by a non-semantic view of computation.

But a non-semantic view of computation is workable only if we don’t conflate functional/mechanistic descriptions and computational ones, as is often done in the literature.  In Functionalism, Computationalism, and Mental States (in Studies in the History and Philosophy of Science, 2004), I identify the origin of this conflation and attempt to eradicate it.  One of my conclusions is that computational descriptions are only one kind of functional/mechanistic description among others.  In this paper, I also discuss and reject the popular view that there is a useful sense in which everything performs computations. 

The Mechanistic Account of Computation

I have developed a novel account of computation, the mechanistic account, which integrates conceptual resources from computability theory, computer design, and philosophy of science.  According to the mechanistic account, computing systems are mechanisms that have the function of manipulating medium-independent vehicles in accordance with a general rule that applies to all vehicles and depends on the inputs for its application.  A medium-independent vehicle is a vehicle defined simply in terms of differences between different portions of the vehicle along a relevant dimension, and not in terms of any of its more specific physical properties.  Thus, medium-independent vehicles can be implemented in different physical media.  The vehicles of digital computations are digits, which are one kind of medium-independent vehicle, but there are other kinds of medium-independent vehicles (e.g., continuous variables), which give rise to other kinds of computation.  A key term in the mechanistic account is “function”:  computing mechanisms are individuated by their functional (non-semantic) properties, and functional properties are specified by mechanistic explanations.
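Medium independence can be illustrated with a toy sketch (my own illustration, not drawn from the papers): the same rule, defined over abstract digit types, is implemented in two different hypothetical “media” simply by mapping each medium’s states onto the abstract digits.

```python
# Toy illustration of medium independence: the rule (binary XOR) is defined
# over abstract digit types 0 and 1, not over any particular physical medium.
# Two different "media" -- voltage levels and ink marks -- implement the same
# computation by mapping their states onto the abstract digits and back.

def xor_rule(d1, d2):
    # The rule is sensitive only to differences between digit types.
    return (d1 + d2) % 2

def compute_in_medium(x, y, encode, decode):
    # Map medium-specific states to digits, apply the rule, map back.
    return decode(xor_rule(encode(x), encode(y)))

# Medium 1: voltage levels (low/high).
volts = {"low": 0, "high": 1}
volts_inv = {0: "low", 1: "high"}

# Medium 2: ink marks (blank/mark).
ink = {"blank": 0, "mark": 1}
ink_inv = {0: "blank", 1: "mark"}

print(compute_in_medium("high", "low", volts.get, volts_inv.get))  # -> high
print(compute_in_medium("mark", "mark", ink.get, ink_inv.get))     # -> blank
```

Nothing about the rule itself mentions volts or ink; only the encoding and decoding steps do, which is the sense in which the vehicles are medium-independent.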

NB: In the following four papers, I used the term “computation” for what I now call digital computation.  A more up-to-date, general, and adequate account of computation is in Section 3 of “Information Processing, Computation, and Cognition” (written with Andrea Scarantino and published in 2011 in Journal of Biological Physics).

In Computation without Representation (in Philosophical Studies, 2008), I formulate the mechanistic account with respect to digital computational states, defend it on the grounds of computer science’s explanatory practices, and criticize existing arguments for the semantic account of computation.

In Computing Mechanisms (in Philosophy of Science, 2007), I formulate the mechanistic account more generally in terms of digital computing mechanisms and their mechanistic explanation, and argue that it is superior to other accounts of digital computing mechanisms because it has a number of desirable features.

In Computers (in Pacific Philosophical Quarterly, 2008), I argue that contrary to what many philosophers have maintained, there is a principled distinction between digital computers properly so called and other digital computing mechanisms, and I use the mechanistic account to draw such a distinction in terms of their functional properties.  I also exhibit the fruitfulness of the mechanistic account by analyzing in some detail some of the theoretically important functional differences between computers, such as programmability vs. program control, special purpose vs. general purpose, parallel vs. serial, and analog vs. digital.  There are quite a few surprises for some orthodox views here.

In Some Neural Networks Compute, Others Don’t (in Neural Networks, 2008), I look more closely at connectionist systems and the sense in which they perform (digital) computations.  Connectionist systems are often invoked as the most plausible model of neural and cognitive mechanisms.  Yet there is considerable confusion as to whether connectionism is a computational framework.  I show how the mechanistic account naturally accommodates paradigmatic connectionist systems and sheds light on the way in which they perform (digital) computations.  I also draw a distinction between connectionist systems that perform (digital) computations in a classical way, connectionist systems that perform (digital) computations in a non-classical way, and connectionist systems that don’t perform (digital) computations at all.  I believe brains fall into the last class.

In The Physical Church-Turing Thesis: Modest or Bold? (forthcoming in British Journal for the Philosophy of Science), I try to shed light on the Physical Church-Turing thesis, not to be confused with the original thesis put forward by Church and Turing.  I argue that existing formulations of the Physical Church-Turing thesis are too strong and too little concerned with practical usefulness to be relevant to the notion of computability that motivates the (original) Church-Turing thesis and the discipline of computer science.  Such formulations should be replaced by a more modest but more pertinent formulation, according to which anything that can be physically computed (in a sense specified in terms of usability) can be computed by some Turing machine.  Besides being more pertinent to computability, this is a more plausible thesis than traditional formulations, although its truth value remains unknown.

Reformulating Computationalism

With the mechanistic account in hand, we can go back to computationalism and begin to reformulate and resolve some old disputes.  A first step is to appreciate that computational theories of mind and brain have great elegance and explanatory power.  According to the mechanistic account, computational theories are bona fide mechanistic theories, explaining the capacities of a system in terms of its components and their functional organization.  Different versions of the computational theory differ in the kinds of mechanisms they postulate, and their explanatory power is a function of the capacities that can be exhibited by the mechanisms that they postulate.  In this respect, for instance, paradigmatic “connectionist” theories are less explanatorily powerful than paradigmatic “classical” theories.

In The Mind as the Software of the Brain? Revisiting Functionalism, Computationalism, and Computational Functionalism (in Philosophy and Phenomenological Research, 2010), I clarify the relationship between functionalism, computationalism, and computational functionalism.  I give clear content to the idea that functionalism and computationalism are logically independent views.  Functionalism is the view that the mind is the functional organization of the brain; computationalism is the view that the functional organization of the brain is computational; and computational functionalism is the conjunction of the two.  I explicate all the relevant notions, including functional organization, in mechanistic terms.  Thus, this paper offers a novel formulation of functionalism, which I call mechanistic functionalism.

In Computationalism, the Church-Turing Thesis, and the Church-Turing Fallacy (in Synthese, 2007), I address the main arguments to the effect that the Church-Turing thesis entails computationalism.  Although these arguments turn out to be unsound, I do point out some important connections between computation, the Church-Turing thesis, and theories of mind and brain.

In Computational Modeling vs. Computational Explanation: Is Everything a Turing Machine, and Does It Matter to the Philosophy of Mind? (in Australasian J Phil, 2007) I discuss in detail pancomputationalism—the view that everything performs computations.  Believe it or not, pancomputationalism has been defended or at least endorsed in some form by a large number of distinguished philosophers (and scientists), including Hilary Putnam, David Chalmers, and John Searle.  This is largely because until now, no one has produced an adequate account of the difference between things that compute and things that don’t.  To remedy this, I offer an account of the distinction between computational explanation (explaining a phenomenon in terms of the computations performed by the mechanism that is responsible for the phenomenon) and computational modeling (describing a phenomenon using computations).  By using this distinction, I dismiss the view that everything performs computations as either trivial or false.

In two papers written with Andrea Scarantino, “Computation vs. Information Processing: How They Are Different and Why It Matters” (in Studies in Hist. and Phil. of Sci., 2010) and “Information Processing, Computation, and Cognition” (in J. Biol. Phys., 2011), we explore the relationships between information, computation, and cognition.  The first paper puts more emphasis on history, but otherwise the second paper is a revised, improved, and expanded version of the first, so I recommend reading the second one.  Bottom line:  computation is not the same as information processing, computing does not entail processing information, but processing information does entail computing (in the generic sense).  The notions of computation and information have different histories, are associated with different bodies of mathematical theory, and have different roles to play in a theory of cognition.  Furthermore, cognition requires both information processing (in several senses) and computation in the generic sense, although it remains to be seen which type of computation is involved in natural cognition.

In Computationalism in the Philosophy of Mind (in Philosophy Compass, 2009) and Computationalism (forthcoming in the Oxford Handbook of Philosophy and Cognitive Science), I review the recent literature on computationalism.  Among other things, I connect some of the dots between some of my previous papers.  The second paper is more complete, updated, and improved relative to the first, so I recommend the second one even though it’s a bit longer.  Notice that Section 6 of the paper posted here will be cut from the published version for reasons of space.

From Cognitive Science to Cognitive Neuroscience

The received view about psychological explanation is that it amounts to functional analysis, which is an explanatory style distinct and autonomous from mechanistic explanation.  In “Integrating Psychology and Neuroscience: Functional Analyses as Mechanism Sketches” (with Carl Craver, forthcoming in Synthese), we argue that, contrary to the received view, functional analyses are mechanism sketches (i.e., elliptical mechanistic explanations); therefore functional analyses are neither distinct nor autonomous from mechanistic explanations, and psychological explanations can be seamlessly integrated with neuroscientific explanations.

The conceptual relations between computational explanations of mind and neighboring topics, such as modeling the mind, explaining the mind mechanistically, and understanding the brain, are not always carefully and correctly laid out.  I clarify some of these relations in Computational Explanation in Neuroscience (in Synthese, 2006).  The same essay also introduces some relevant articles in the same issue of Synthese, which I guest-edited.  The articles are by Carl Craver, Frances Egan and Robert Matthews, Oron Shagrir, and Rick Grush.

In “The Resilience of Computationalism” (in Philosophy of Science, 2010), I review arguments against computationalism, with emphasis on arguments from differences between neural processes and computations (which are discussed neither in my review articles nor just about anywhere else in the philosophical literature).  I explain why those arguments don’t work as they stand and offer a promissory note on how they can be improved by employing the mechanistic account of computation.  (This paper is an expansion of a section of “Symbols, Strings, and Spikes”.)

The mechanistic account of computing mechanisms can be used to formulate computational theories precisely enough to make them testable against evidence from neuroscience.  When this is done, I believe there is no (nontrivial) digital computational theory that survives the empirical test.  I give a brief analysis of the notion of computational explanation in neuroscience in Computational Explanation and Mechanistic Explanation of Mind (in Cartographies of the Mind: The Interface between Philosophy and Cognitive Science, edited by de Caro et al., 2007).  A more complete discussion of this topic is in Neural Computation and the Computational Theory of Cognition, co-authored with neuroscientist Sonya Bahar.  The conclusion is that digital forms of computationalism do not fit the scientific evidence.  Since digital computational theories have been the mainstream over the last few decades, we need to consider the possibility that generally speaking, neural computation is sui generis and requires its own mathematical theory.

At this point, some philosophers will be tempted to reply that my attempt to use neuroscience to test computational theories is wrong-headed, because computational processes are at a different level than neural mechanisms.  According to this reply, computational theories are psychological descriptions, and psychology is autonomous from neuroscience.  I agree that psychology is autonomous from neuroscience in many respects, but it doesn’t follow that computational theories are not testable by neuroscience.  In fact, any nontrivial computational theory of cognition is committed to appropriate mechanisms that realize the computations.  If those mechanisms are not found in the brain, then computationalism is refuted, at least for biological organisms.  For more on how psychological explanations are just sketches of mechanisms, see “Integrating Psychology and Neuroscience: Functional Analyses as Mechanism Sketches” (with Carl Craver, forthcoming in Synthese).

Intentionality

I’m very interested in intentionality.  One of the limitations of many current theories of mental content is that they are conjoined with computational theories of mind, where computation is understood as in the semantic view of computation.  But as I’ve argued in some of the papers mentioned above, computation per se does relatively little to advance our understanding of intentionality, and the computational theory of mind needs to be rejected anyway.  I think freeing our thinking about mental content from the baggage of the computational theory of mind paves the way for making progress in our theory of content.  We can find useful resources in cognitive neuroscience.

As a preliminary step, I have written Splitting Concepts (co-authored with Sam Scott, in Philosophy of Science, 2006, followed by Edouard Machery’s response, How to Split Concepts), where we argue that the traditional notion of concept might need to be split into several different notions, each of which explains different phenomena.  (There is not really any cognitive neuroscience in this paper, though.)

A follow-up to the above paper is “Two Kinds of Concept: Implicit and Explicit,” forthcoming in Dialogue in a symposium on Edouard Machery’s book Doing without Concepts.  In this paper I revise, articulate, and defend the view (originally proposed in the paper just above) that concepts split into two kinds, which I now call implicit and explicit.

I have also written Recovering What Is Said with Empty Names (co-authored with Sam Scott, in Canadian Journal of Philosophy, 2010).  Empty names, such as ‘Santa Claus’, are names that lack a referent.  The paper offers robust evidence, based on semantic intuitions elicited under appropriate circumstances, to the effect that empty names have meaning.  Such evidence refutes the many semantic theories (such as Millianism) that deny meaning to empty names.

Consciousness and Introspection

I’m also interested in consciousness.  Here too, I think many philosophers have underestimated the potential for progress that would come from paying serious attention to neuroscience.  For now, most of what I’ve written in this area pertains to the methodology of using introspective reports in science and the related topic of whether scientific methods should be public.

In Epistemic Divergence and the Publicity of Scientific Methods (in Studies in Hist and Phil of Sci, 2003), I revisit the venerable methodological principle that scientific methods ought to be public by offering a plausible formulation of the principle and defending it against a recent attack by Alvin Goldman.  Goldman’s attack was based in part on the view that the use of introspective reports in science constitutes a private method. 

In Data from Introspective Reports: Upgrading from Commonsense to Science (in J Consciousness Studies, 2003), I articulate and defend a middle way between Daniel Dennett’s “heterophenomenology” about introspective reports (scientists should be neutral as to their truth value) and Alvin Goldman and David Chalmers’s “first-person science” (scientists should trust introspective reports).  My alternative is that scientists can and should validate introspective reports by publicly available evidence and assumptions, in ways that I spell out in my paper.  Curiously, both Goldman and Dennett now claim to agree with me.  Goldman cites my article in his “Epistemology and the Evidential Status of Introspective Reports: Trust, Warrant, and Evidential Sources,” Journal of Consciousness Studies, 2004, 11.7-8, pp. 1-16.  Goldman’s position in that article appears to be closer to mine than his previously published one, and he has told me in conversation that our views are now close.  Dennett cites my article in his “Heterophenomenology Reconsidered,” in Phenomenology and the Cognitive Sciences.  Dennett says that my paper is an “unwitting re-invention of heterophenomenology”.  This is at best a stretch.  I disagree with many of the things Dennett says about heterophenomenology.  In addition, unlike Dennett, I explicitly articulate the means by which introspective reports can be validated as a source of scientific evidence about mental states.  I have outlined the differences between my view and Dennett’s heterophenomenology in “How to Improve on Heterophenomenology: The Self-Measurement Methodology of First-Person Data” (in Journal of Consciousness Studies, 2010).

In First-Person Data, Publicity, and Self-Measurement, I revisit the topic of the above paper, with two main goals.  First, to respond more forcefully to the growing literature according to which first-person data are private yet legitimate, including literature by neo-introspectionists and neo-phenomenologists.  I argue that their view is both methodologically unacceptable and unjustified.  Second, to refine my justification for believing that first-person data are legitimate.  I argue that they should be seen as the outcome of a process of self-measurement, in which part of the subject who is the data’s source acts as a measurement instrument.  We can then apply to first-person data the same methodological and epistemological considerations that we apply to data from other measuring instruments.

I’ve also reviewed the nice book Describing Inner Experience? Proponent Meets Skeptic, by R. T. Hurlburt and E. Schwitzgebel, in Notre Dame Philosophical Reviews, 2008-04-25.  I discuss their book at greater length in “Scientific Methods Ought to Be Public, and Descriptive Experience Sampling Is One of Them” (forthcoming in Journal of Consciousness Studies in a symposium on Describing Inner Experience? Proponent Meets Skeptic, by R. T. Hurlburt and E. Schwitzgebel, followed by a response by the authors).

The Ontology of Creature Consciousness: A Challenge for Philosophy (forthcoming in Behavioral and Brain Sciences, 30.1) is a commentary on the target article “Consciousness without a Cerebral Cortex: A Challenge for Neuroscience and Medicine,” by Björn Merker.  In his article, Merker defends a radical theory to the effect that the brainstem can sustain phenomenal consciousness by itself (without a cerebral cortex).  I appeal to Merker’s theory (regardless of whether it’s correct) to motivate the hypothesis that, contrary to the common assumption among philosophers that creature consciousness has little to tell us about the ontology of phenomenal consciousness, creature consciousness is (at least partially) constitutive of phenomenal consciousness.  Thus, an adequate theory of consciousness should begin with an account of creature consciousness.