Functionalism and The Mechanical Mind

In this second article in the series, I will address the Functionalist approach to the mind/body problem and its relationship with the Strong AI stance.

Proponents of Strong AI assert that when we speak of mind we are speaking of nothing over and beyond the computational procedures carried out by the physical brain, procedures which result in certain behaviors. They insist that if we can create algorithms which effectively simulate human behavior, then we have simulated consciousness, intelligence, and awareness. This unwavering conviction is held to be justified because their approach to the mind/body problem is couched in Functionalist theories of the mind. Of all the positions on the mind/body problem, Functionalism and Strong AI are undoubtedly the most optimistic about the prospects of AI. However, a close analysis of Functionalism shows that this optimism is categorically unwarranted.

Functionalism is a philosophical theory of the mind that categorizes a mental state “not by its internal constitution,” but instead by “the way it functions, or the role it plays, in a system of which it is a part.” Mental states are not conceptualized as internal, subjective experiences of certain feelings, desires, or urges. Instead, the “identity of a mental state…(is)…determined by its causal relations to sensory stimulations.” For example, when asked what pain is, most of us would reply that it is an unpleasant sensation that has a particular quality. It is something that we subjectively and consciously experience. The Functionalist would reply that pain is “a state that tends to be caused by bodily injury” that causes wincing or moaning. Viewed from a Functionalist perspective, pain is “nothing over and beyond the responses produced by certain stimuli.” An external stimulus produces a certain response, and that is all there is to pain.

Functionalists conceptualize mental states in such a fashion because they subscribe to computational theories of mind. The mind can be regarded as a Turing Machine, “an idealized finite state digital computer.” When the machine is in a certain state, S1, and receives a certain input, I, the machine will proceed to state S2 and produce an output, O. And this is all that there is to the mind: pure, albeit complex, computation. The mind is nothing more than the physical components of the brain, which act as mediators between inputs and outputs. Mental states, whether emotions, feelings, desires, urges, or pains, are nothing over and beyond particular and distinct outputs. Some Functionalists, the Eliminative Materialists, deny the existence of internal experiences altogether, charging that when people speak of something like pain, they are guilty of nominalism. For these theorists, mental states are simply words without ontological referents.
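To make this picture concrete, the machine-table idea can be sketched as a small lookup from (state, input) pairs to (next state, output) pairs. The states and stimuli below are illustrative placeholders of my own choosing, not a formalism taken from any particular Functionalist; the point is only that, on this view, being in “pain” just is occupying a row in a table like this.

```python
# A toy machine table: (current state, input) -> (next state, output).
# All state and stimulus names are illustrative assumptions, not drawn
# from the Functionalist literature.
TRANSITIONS = {
    ("neutral", "tissue_damage"): ("pain", "wince"),
    ("pain", "analgesic"): ("neutral", "sigh_of_relief"),
    ("neutral", "joke"): ("amused", "laugh"),
}

def step(state, stimulus):
    """Advance the 'mind' one step: look up the (state, input) pair and
    return the next state and the behavioral output it produces."""
    return TRANSITIONS.get((state, stimulus), (state, None))

state, output = step("neutral", "tissue_damage")
print(state, output)  # -> pain wince
```

On the Functionalist reading, there is nothing more to say about pain than what such a table says: it is the state that tissue damage tends to produce and that in turn tends to produce wincing.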

Today, Functionalism dominates the philosophy of mind debate, partly because it aspires to be as scientific as possible in its methods, analysis, and descriptions.

But philosophy is not scientific. It never was and it never intended to be.

The purpose of philosophy is to analyze the logical coherency of concepts, to ask abstract questions, to attempt to provide answers to metaphysical, existential, and ethical questions. The purpose of philosophy is to ensure that the scientists, and everyone else for that matter, are making sense. And the amalgam, the farrago, of rigorous science and abstract philosophy is a poisonous concoction. Science and Philosophy are utterly incommensurable.

But highly charged rhetoric never convinced anyone. Well, obviously that statement is false. Let me try again. I do not wish to rely on a fervent harangue to win my point. I do not wish to be guilty of the fallacy of appealing to emotions. Admittedly, I may already be guilty of poisoning the well.

Hopefully, the reader will find the arguments below cogent.

One problem with Functionalism is what I will label the Diversity of Outputs problem. When an individual receives a certain input, a certain external stimulus, say one that signals bodily harm, that individual may or may not produce the same output as another individual. In other words, we all react to pain differently. There is also the related Flat Affect Output problem: an individual may produce no output at all in response to a certain external stimulus. In either case, it seems patently disingenuous to claim that an individual is or is not experiencing pain based on conformity to, or deviation from, the expected outputs.

A more serious problem, what I find to be the most damaging argument against Functionalism, is the Problem of Qualia. Functionalist theories “attempt to characterize mental states exclusively in relational, specifically causal terms.” But isn’t there an obvious and genuine feeling of what it is like to be in pain, an entirely subjective and conscious experience of pain? Many of us would answer in the affirmative. The myopic Functionalist fails to “capture the qualitative character, or ‘qualia,’ of experiential states such as perceptions, emotions, and bodily sensations.” There is something that I experience as pain and it is precisely this personal experience that is ignored, or even denied to exist, by the Functionalist.

The philosopher Ned Block proposed an interesting objection to Functionalism with his Chinese Nation Thought Experiment. Block “imagines that the population of China (chosen because its size approximates the number of neurons in a typical human brain) is recruited to duplicate an individual’s functional organization for a period of time, receiving the equivalent of sensory input from an artificial body and passing messages back and forth via satellite. Such a…system…would not have mental states with any qualitative character (other than the qualia possessed by the individuals themselves) and thus…states functionally equivalent to sensations or perceptions may lack their characteristic ‘feels.’” In other words, the entire population of China would act as neurons, arranging themselves in particular patterns and formations. When the population receives an input stimulus, they would rearrange themselves to mimic the pattern or formation that actual neurons would assume when receiving the same stimulus. But no one would conclude that this massive network of Chinese “neurons,” while matching the functional input and output patterns, actually experiences genuine sensations.
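The force of the example is that functional organization is substrate-neutral: one and the same input/output profile can be realized by a single brain or by a billion citizens passing messages. The sketch below is my own toy illustration of that point, not Block’s; it shows two very different realizations of the same function, indistinguishable from the outside, which is precisely why the Functionalist has nothing further to appeal to when asked why one of them feels pain and the other, intuitively, does not.

```python
# Two realizations of the same input/output function. The names and
# stimuli are illustrative assumptions of my own, not Block's.

def brain(stimulus):
    """One substrate: a single lookup, standing in for an ordinary brain."""
    return {"tissue_damage": "wince", "joke": "laugh"}.get(stimulus)

def nation(stimulus):
    """Another substrate: many 'citizens', each handling one message and
    passing it along until someone produces the output."""
    citizens = [
        lambda s: "wince" if s == "tissue_damage" else None,
        lambda s: "laugh" if s == "joke" else None,
    ]
    for citizen in citizens:
        response = citizen(stimulus)
        if response is not None:
            return response
    return None

# Functionally indistinguishable from the outside...
assert brain("tissue_damage") == nation("tissue_damage") == "wince"
# ...yet nothing about the second realization suggests that it feels anything.
```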

Functionalist theories are guilty of blatant reductionism. By asserting that pain, or any other sensation or experience, is reducible to or equivalent to the firing of neurons in the brain, they ignore and factor out an essential part of what it means to be human: namely, the experience of what it is like to be human.

The Functionalist conceptualization of the mind may not be comprehensively false, but it is woefully inadequate. Understanding the mind through computational models may yield insights, knowledge, and new understandings, but only in limited areas. Because a computational model of the mind excludes an invaluable and essential aspect of human nature, whatever answers it provides will always be unsatisfactory and profoundly lacking. And since the tenets of Strong AI are firmly couched in Functionalism, Strong AI is manifestly false for precisely the same reason.

Next Time:

Property Dualism and Weak AI

Work Cited

Levin, Janet, “Functionalism”, The Stanford Encyclopedia of Philosophy (Fall 2013 Edition), Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/fall2013/entries/functionalism.
