The Problem of Consciousness: Biological Naturalism and Weak AI

So far in this series I have addressed the philosophical stances of Cartesian Dualism and Functionalism and their respective relationships to The Platonic Stance and Strong AI, ultimately concluding that all four should be discounted as manifestly false. At this point, the robot-loving futurist may be pulling out their hair in consternation, bemoaning their unrealized WALL-Es (or Terminators, if you happen to be so inclined).

There is some hope, however. We simply need to look elsewhere: namely, to sound logic, reason, and analysis. This last article attempts to do exactly that. And while the ultimate conclusion may not be as woefully pessimistic as The Platonic Stance or as blindly optimistic as Strong AI, it is wrapped in the warm blanket of rationality. That should be comfortable enough.

All of the positions covered thus far fail due to one shared characteristic: intentional ignorance. The Platonic Stance willfully ignores possibility; Strong AI willfully ignores experience. This intentional ignorance, in turn, can be attributed to an implacable adherence to the philosophical categories of dualism and materialism. The central, often unspoken or unacknowledged, issue in the debate over AI is consciousness. What is it, and how is it actualized? And “the greatest single philosophical obstacle to getting a satisfactory account of consciousness is our continued acceptance of a set of obsolete categories” (xi-xii).

Dualism and Materialism have reigned long enough in philosophical debates. It is time for a regime change. We should no longer espouse “the mistaken assumption that the notions of ‘mental’ and ‘physical,’ of ‘dualism’ and ‘monism,’ of ‘materialism’ and ‘idealism’ are clear and respectable notions as they stand” (xii).

These classical notions of consciousness are inadequate. “Consciousness does not seem to be ‘physical’ in the way that other features of the brain, such as neuron firings, are physical” (xii). Consciousness is subjective, some would argue intersubjective, an intangible entity, wholly unlike observable phenomena, which produce objective data. Additionally, consciousness is not “reducible to physical processes by the usual sort of scientific analyses that have worked for such physical properties as heat and solidity” (xii). Pain is not reducible to the firing of C-fibers, or anything else for that matter. The scientific description of heat, which states that heat is nothing more than particles moving at rapid speeds, fails to account for the experience of heat. Imagine an alien race that understands particles and relative speed. These beings could easily grasp the concept of particles accelerating and releasing energy. But what if they did not experience the feeling of warmth? The scientific description could never explain this subjective experience, due to its inherent reductionism.

The acceptance of materialism denies the manifest reality of certain inner, subjective states, while dualism rejects the scientific worldview. Philosophers have effectively charged themselves with solving an irresolvable dilemma. They are guilty of the fallacy of false dichotomy, accepting only two distinct possibilities, when in fact there may be other solutions.

In order to reach a sensible solution, we need to “reject both dualism and materialism, and accept that consciousness is both a qualitative, subjective ‘mental’ phenomenon, and at the same time a natural part of the ‘physical’ world” (xiv). Consciousness need not be some otherworldly, immaterial substance, intangible and indescribable; nor should it be relegated to the realm of empirical materialism, sterile and formulaic. Instead, consciousness should be regarded as a “natural, biological phenomenon…as much a part of our biological life as digestion, growth, or photosynthesis” (xiii).

This is the view of emergent properties, of non-event causation. This is Biological Naturalism.

Part of the reason that Strong AI invariably runs into such deep logical pitfalls is that it subscribes strictly and categorically to a linear model of causation. We shouldn’t be too hard on its proponents for this; we all do it. Typically, we “suppose that all causal relations must be between discrete events ordered sequentially in time” (7). But this typical, linear view of causation need not be the only way we understand cause-and-effect relationships. Think of the solidity of a table. That solidity is “explained causally by the behavior of the molecules of which the table is composed. But the solidity of the table is not an extra event; it is just a feature of the table” (7). The same holds true for the brain and consciousness. We need not conceptualize consciousness as an extra event, an effect in the strictest sense of the word. Instead, we should think of “lower-level processes in the brain [causing] consciousness, but that state is not a separate entity from [the] brain; rather it is just a feature” of the brain (8). Formulated in this manner, consciousness is an emergent property of the brain: a feature which emerges from certain circumstances. A formal definition would be that an emergent property is a “property of a system…that is causally explained by the behavior of the elements in the system; but it is not a property of any individual element and it cannot be explained simply as a summation of the processes of those elements” (17-18).

This is precisely why Strong AI, functionalism, and any other purely computational model will never succeed at actually reproducing or generating consciousness. The best they can hope to achieve is simulation. However, a “simulation of a mental state is no more a mental state than the simulation of an explosion is itself an explosion” (18).

Biological Naturalism and Weak AI argue that AI is theoretically possible, but we must first understand the problem of consciousness in terms of the biological processes in our brains that give rise to consciousness.

The caveat? We “don’t have a clear idea of how anything in the brain could cause conscious states” (193). Not presently, at least. But whereas the Platonic Stance comprehensively rules out ever finding these answers, Weak AI is cautiously optimistic.

For example, we could “try to find neural correlates of consciousness” (196). This would not, of course, explain causation, but it would at least be a good first step. Suppose that we find, and it is a big supposition at this point, some “specific neurobiological state or set of states, call it ‘state N,’ which seems to be invariably correlated with being conscious. Then the next step is to find out whether or not you can induce states of consciousness by inducing state N and whether or not you can stop states of consciousness by stopping state N” (193). Even if we did find such a state, however, the problem is that we have no theory to explain how it works or the necessary mechanisms for making it work. Much, if not all, of the attention is focused on neurons. But it is entirely possible that we are looking in the wrong place. The answer could lie at a larger scale, such as neuronal maps, or at a smaller one, such as microtubules. The reality of the situation is that, “at this point, we frankly have to confess our ignorance” (194).

Another well-guarded secret, much less spoken about, is the fact that “we do not have a unifying theoretical principle of neuroscience” (198). True, “we know a lot of facts about what actually goes on in the brain,” but we have absolutely no idea as to what “enables the brain to do what it does by way of causing, structuring, and organizing our mental life” (198).

Not yet, at least.

The Platonic Stance unduly crushes our hopes and dreams and wildest sci-fi fantasies, while Strong AI maliciously assures us that it is a foregone conclusion. Biological Naturalism and Weak AI neither eliminate nor endorse artificial intelligence; they neither confirm nor deny the possibility of robot friends in the future. What they do is stick adamantly to facts, logic, and reason. And this is precisely what we should look for in any theory.

Work Cited

Searle, John R. The Mystery of Consciousness. New York: The New York Review of Books, 1997. Print.
