The Robotic Bull in the China Shop

The concept of artificial intelligence has captivated minds for decades. The list of novels, television shows, and films centered on self-aware robotic beings is too long to enumerate, and I'm sure one or two influential works have popped into your head while you were reading this sentence. These works vary in their take on the nature of such beings; sometimes the robots are presented as benevolent and superior, other times as inimical and contemptible. Yet despite their disparate stances on the moral character of artificially intelligent robots, these novels, television shows, and films have something in common: they all take it as a foregone conclusion that artificial intelligence is scientifically feasible.

This is a mistake. While the artists responsible for these various works obviously had to assume that artificial intelligence (henceforth AI) was both theoretically possible and conceptually coherent, scientists and philosophers have not been so eager to jump to the same conclusion, at least not unanimously. The question of AI is a contentious one, responsible for many a heated debate over the past few decades, because it is a loaded question. At the very heart of the issue resides one of the most mysterious and equally contentious facets of human nature: consciousness. What does it mean to be conscious? Self-conscious? Aware? Intelligent? What do we mean when we speak of 'mind'? These questions are, philosophically speaking, as old as the hills. Plato first attempted to answer the mind/body problem, followed by Descartes, Hume, and countless others thereafter. And within this vast and variegated multitude there is not a single univocal tenet or unassailable assumption.

Fortunately, we can break down the various camps into three distinct factions: Strong AI, Weak AI, and the Platonic Stance. Proponents of Strong AI assert that when we speak of mind we are speaking of nothing over and above the computational procedures carried out by the physical brain, procedures which result in certain behaviors. Strong AI supporters insist that if we can create algorithms which effectively simulate human behavior, then we have simulated consciousness, intelligence, and awareness. Weak AI proponents posit that, while it is possible to create programs and computational procedures which produce a specific and desired behavior, these programs and procedures do not simulate human consciousness, intelligence, and awareness; there is simply more to consciousness, and to the mind, than computation. Lastly, the Platonic Stance contends that no simulation, no matter how accurate or predictable, could ever recreate the mind or consciousness, because these things are simply too mysterious for science to understand.

The purpose of this article is to show that Strong AI is an untenable position. It is simply false, and the kind of AI required for the robots that have captured our hearts and captivated our imaginations in so many works of fiction remains, in terms of our present-day scientific understanding, just that: a fiction. Weak AI, however, is a much more reasonable position, and one that allows for the possibility of AI, but only after significant, paradigmatic changes in our understanding. The Platonic Stance should also be discounted and disregarded, because it is nothing more than thinly veiled mysticism and, somewhat paradoxically, it grants a rigidity and infallibility to scientific endeavors which are, at their core, inherently malleable and notoriously fallible. Simply because science cannot currently explain a particular phenomenon, it does not follow that the phenomenon cannot be explained by science in the future. This is all that needs to be said to refute the Platonic Stance. We will now turn to John Searle's famous Chinese Room Argument in order to refute Strong AI.

So what is the Chinese Room Argument? It is a simple yet profound thought experiment. Imagine an individual in a room. There are no windows or doors, just one small slot through which the individual can receive items and another slot through which the individual can send items out. Also in this room is a complex yet comprehensive manual with very detailed instructions about what exactly the individual is supposed to do with the information received through the slot. Now, sheets of paper with various Chinese symbols begin filtering in through the 'in' slot. This is no surprise to the individual; the manual said this would happen. Once these papers come in, the individual consults the manual and finds the corresponding symbols in its pages. The manual shows another set of symbols, which the individual writes down below the received symbols before sending the page back out through the 'out' slot. This goes on for some time: papers coming in, the individual consulting the manual and writing symbols, papers going out. Researchers on the other side of the room check the answers and are not surprised to find that they are always correct. What the individual in the room is doing by consulting the manual is essentially the same thing a computer does when executing a program. The individual is presented with certain symbols, checks the instructions, performs the appropriate actions, and produces another set of symbols. Simple enough.
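To make the analogy with a running program concrete, here is a minimal sketch in Python. The rulebook entries below are hypothetical placeholders invented for illustration; the point is that the procedure matches incoming shapes against a table and emits the prescribed shapes, at no step consulting, or even representing, what any of the symbols mean.

```python
# Hypothetical rulebook: each entry pairs an incoming string of symbols
# with the string the manual says to send back out. The specific symbols
# here are placeholders chosen purely for illustration.
RULEBOOK = {
    "你好吗": "我很好",
    "故事的主角是谁": "农夫",
}

def chinese_room(incoming: str) -> str:
    """Consult the manual and return the prescribed symbols.

    Nothing here parses or interprets the symbols; the function only
    matches the incoming shapes against the rulebook's entries.
    """
    return RULEBOOK.get(incoming, "")  # no matching rule: send nothing out

# Papers come in through the 'in' slot; answers go out the 'out' slot.
for paper in ["你好吗", "故事的主角是谁"]:
    print(paper, "->", chinese_room(paper))
```

However elaborate the rulebook becomes, the program's structure stays the same: shape in, lookup, shape out.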

Now, let us suppose that the Chinese symbols going into the room are questions about a certain Chinese fable, and that the symbols produced by the individual and sent out of the room are answers to those questions. Here is the rub, the most important point, the beautiful brevity of the Chinese Room Argument: while the researchers on the other side may be impressed by the accuracy and computational ability of the individual in the room, no one in their right mind would assert that the individual understands Chinese, the Chinese fable, the questions being asked, or the answers given. In fact, the individual possesses no more understanding or awareness of these things than a rock on Mars. The individual is simply following a stale, formulaic, and precise program.

And this is precisely why Strong AI is false. We have no good reason to believe that a computer executing a program actually understands the program, the parameters of the program, the subject matter of the program, or the actions generated by the program. There is simply no basis for the assertion that the computer is conscious or aware or intelligent in the sense that everyday human beings are conscious, aware, or intelligent. It is indeed possible to write complex algorithms which enable a machine to perform impressive tasks, sometimes far exceeding the capabilities of human beings. But the crucial distinction is that the machine does not understand, is not aware, is not intelligent. It is an insentient machine.

All of this is not to say that AI is impossible or unattainable. It is simply to say that in order to genuinely simulate, or create, consciousness, we must look beyond our conceptualization of consciousness as nothing over and above simple computational procedures. We must, as Roger Penrose suggests, expand our current ideas of the laws of physics by looking to the mysterious and often paradoxical quantum world, a world which often behaves in non-computational ways.

We should reject Strong AI as degrading to our essence as human beings. We must reject the Platonic Stance as insulting to our capacity to learn and grow and discover. If we are to take a stance, it should be the reasonable, dignified, and hopeful stance of Weak AI.

That is, as long as you believe that AI will produce WALL-Es and not Terminators.
