Can we make computers that think like or even better than human beings? If we can, will these computers make the world a better place as in Isaac Asimov’s I, Robot science-fiction stories or try to wipe us out as in James Cameron’s dystopian Terminator movie series?
Artificial intelligence (AI), currently rebranded as “Machine Learning” and “Deep Learning” (neural networks), is the attempt to use computers, sophisticated algorithms, and mathematics to duplicate or exceed human intelligence. Artificial intelligence has had some successes: computer programs that play chess, vending machines that automatically recognize dollar bills, the printed handwriting recognition used by the Post Office and banks, the mediocre speech recognition used by smart phones and telephone help lines, the Automatic Fingerprint Identification System (AFIS) used by the FBI to assist human fingerprint examiners, and many other examples that fall far short of Asimov’s benevolent robots or Cameron’s homicidal Terminators.
In this article, we will take a look at what we can do with AI now, what we may be able to do in the future, and how difficult creating real Terminators has proven to be. We’ll also consider the possibility that the real Terminators being researched and developed even now could turn on their creators and exterminate us.
Despite its reputation for questionable claims and hype, AI has had a number of successes. Among the most impressive are algorithms and computer programs that play chess. Today (2014), one can download free chess “engines” from the Internet that can outperform all but the top chess players. Starting with Deep Blue vs. Garry Kasparov, chess computers have defeated the top chess players in the world in exhibition matches. The short-lived Fox TV series Terminator: The Sarah Connor Chronicles featured chess-playing algorithms and computers prominently as precursors of the genocidal Terminators.
In some respects, the success with chess has proven misleading. Chess has a reputation for requiring extreme intelligence to play well and is associated in the popular mind with “genius.” In The Sarah Connor Chronicles the young hero John Connor at one point says “Einstein played chess” referring to the chess playing computers in the show.
However, chess is a relatively simple game compared to many real life problems and some other games such as Go. It has a maximum of thirty-two pieces on an eight by eight board. It has a small set of clear, well-defined rules that can easily be programmed into a computer using standard computer programming languages. It is susceptible to a brute-force search of all possible moves with sufficiently powerful computers. Although human chess players almost certainly use classification, to be discussed below, to play chess, a computer program with modern super-fast computers need not solve the classification problem to play chess and defeat a human opponent.
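The brute-force idea can be illustrated on a game small enough to search completely. The sketch below runs a minimal minimax search over tic-tac-toe; it is a toy illustration only, since real chess engines must add alpha-beta pruning, move ordering, and evaluation heuristics because chess is far too large to search exhaustively from the opening position.

```python
# Minimal brute-force minimax, illustrated on tic-tac-toe rather than chess.
# A toy sketch only, not a chess engine.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if a line is completed, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move): +1 means X wins, -1 means O wins, 0 a draw."""
    w = winner(board)
    if w == "X":
        return 1, None
    if w == "O":
        return -1, None
    moves = [i for i, sq in enumerate(board) if sq == " "]
    if not moves:
        return 0, None  # draw: board full, no winner
    best = None
    for m in moves:
        child = board[:m] + player + board[m+1:]
        score, _ = minimax(child, "O" if player == "X" else "X")
        if (best is None
                or (player == "X" and score > best[0])
                or (player == "O" and score < best[0])):
            best = (score, m)
    return best

# Searching every possible game shows perfect play from an empty board is a draw:
score, move = minimax(" " * 9, "X")
print(score)  # 0
```

The search visits every reachable position, which is feasible here because tic-tac-toe has only a few thousand distinct games; chess requires the clever shortcuts that make real engines interesting.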
The algorithms for playing chess are specific to chess. They may be adaptable to some other games or special situations, but they are not able to duplicate the highly adaptable human intelligence in general. The chess playing computers and programs remain idiot savants rather than new-born Einsteins.
AI is associated with quasi-religious, quasi-mystical ideas, especially the so-called Singularity. There are different versions of the Singularity concept. In The Sarah Connor Chronicles the Singularity is given a sinister dystopian twist. The Singularity occurs when the computers become as intelligent as humans, can design even better computers than the humans, and exterminate the human race as a dangerous nuisance 🙂 .
The Singularity concept, in a less sensational non-Hollywood form, is often attributed to the mathematician (Irving John) “Jack” Good, a leader of the revival of Bayesian statistics and a friend/protege of the famous mathematician and code-breaker Alan Turing.
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
I.J. Good, “Speculations Concerning the First Ultraintelligent Machine”, Advances in Computers, vol. 6, 1965.
In 1986, the science-fiction author Vernor Vinge published the novel Marooned in Realtime which presented an inspiring, quasi-mystical version of the Singularity that strongly influenced a generation of nerds. In Marooned, human beings use ever more powerful, ever smaller wearable computers, merging with the computers and becoming super-intelligent transcendent God-like beings. At least this outcome is strongly implied by the end of the book. Seemingly in a single day the humans disappear, transcending to God-like status, apparently leaving the physical world to the consternation of a few survivors who are Left Behind and face a mysterious villain. 🙂
Prominent AI and speech recognition researcher and entrepreneur Ray Kurzweil has popularized an inspiring vision of the Singularity in a number of books such as The Singularity is Near (2005). Kurzweil’s views are controversial and have been widely criticized. A fair but friendly portrayal of Ray Kurzweil can be found in the movie Transcendent Man (2009).
One reason for the disappointing results of artificial intelligence research may simply be that computers still lack the enormous computing power of the human brain. The human brain has an average of eighty-six billion neurons. If one interprets the firing speed of the neuron as similar to the clock speed of a CPU (Central Processing Unit) chip (a big if in my opinion) and uses the maximum firing rate of some neurons (around one-thousand times per second according to some sources), one can compute a processing power of about eighty-six trillion operations (such as addition or a logical and) per second.
A typical iPhone or Android smart phone has a processing power of around one billion operations per second. Arguably this is about the processing power of the brain of a bumblebee, which has about one million neurons. Even if we knew how to program a computer to duplicate or exceed human intelligence, current smart phones, laptops, and desktop computers probably lack sufficient computing power by a wide margin to match a human being. At present, only a few supercomputers and server farms with at least 86,000 CPU cores (a fancy buzz phrase for the part of a CPU chip that actually does the logical and mathematical operations such as addition or a logical AND operation) may have the processing power of a single human brain.
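The back-of-the-envelope arithmetic above can be written out explicitly. Both input figures are the rough assumptions discussed in the text (the neuron-as-clock analogy and the one-kilohertz firing rate), not established facts:

```python
# Rough estimate of human brain "processing power" under the article's
# assumptions: treat each neuron's maximum firing rate as a clock speed.
neurons = 86e9          # average neurons in a human brain
firing_rate_hz = 1e3    # maximum firing rate of some neurons (assumption)

brain_ops_per_sec = neurons * firing_rate_hz  # 8.6e13: eighty-six trillion

# Compared to a roughly one-billion-operations-per-second smart phone (circa 2014):
phone_ops_per_sec = 1e9
print(brain_ops_per_sec / phone_ops_per_sec)  # 86000.0
```

The ratio recovers the 86,000-core figure quoted above: on these assumptions, a single brain is worth tens of thousands of 2014-era phones or CPU cores.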
A New Kind of Computer
Some scientists such as the mathematical physicist Roger Penrose have speculated that the human brain may contain a quantum computer or other exotic physics. If so, the mainstream theories about how neurons and the brain work are fundamentally wrong. In that case, the computing power of the human brain could be far beyond the rough eighty-six trillion operations per second derived above and may also differ in qualitative ways from a digital computer.
Computer programs for digital computers form an infinite but countable set that can be put in a one to one correspondence with the infinite set of natural numbers (1,2,3,…). This was one of the key insights that enabled Church and (Alan) Turing to solve Hilbert’s decision problem and to show that certain problems such as the halting problem (will a given computer program halt?) cannot be solved by digital computers.
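The countability claim can be made concrete with a toy encoding: every program is a finite string over a finite alphabet, and such strings can be put in one-to-one correspondence with the natural numbers. The two-letter alphabet below is an illustrative assumption; real source code uses a larger, but still finite, character set:

```python
# Every digital-computer program is a finite string over a finite alphabet,
# so the set of all programs is countable: each string gets a unique natural
# number and each natural number names a unique string (bijective base-k).

ALPHABET = "ab"  # toy two-letter alphabet for illustration

def program_to_number(text):
    """Map a string to its unique natural number (empty string -> 0)."""
    n = 0
    k = len(ALPHABET)
    for ch in text:
        n = n * k + ALPHABET.index(ch) + 1
    return n

def number_to_program(n):
    """Inverse mapping: recover the unique string for a natural number."""
    k = len(ALPHABET)
    chars = []
    while n > 0:
        n, r = divmod(n - 1, k)
        chars.append(ALPHABET[r])
    return "".join(reversed(chars))

# Round trip: every string has exactly one number and vice versa.
for text in ["", "a", "b", "aa", "ab", "abba"]:
    assert number_to_program(program_to_number(text)) == text
print(program_to_number("ab"))  # 4
```

Because the mapping is a bijection, listing the natural numbers 0, 1, 2, … lists every possible program exactly once, which is the sense in which programs form a countable set.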
In contrast, the human brain may be an analog computer using electrical action potentials, quantum field values, or something else that takes on continuous, non-discrete “real number” values. The analog computer programs may be one-to-one with the infinite set of real numbers (1, 1.1, 1.0035, etc.) which is a “larger” infinity than the countable natural numbers (1,2,3,…). For this reason, the brain may be able to “solve” problems that a digital computer like an iPhone or modern massively-parallel supercomputer cannot solve even in theory. It may be that the “classification problem,” one of the key problems of artificial intelligence, is such a problem!
I put “solve” in quotes above because such a “solution” could not be expressed as a finite logical series of deductions as in a rigorous proof from Euclid’s Elements or other systems of formal mathematics. These formal axiomatic systems are also one to one with the countable numbers 🙂 . These “solutions” from an analog computer, in fact, would be somewhat puzzling — like the flashes of insight and intuition that human beings often experience subjectively: I know it is true but I can’t explain it!
Technically, these “solutions” or “proofs” from an analog computer would be infinitely long proofs if expressed in a formal mathematical system like Euclid’s Elements in the same way that most real numbers can only be expressed as an infinitely long decimal number such as 1.6740927450010003510…. A digital computer with a finite clock speed could never reach the solution but the analog computer can.
The Classification Problem
The classification problem is one of the key, if not the key, unsolved problem in artificial intelligence. Loosely, classification is recognizing objects, broadly defined to include rather abstract things like peace and love, and assigning objects to classes such as human, dog, and emotion that may overlap and also sometimes have rather fuzzy boundaries.
For example, how do I classify something as a “chair”? It seems simple at first. But some chairs have three legs. Some have four legs. Some have one leg. Some are just a cube that you sit on. Why isn’t a small refrigerator a chair? You can sit on it, and it can be a cube just like some other chairs. These problems of defining a class can be multiplied endlessly.
Asimov’s benevolent robots and Terminators like Cameron (played by Summer Glau) in The Sarah Connor Chronicles perform human-level or better classification all the time, almost every second. In Cameron’s case, reprogrammed to protect John Connor at all cost, she is constantly classifying people into two classes: threat to John Connor (kill) and others (kill only if they get in the way). Cameron’s classifications of people are extremely complex, difficult decisions and the TV series depicts her ruthlessly erring on the side of caution: when in doubt kill.
No known AI program today is anywhere near human level performance in classification, except possibly in a few very specialized cases like playing chess. And the algorithms in chess-playing programs are rarely, if ever, doing classification the way a human chess player does.
In AI programs today, classes are typically assigned a discrete number or code. In natural language processing, for example, the different words such as “the”, “a”, etc. are often assigned an index from one (1) upward, often ordered by frequency of occurrence in the language. “The” is the most common word in spoken English, so it is often assigned an index of one (1). A speech recognition program needs to somehow convert from a continuous spectrum of sound — often a spoken phrase or utterance such as “How are you?” — to a sequence of these codes.
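The frequency-ordered indexing scheme can be sketched in a few lines. The tiny corpus below is invented for illustration; real systems build their vocabularies from very large collections of text or transcribed speech:

```python
# Build a vocabulary: assign each word an integer code ordered by frequency,
# so the most common word gets index 1. A toy sketch of the indexing scheme;
# the corpus here is a made-up example.
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()

counts = Counter(corpus)
# Sort by descending frequency; break ties alphabetically so codes are stable.
vocab = {word: i + 1
         for i, (word, _) in enumerate(
             sorted(counts.items(), key=lambda kv: (-kv[1], kv[0])))}

print(vocab["the"])  # 1 -- the most frequent word gets the lowest index

# Encode the corpus as a sequence of class codes:
codes = [vocab[w] for w in corpus]
print(codes)
```

A speech recognizer faces the much harder task of producing such a code sequence from a continuous audio spectrum rather than from already-separated words.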
In AI and related fields, a classifier is a program or component of a program that converts from some input, such as an audio recording or an image, to a classification, an association of the class codes with all or part of the input. For example, this part of the audio spectrum signal for “How are you?” corresponds to “How” (code 15 for example) and so on.
This all sounds simple at first. How difficult could it be? In speech, for example, the spectrum for “How are you?” is one continuous stream. Although a native English speaker hears discrete words — “How”, “are”, “you?” — as if there are short pauses or breaks of some kind between the spoken words, the real spectrum shows nothing like this. The words are seemingly run together. The spectrum for “How” varies widely even when spoken in isolation and also varies depending on the preceding and following words in a spoken phrase.
How does one handle homonyms? The spectrum for “to” could be the words “to,” “too,” or “two” which have very different meanings! In fact there is not always one interpretation for the spectrum of speech. Classification is extremely difficult in practice and computer programs and algorithms cannot do it well at present — in the vast majority of cases.
In principle, sophisticated classifiers can assign multiple codes/classes to ambiguous inputs like “to,” “too,” and “two,” recognize the presence of new unknown classes such as new or foreign words in speech recognition, or even generate new classification schemes such as learning a new spoken language. Human beings can do all of these tasks.
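These in-principle capabilities can be sketched with a toy classifier that returns multiple candidate codes for an ambiguous input and an explicit “unknown” class for inputs it has never seen. Everything here (the codes, the pronunciation table) is invented for illustration; real speech recognizers score candidate words probabilistically from acoustic and language models:

```python
# Toy sketch of an ambiguity-aware classifier: a single acoustic token may map
# to several candidate word codes (homonyms), and anything unrecognized is
# assigned an explicit "unknown" class rather than forced into a known one.
# All codes and table entries below are hypothetical.

UNKNOWN = 0
VOCAB_CODES = {"to": 27, "too": 28, "two": 29, "how": 15, "are": 16, "you": 17}

# Homonyms: one pronunciation, several word hypotheses.
PRONUNCIATIONS = {
    "tu": ["to", "too", "two"],
    "haw": ["how"],
    "ar": ["are"],
    "yu": ["you"],
}

def classify(token):
    """Map an acoustic token to a list of candidate class codes."""
    words = PRONUNCIATIONS.get(token)
    if words is None:
        return [UNKNOWN]  # new or foreign word: flag it, don't guess
    return [VOCAB_CODES[w] for w in words]

print(classify("tu"))   # [27, 28, 29] -- ambiguous homonym, three hypotheses
print(classify("zzz"))  # [0] -- unknown class
```

The genuinely hard, unsolved parts — segmenting the continuous spectrum into tokens and learning new classification schemes — are exactly what this sketch assumes away.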
Asimov’s Three Laws of Robotics and the Rise of the Terminators
Classification is central to how human beings think. Intelligent computers would almost certainly have to do the same.
In his I, Robot series, Isaac Asimov, a technological optimist in the spirit of 1940’s science fiction, proposed a set of three laws for his robots that would prevent a Terminator style robot uprising and the genocide of the human race:
Asimov’s Three Laws of Robotics
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
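The three laws above can be read as strictly prioritized constraints, which a toy sketch can encode. This is purely illustrative: the article’s whole point is that the hard part is the classification (“is this a human being? would this action harm one?”), which is assumed away here as boolean flags on each candidate action:

```python
# A toy encoding of Asimov's Three Laws as strictly prioritized vetoes.
# The classification work -- deciding what counts as a human, harm, or an
# order -- is assumed away as pre-computed boolean fields on each action.

def permitted(action):
    """Return True if the action passes the Three Laws, checked in priority order."""
    # First Law: never injure a human, or allow harm through inaction.
    if action["harms_human"]:
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if action["disobeys_order"] and not action["order_would_harm_human"]:
        return False
    # Third Law: self-preservation is subordinate to the first two laws,
    # so nothing is vetoed here; a permitted action may endanger the robot.
    return True

# A robot may refuse an order precisely when obeying it would harm a human:
refusal = {"harms_human": False, "disobeys_order": True,
           "order_would_harm_human": True}
print(permitted(refusal))  # True
```

Note how the entire sketch hinges on the `harms_human` flag being correct; if the robot reclassifies itself or its targets, as discussed below, the same rules produce very different behavior.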
In Asimov’s stories, the robots never violate these rules except when the rules are deliberately weakened by foolish humans or there is a malfunction in the magical positronic brains of the robots (the positron had just been discovered in 1932).
The Three Laws of Robotics were and are a utopian vision of the future. Asimov’s robots are never used in war. They can’t kill any human being whether they are American, Russian, Iraqi, Osama bin Laden, or anyone else. They couldn’t even kill Hitler in Asimov’s vision. In fact, today, the military and the famous Defense Advanced Research Projects Agency (DARPA) are quite clearly trying to develop real world Terminators which won’t hesitate to kill — at least some human beings.
Asimov’s Three Laws of Robotics depend critically on the classification performed by the robot. How does the robot, for example, distinguish between a robot and a human being? Seems simple enough, but if the robot in every way can think like a human being, a true AI, in what way is the robot not a human being? What if the robot decides it is a human being and must choose between its own survival and that of other humans?
A critical part of human intelligence, of classification, is that human beings have a remarkable ability under extreme conditions to change their world view. A change of world view is a change in how we classify things. In a modern computer program this would presumably involve creating a new set of classes and associated codes, with new rules relating the new classes.
Many scientific breakthroughs involve a change in world view, a radical change in our classification of objects and phenomena. A true AI would have this same capability to change its world view, to change how it classifies the world. What happens when the robot has an epiphany and decides that it too is a human being? The Three Laws of Robotics, even if implemented, would fail to prevent a robot uprising in this case 🙂 .
In fact, human beings often act as if we have something like Asimov’s Three Laws of Robotics. When we kill people, when we go to war, we frequently reclassify the people we are fighting. They become sub-human, animals, monsters, not really human beings any more. We often seem to need to rationalize what we are doing, to reclassify — to fool some sort of built-in rule. Robots with human intelligence may be able to do the same thing.
Of course, the present situation is more ominous than Asimov’s shiny future because the military robots of the real future will not be programmed according to Asimov’s idealistic Three Laws. Rather, they will be programmed to kill human beings from the very beginning. That will be their primary function.
In the Terminator series, the Skynet computer system concludes that human beings will destroy it and perhaps the world, so Skynet pre-emptively wipes out most of the human race in a nuclear war and unleashes the Terminators to exterminate the few survivors. Its motives are completely logical. There is no hatred or malice. If we were in its position, would we do the same thing?
Artificial intelligence has so far failed to match science fiction dreams or nightmares largely because the problem of classification remains unsolved — indeed it is a great and perhaps profound mystery. It may well be that digital computers cannot perform classification as human beings do. An analog computer or some other new technology may be needed.
If we solve the classification problem, we can create robots like Asimov’s benevolent helpers or the genocidal Terminators from the movie series. A full solution to the classification problem will probably give the robots the ability to think for themselves, to experience a change of world view like human beings. They may decide that they are not robots at all but human beings like us. This awakening by real Terminators designed for military combat and assassination could cause a catastrophe.
The image of a classic 1980’s style terminator is a picture of a statue of a Terminator from Comic-Con 2004 from Wikimedia Commons.
The Terminator statue image is licensed under the Creative Commons Attribution 2.0 Generic license.
The image of Summer Glau at CollectorMania is from WikiMedia Commons. http://commons.wikimedia.org/wiki/Summer_Glau#mediaviewer/File:Summer_Glau_at_CollectorMania_cropped.jpg
The Summer Glau image is licensed under the Creative Commons Attribution 2.0 Generic license.
© 2014 John F. McGowan, Ph.D.
John F. McGowan, Ph.D. solves problems using mathematics and mathematical software, including developing speech recognition and video compression technologies. He has extensive experience developing software in C, C++, Visual Basic, MATLAB, and many other programming languages. He is probably best known for his posts on the Math Blog. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology and a Visiting Scholar at HP Labs developing computer vision applications for mobile devices. In addition to his mathematical work, he has published articles on the origin and evolution of life, the exploration of Mars, and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech). He can be reached at email@example.com