Credits
An abridged version of this paper appeared in R. Ennals and P. Molyneux (eds.), Managing with Advanced Information Technology. London: Springer-Verlag, 1993.

2. What is Artificial Intelligence?

2.1. The Emergence of Machine Intelligence

The most efficient, innovative, and 'user-friendly' of all intelligent systems are human beings: they are good at using common sense yet at the same time may also have valuable and scarce specialist knowledge; they are good at finding inventive ways of solving 'hard' problems; they are usually good at explaining what they do and why they are going about things in a particular way; they learn from experience and can apply that learning to novel situations; and at their best they are friendly, helpful and co-operative.

So if we can automate these abilities on computers, then are human beings just Psion Organisers writ large? Far from it! There is a qualitative difference between the processing done in conventional computing and the kinds of reasoning that distinguishes us as human. Unlike calculating machines, humans think symbolically rather than numerically: they organise and manipulate structured concepts and ideas rather than numbers. AI technology, modelling human information processing, is also based on symbol manipulation in this sense, and in this resides the intelligence of AI systems and their ability to reason in a human-like manner.

The argument is cogently stated by pioneers of Artificial Intelligence, Allen Newell and Herbert Simon (1975), in their discussion of the so-called 'Physical Symbol System Hypothesis', which proposes that:

A physical symbol system has the necessary and sufficient means for general intelligent action.

A physical symbol system is quite simply a rigorously organised set of symbols designating objects, properties and relations in the world, together with a collection of processes which, operating upon the organised symbol structures, will generate new structures. Symbols may be, for example, atoms in a programming language like LISP or Prolog. A symbol identified by the character string 'widget' might then represent the concept widget; the symbols 'splange-nut' and 'has-part' might represent the concept splange-nut and the relation has-as-a-part respectively. From atomic symbols one might then build symbol structures such as

(WIDGET HAS-PART SPLANGE-NUT)

and from such simple expressions, more complex expressions.
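
To make the idea more concrete, here is a minimal sketch of such a symbol system, written in modern Python rather than in LISP or Prolog. The widget, splange-nut and has-part symbols are those used above; the extra 'thread' fact and the single derivation rule are purely illustrative assumptions, not part of Newell and Simon's account.

    # An illustrative sketch only: symbols are represented as Python strings,
    # and symbol structures as triples of symbols.
    facts = {
        ("WIDGET", "HAS-PART", "SPLANGE-NUT"),
        ("SPLANGE-NUT", "HAS-PART", "THREAD"),   # an assumed extra fact
    }

    def derive_has_part(structures):
        # One simple 'process': if X has-part Y and Y has-part Z,
        # then generate the new structure X has-part Z.
        new = set()
        for (x, rel1, y) in structures:
            for (y2, rel2, z) in structures:
                if rel1 == rel2 == "HAS-PART" and y == y2:
                    new.add((x, "HAS-PART", z))
        return new - structures

    print(derive_has_part(facts))   # {('WIDGET', 'HAS-PART', 'THREAD')}

The single rule here is the kind of process the hypothesis has in mind: operating upon the given symbol structures, it generates a new structure, (WIDGET HAS-PART THREAD), that was nowhere explicitly stored.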

Now imagine a system defined in terms of such complex symbolic structures. Imagine, further, that the mutual relations between its constituent symbolic structures parallel perfectly the mutual relations that define our own knowledge states, such that its internal states are functionally isomorphic with our own. The internal states of the system, moreover, are causally connected to inputs, to each other, and to behaviour in such a way that its performance in some task domain mirrors the performance of a human being in that task domain. We might in that case be willing to say that, in spite of its differences from a human being in terms of the 'hardware' in which those functional states are physically realised, the system nonetheless has the means for human-like intelligent action.

The business of the AI researcher is, then, to construct good theories about the knowledge structures and cognitive processes of human beings. A good theory of how a subject went about solving a problem may then be viewed as a model of the problem solver's activities in tackling the task, and can be translated into a computer program. In this sense, Newell and Simon claim, "the theory performs the task it explains":

A good information processing theory of a good human chess player can play good chess; a good theory of how humans create novels will create novels; a good theory of how children read will likewise read and understand. (Newell and Simon, 1972, pp. 10-11)

In contrast, more traditional computing technologies—including data processing and word processing where the input may at first sight appear to be words or concepts or symbols—do not manipulate symbols: although they may handle text, they do not see characters but rather the ASCII codes behind the characters. Even apparently clever operations such as word search-and-replace, data retrieval or spelling- or style-checking are basically 'dumb' number-crunching operations.
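
The contrast can be seen in a few lines of modern Python, offered here purely as an illustration: what the machine compares when it 'handles' text are numeric character codes, not concepts.

    print(ord("A"))        # 65 -- the ASCII code behind the character 'A'
    print("A" == chr(65))  # True: the comparison is a comparison of codes
    # Search-and-replace is code-by-code matching, with no grasp of meaning:
    print("the widget has a splange-nut".replace("splange-nut", "bolt"))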

2.2. Building Models of Mind

Artificial intelligence, then, distinguishes itself as an intellectual and engineering enterprise in its concern with simulating the cognitive skills that make us human: the ability to reason and solve problems, the ability to see and interpret what we see, the ability to speak and understand the speech we hear, the ability to plan and to execute those plans in an intelligent manner, the ability to learn, and so on. These are schematically summarised in the figure.

The AI scientist will typically have to address himself, in the first place, to questions regarding how human intelligence works:

  • how do we understand language?
  • how do we give explanations?
  • how do we perceive the world?
  • how do we reason from evidence to conclusions?
  • how do we co-ordinate hand and eye?
  • how do we solve problems?
  • how do we learn?
  • how do we make plans?
  • how do we walk around without knocking into things?

As answers slowly emerge, he will then address himself to the issue of how computers can be made more useful through the implementation of the resultant theories. But where will the answers have come from? With its aim of building models of mind functionally isomorphic with those of human beings, AI, though itself a comparatively young field of enquiry, predictably has its intellectual roots—as the above questions indicate—in many more traditional academic disciplines. Psychology, linguistics, the biological and brain sciences, mathematics, logic, and philosophy, though the academic division of intellectual labour may have forced them apart in the early years of this century, all have valuable contributions to make to the study of human intelligence. As Gunther Kress and Bob Hodge perceptively note, in another context:

disciplines exist for the sake of their subjects, not the other way round. If the boundary that has been drawn around a discipline proves a hindrance to the proper study of that subject matter, then it is the boundary that must change.
(Kress and Hodge, 1979, p.3)

The boundaries are now being redrawn around a field of endeavour that, with further input from more recent disciplines such as computer science and electronic engineering, is bringing together experts from each of these disciplines with the goal of forging new silicon-based brains that aim to perform as well as—and sometimes outperform—human beings in knowledge-based tasks.

From psychology, in particular, come models of human mental processing—of reasoning, perception, understanding, memory, learning, planning, perhaps even emotion—which may inform the design of artificial minds. From linguistics, precise models of linguistic knowledge and language use, which provide the theoretical basis for the design of systems that can communicate naturally with the outside world. Philosophy, on the other hand, in addition to providing us with the formal languages of logic with which to express our theories clearly and concisely, gives us the conceptual tools for reasoning about the nature of mental states and the forms that knowledge must take.

The intellectual forebears of AI are too numerous for their contributions to be detailed in this chapter. René Descartes, Gottfried Leibniz, Julien Offray de la Mettrie (author of L'Homme Machine), Charles Babbage, and Augustus De Morgan, among others, all have their places in the intellectual genealogy. Particularly deserving of mention, however, are the mathematicians George Boole, Alan Turing, and Alonzo Church, who in their various ways attempted to formulate 'principles of reasoning'.

George Boole (1815-64) was a self-taught mathematician and the inventor of symbolic logic, with which he hoped to be able to give an account of 'the laws of thought'. In The Mathematical Analysis of Logic (1847) and The Laws of Thought (1854), Boole formulated precise definitions of the connectives and, or, not and implies, and invented class-inclusion logic and the two-valued 'Boolean' logic, all of which prefigure the basic principles of computer science and artificial intelligence.
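
Boole's two-valued logic is easily written down in modern notation. The sketch below is an illustration rather than anything taken from Boole's own texts: it defines 'p implies q' in the standard way, as 'not p or q', and prints the familiar truth tables.

    # Boole's two-valued connectives, sketched in Python for illustration.
    def implies(p, q):
        # 'p implies q' is false only when p is true and q is false
        return (not p) or q

    for p in (True, False):
        for q in (True, False):
            print(p, q, "and:", p and q, "or:", p or q,
                  "not p:", not p, "implies:", implies(p, q))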

The twentieth-century mathematicians Alonzo Church and Alan Turing may be jointly credited with demonstrating, each independently, that in principle any well-defined mental operation can be performed by a suitably programmed machine—what we would now call a 'universal Turing machine'. (Turing and Church are difficult reading. See Penrose, 1990, for a lucid discussion.) Turing's career from the 1940s was, in fact, closely entwined with the development of the modern digital computer. After setting out the theoretical foundations of computing in the mid-1930s, he worked during World War II at Bletchley Park as one of a team of academics who had been assembled by the British government to try to crack the coded messages broadcast by the German armed forces. To help them in this task, they built what was arguably the world's first electronic computer. The machine, called Colossus, was built two years before ENIAC, the first US computer, but it was cloaked in military secrecy and, being designed for code breaking, did not have a general-purpose architecture. Alan Turing also worked, however, on the world's first commercially available electronic computer, the Ferranti Mark I.
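
To give a flavour of what 'a suitably programmed machine' means here, the following is a small illustrative sketch, not drawn from Turing's own papers: a bare-bones Turing machine simulator, run on a trivial machine that flips every bit of its input. The machine table flip_bits and the state names are assumptions made for the example, and the tape is only allowed to grow to the right.

    def run_turing_machine(program, tape, state="START", blank=" "):
        # program maps (state, symbol) -> (new state, symbol to write, move)
        tape = list(tape)
        head = 0
        while state != "HALT":
            symbol = tape[head] if head < len(tape) else blank
            state, write, move = program[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += 1 if move == "R" else -1
        return "".join(tape).strip()

    flip_bits = {
        ("START", "0"): ("START", "1", "R"),
        ("START", "1"): ("START", "0", "R"),
        ("START", " "): ("HALT", " ", "R"),
    }

    print(run_turing_machine(flip_bits, "10110"))   # prints 01001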

Thus computers—the vogue technology of the late 1940s—were to become the newest metaphor for the mind. Newspapers of the time were full of articles about the 'superhuman brain' and 'electronic genius'. Swept up in the intellectual euphoria of the period, Turing published, in 1950, an entertaining and highly readable paper entitled 'Computing Machinery and Intelligence' in which he addressed the question "Can machines think?" Turing opens the paper with what he calls the 'Imitation Game' (later to be known as the 'Turing Test'): imagine a computer (C), a human being (H), and an interrogator (I) in separate rooms, with one teleprinter line from the interrogator to the human, another from the interrogator to the computer, and no other means of communication. Furthermore, the interrogator does not know which line goes to H and which to C. The object of the game is for the computer (C), in responding to I's questions, to fool I into believing that it is C and not H 'who' is the human being. The ultimate aim, that is, is to effectively program a computer so that it can convincingly imitate a human. The question and answer session, Turing suggested, might go something like this:


   I:  Please write me a sonnet on the subject of the Forth Bridge. 
   C:  Count me out on this one. I never could write poetry.
   I:  Add 34957 to 70764. 
   C:  (Pause about 30 seconds and then give as answer) 105621.
   I:  Do you play chess? 
   C:  Yes. 
   I:  I have K at my K1, and no other pieces. You have only K at K6 and R at R1.
       It is your move. What do you play? 
   C:  (After a pause of 15 seconds) R-R8 mate.

Notice that C gives a wrong answer to the addition sum in the dialogue above (34957 added to 70764 is 105721, not 105621); successfully imitating a person involves mimicking human errors and lapses. If, after a reasonable number of questions, the interrogator cannot tell which line is connected to the human and which to the computer, then the computer might be said to think.

In the second part of the paper, Turing raises, and dismisses, some of the reasons (such as the argument that computers cannot be creative) why it might not be feasible to program a computer to pass his test.

An annual contest is still held for systems to compete in attempting to pass the 'Turing Test', though probably more in commemoration of its eponymous hero than in the expectation of an eventual success, since, despite thirty-five years of boasts and promises and predictions, passing the Turing Test is still far beyond the capabilities of any existing computer program. It will be a very long time indeed before we ever see a system capable of displaying general intelligence comparable to that of any human being.

Meanwhile applied AI has forged ahead with the more modest task of developing and producing, often in collaboration with government and industry, high-level tools to enhance the efficiency and productivity of knowledge-processing personnel in the workplace. An overview of some of the actual achievements, and of some of the national and trans-national collaborative ventures, is given in the next section.