Digression 1: Intentionality.

I wish to invoke the concept of 'intentional systems'. An intentional system is one whose behaviour can be -- at least much of the time -- explained and predicted by relying on ascriptions to the system of beliefs, desires, hopes, fears, intentions, hunches, and other such mental attitudes. That is, a particular thing is an intentional system only in relation to the strategies of someone who is trying to understand, explain or predict its behaviour.

Consider, by way of example, the case of a chess-playing computer, and the different strategies or stances one might adopt, as its opponent, in trying to predict its moves. First, there is the 'design stance': if one knew exactly how the computer, or its chess-playing program, was designed, one might in principle predict its designed response to any move one makes by following the instructions in the program. We would say, "Oh, it made this move because it was programmed to behave in such-and-such a way in response to that move and that configuration of pieces", just as we can say of a coffee machine, "It produced this cup of coffee because I put 10 pence into the slot and pressed that button".
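A minimal Python sketch may make the design stance concrete (the table of programmed responses below is purely hypothetical, not taken from any real chess program): if one knows the program's rules, one predicts its move simply by reading those rules off.

    # Illustrative sketch of design-stance prediction: knowing the program's
    # rules, we predict its move by consulting those rules directly.
    # The response table is hypothetical.
    PROGRAMMED_RESPONSES = {
        "e4": "c5",   # the program is written to answer 1.e4 with c5
        "d4": "Nf6",  # and 1.d4 with Nf6
    }

    def predict_from_design(opponent_move):
        """Predict the machine's reply by reading off its programmed rule."""
        return PROGRAMMED_RESPONSES.get(opponent_move, "resign")

    print(predict_from_design("e4"))  # -> "c5": it was programmed to do so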

Second, there is what we might call the 'physical stance', according to which our predictions are based on the actual physical state of the particular object, and are worked out by applying whatever knowledge we have of the laws of nature. It is from this stance that we might predict, say, the malfunction of systems that are not behaving as designed. For example, "This coffee machine isn't working because it hasn't been plugged in".

But chess-playing computers these days are inaccessible to prediction from either the design stance or the physical stance; they have become too complex for even their own designers to view from the design stance. A player's best hope of defeating such a machine in a chess match is to predict its responses by figuring out as best (s)he can what the best or most rational move would be, given the rules and goals of chess. Put another way, when one can no longer hope to beat the machine by utilizing one's knowledge of programming or of physics to anticipate its responses, one may still be able to avoid defeat by treating the machine rather in the way one would an intelligent human opponent. This third stance is the 'intentional stance'; and one is then viewing the computer as an 'intentional system'. One predicts its behaviour in such cases by ascribing to the system the possession of certain information and supposing it to be directed by certain goals, and then by working out the most reasonable or appropriate action on the basis of these ascriptions and suppositions.
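By way of contrast with the earlier sketch, intentional-stance prediction can be sketched as follows (again purely illustrative, with a made-up evaluation function standing in for whatever notion of 'best move' we ascribe): one ignores the machine's internals entirely, credits it with the goal of winning and with knowledge of the legal moves, and predicts that it will choose whatever move is most rational.

    # Illustrative sketch of intentional-stance prediction: we know nothing of
    # the machine's internals, so we ascribe it a goal (play the best move)
    # and information (the legal moves), and predict the most rational choice.
    def evaluate(move):
        """Hypothetical scoring of how good a move looks for the machine."""
        scores = {"Qxf7#": 1000.0, "Nc3": 0.3, "a3": 0.05}
        return scores.get(move, 0.0)

    def predict_from_intentional_stance(legal_moves):
        """Predict the move a rational, win-seeking agent would choose."""
        return max(legal_moves, key=evaluate)

    print(predict_from_intentional_stance(["a3", "Nc3", "Qxf7#"]))  # -> "Qxf7#"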

Notice, however, that there is a difference between, on the one hand, explaining and predicting the behaviour of complex systems by ascribing beliefs and desires to them and, on the other hand, crediting such systems with actual beliefs and desires. We very often make this mistake with animals: we believe that dogs, for example, answer to their names or sit when told to sit, rather than simply respond appropriately to certain familiar vocal noises; in a similar vein, we speak of mice being 'scared' of cats, of trapped flies 'wanting' to escape from webs. Users of ELIZA have all too often unwittingly fallen into the trap of believing that the computer really and truly understood their problems. To that extent, the program passed what has become known as 'the Turing test', or what Turing himself called 'the imitation game'.