What Are Semantic Nets? A Little Light History

The concept of a semantic network is now fairly old in the literature of cognitive science, and has been developed in so many ways and for so many purposes in its thirty-year history that in many instances the strongest connection between recent representational formalisms based on networks is their common ancestry. The term 'semantic network' as it is used now might therefore best be thought of as the name for a family of representational schemes rather than a single formalism. A little light history will clarify how the various types of network we shall be reviewing are related to one another.

From the time they first appeared in AI, semantic nets have been closely associated with issues of language understanding: the term itself was coined by Margaret Masterman (1961), in her work on machine translation, to represent meaning relations between conceptual primitives, and was later taken up by Ross Quillian who, in his Ph.D. thesis (1966), introduced it as a way of talking about the organization of human semantic memory, or memory for word concepts. The idea of a semantic network -- that is, of a network of associatively linked concepts -- has a very much longer history, however, dating back at least as far as Aristotle, and re-emerging in writings as diverse as those of the seventeenth-century 'universal language' movement and the Gestalt psychologists, whose concerns were less with language per se than with the ways in which the knowledge underlying language is structured.

In the modern era, the late nineteenth-century British psychologist Francis Galton devised the word-association test as a means of mapping the organization of human memory. The test was subsequently taken up and refined in Germany by the psychologist Wilhelm Wundt, and may well have later influenced theories of word meanings and conceptual organization such as those proposed by the Swiss linguist Ferdinand de Saussure and by Otto Selz. In the word-association test, the subject is prompted with a word, and asked to respond with the first word that the prompt brings to mind. On the basis of the results of such tests, associative networks can be constructed, such as that in figure 6.1. below, in which the circles represent words and the lines connecting circles represent immediate associations of words. The smaller the number of links between two words, the more closely related in meaning the words are. Similarly, if a path can be traced between words in the network, then a meaning relation can be shown to exist between the words.

Figure 6.1: An associative network

It is Quillian's work, however, which marks the true beginnings of semantic networks in AI. Semantic networks were conceived specifically as a "representational format [that would] permit the 'meanings' of words to be stored, so that humanlike use of these meanings is possible" (Quillian, 1968, p.216), and, as with almost all mainline research in semantic nets since Quillian's original proposal, they were intended to represent the non-emotive, so-called 'objective' part of meaning: the properties of things, rather than the way we may feel about them.

Quillian's basic assumption was that the meaning of a word could be represented by the set of its verbal associations. To see what this means, imagine that, in the course of reading a novel, you come across the word 'dugong' and the context doesn't make clear what the word refers to. So you look up the word in a dictionary, and there you find, not the object or the property or the action itself, but rather a definition made up of other words. In the present case, you find:

DUGONG: a herbivorous marine mammal of tropical coastal waters of the Old World, having flipper-like forelimbs and a deeply notched tail fin.

You still have no clear idea of what a dugong is, so you then look up each of the words making up the definition, and in turn each of the words making up the definition of each word in the definition of the original word, and so on, learning that 'herbivorous' means 'feeding on plants; plant-eating', that a 'flipper-like forelimb' is 'a wide, flat limb, as of a seal, adapted especially for swimming', that 'marine' means 'native to or formed by the sea', but that nonetheless a dugong is not a fish but a 'mammal' which is 'a member of the class Mammalia', in turn 'a class of vertebrate animals ... distinguished by self-regulating body-temperature, hair, and, in the female, milk-producing mammae', and so on, and so forth. As you follow through all the cross-references, so you build up a complex picture of the concept named by the word and of its relation to other concepts, say, those of manatee, whale, mammal, animal, life-form.

Clearly, such a mental representation exceeds the mere dictionary definitions of the words you have looked up: semantic networks, as do dictionaries more indirectly, reflect the complex manner in which human knowledge is structured, every concept being defined in terms of its place in a web of relationships between concepts. We might picture a person's knowledge as a map, with points or nodes representing individual concepts and labeled links (called arcs or pointers in some texts) connecting these nodes together. Just as we know where Trafalgar Square is because we know how to get there from Piccadilly Circus, or from Charing Cross, or from St James' Park, so too we know what, for example, a dugong is because we 'know how to get there' from a tail fin, a herbivore, a flipper, and a mammal.

To get some feel for semantic nets, think of a common, but evocative, word, say 'castle'. Write it down in the middle of a sheet of paper. Now think of some words related to castle, say, 'king', or 'battlement'. Write down these words in a ring around 'castle', and join each of them with a line to 'castle'. Try and give each line a label that describes the relationship between the two words -- for example, the line linking 'king' and 'castle' might be labeled as 'lives in'. Continue outwards over the paper, writing down words relating to 'king', words relating to 'battlement', and so on. What you are constructing is, roughly, a semantic net.
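The paper-and-pencil exercise above maps directly onto a simple data structure: a table associating each word with its outgoing labelled links. The following is a minimal sketch; the words and relation labels are the illustrative ones from the exercise, and the function name is our own, not part of any published formalism.

```python
# A semantic net as an adjacency list of labelled links.
# The words and relation labels are illustrative, following the
# 'castle' exercise; nothing here is from a published network.
def relate(net, source, label, target):
    """Record a directed, labelled link from source to target."""
    net.setdefault(source, []).append((label, target))

net = {}
relate(net, "king", "lives in", "castle")
relate(net, "castle", "has part", "battlement")
relate(net, "castle", "defended by", "moat")
relate(net, "battlement", "part of", "castle wall")

# net["castle"] now lists the links fanning out from 'castle',
# just as the lines fan out from the word at the centre of the page.
```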

The foregoing 'map' analogy should not be pushed too far. In Quillian's original semantic networks, a relation between two words might be shown to exist if, in an unguided breadth-first search from each word, there could be found a point of intersection of their respective verbal associations. We would not, by contrast, wish to find a route from Tower Bridge to Trafalgar Square by blindly sending out search parties in all directions from each location in the hope that they might eventually meet! While Quillian's early nets might have looked an attractive psychological model for the architecture of human semantic knowledge, they did not provide an adequate account of our ability to reason with that knowledge.
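Quillian's intersection technique can be sketched as two breadth-first searches run in step, one from each word, halting when their frontiers meet. This is a toy reconstruction under our own naming, not Quillian's program; the net is simply a dictionary mapping each word to a list of associated words.

```python
from collections import deque

def intersection_search(net, word_a, word_b):
    """Breadth-first search outward from both words at once;
    return the first word at which the two frontiers meet,
    or None if the searches exhaust the net without meeting."""
    if word_a == word_b:
        return word_a
    seen = {word_a: {word_a}, word_b: {word_b}}
    queues = {word_a: deque([word_a]), word_b: deque([word_b])}
    while queues[word_a] or queues[word_b]:
        for start, other in ((word_a, word_b), (word_b, word_a)):
            if not queues[start]:
                continue
            node = queues[start].popleft()
            for neighbour in net.get(node, []):
                if neighbour in seen[other]:
                    return neighbour  # the frontiers have met
                if neighbour not in seen[start]:
                    seen[start].add(neighbour)
                    queues[start].append(neighbour)
    return None

# With a toy net, searches from 'dugong' and 'canary' meet at
# 'animal', establishing a meaning relation between the two words.
toy_net = {"dugong": ["mammal", "sea"], "mammal": ["animal"],
           "canary": ["bird"], "bird": ["animal"]}
```

The sketch also exhibits the inefficiency the analogy warns against: the search fans out blindly in all directions from both words, which is why unguided intersection search is a poor model of directed inference.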

A couple of years later the psychologist Allan Collins, working with Quillian, conducted a series of experiments to test the psychological plausibility of semantic networks as models, both of the organization of semantic memory and of human inferencing. The networks they used, such as that in figure 6.2., now gave far greater prominence than before to the hierarchical organization of knowledge. (Don't worry too much on a first reading about trying to understand what the network in figure 6.2. means; it will become clearer on a second reading, after the network formalism has been explained.)

The network is displayed as a taxonomic tree or isa hierarchy (a term we introduced briefly in chapter 3). Each node (represented as an oval in the figure) is connected upwards to its superclass and downwards to its subclass. A canary, in this schema, is a bird and, more generally, an animal. A shark, too, is shown to be an animal, but it is not a bird, as there is no link up from the 'shark' to the 'bird' node. An ostrich is a bird because, like the canary, it is one of the children of the 'bird' node. The links sideways from each node state properties that are typically true of the class named at the node -- that birds can fly, for example, or that canaries can sing -- and properties of higher nodes are inherited by the lower nodes to which they are connected unless there is a property attached to a lower node that explicitly overrides it. Thus, we may infer from the tree that canaries can fly because birds typically can fly, whereas we are inhibited from making the same inference for ostriches since there is the explicit statement at the 'ostrich' node that it 'can't fly'.
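The climb-and-inherit behaviour just described can be sketched in a few lines. The node names and properties below follow the canary/ostrich example; the dictionary layout and function name are our own choices, not Collins and Quillian's notation.

```python
# Each node names its superclass ('isa') and its local properties.
# Properties follow the canary/ostrich example in the text.
nodes = {
    "animal":  {"isa": None,     "props": {"has skin": True}},
    "bird":    {"isa": "animal", "props": {"can fly": True}},
    "canary":  {"isa": "bird",   "props": {"can sing": True}},
    "ostrich": {"isa": "bird",   "props": {"can fly": False}},  # override
    "fish":    {"isa": "animal", "props": {"can swim": True}},
    "shark":   {"isa": "fish",   "props": {"can bite": True}},
}

def lookup(node, prop):
    """Climb the isa links from node upwards; the first node that
    mentions prop wins, so lower nodes override higher ones."""
    while node is not None:
        if prop in nodes[node]["props"]:
            return nodes[node]["props"][prop]
        node = nodes[node]["isa"]
    return None  # no node on the chain mentions the property

# lookup("canary", "can fly")  -> True   (inherited from 'bird')
# lookup("ostrich", "can fly") -> False  (overridden at 'ostrich')
```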

Figure 6.2: A taxonomic tree. After Collins and Quillian (1969).

Collins and Quillian's experiments consisted in presenting subjects with sets of true and false sentences, and measuring their reaction time in deciding whether the sentences were true or false. Taking the number of links to be traversed between two nodes as the measure of the semantic distance between the concepts, they predicted that a person would require more time to decide, for example, that "A canary is an animal" or "A canary has skin" than to decide that "A canary is a bird" or "A canary can sing", since in the former cases the search for the relevant information requires rising through more links in the hierarchy. The experimental results met their predictions. Table 6.1. illustrates the kinds of stimulus sentence, with approximate reaction times, that were used in these experiments. While subsequent research on reaction times to sentences, such as that by Conrad (1972), by Rosch (1977, 1983), and by Smith, Shoben and Rips (1974), raised doubts about the soundness of the model in the form it then had, Collins and Quillian's hierarchical nets were an important source of many good ideas for, and the direct forerunners of, more recent networks, particularly in the domain of language understanding (which we shall consider again in section 3 below).

Table 6.1. Reaction times for sentences with differing semantic distances. From Collins and Quillian (1969).

True sentences                      Mean reaction time (sec)
S0   A canary is a canary           1.00
S1   A canary is a bird             1.17
S2   A canary is an animal          1.23
P0   A canary can sing              1.30
P1   A canary can fly               1.38
P2   A canary has skin              1.47
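The link-counting measure behind these predictions can be made concrete. In this sketch (our own code, using the canary hierarchy described above), semantic distance is simply the number of isa links climbed from one node to a node above it, matching the S0/S1/S2 ordering of the sentence stimuli.

```python
def semantic_distance(isa, node, ancestor):
    """Count the isa links climbed from node up to ancestor;
    return None if ancestor does not lie above node."""
    steps = 0
    while node is not None:
        if node == ancestor:
            return steps
        node = isa[node]
        steps += 1
    return None

# A fragment of the canary hierarchy, each node naming its superclass.
isa = {"canary": "bird", "bird": "animal", "animal": None}

# semantic_distance(isa, "canary", "canary") -> 0  (S0: no links to climb)
# semantic_distance(isa, "canary", "bird")   -> 1  (S1)
# semantic_distance(isa, "canary", "animal") -> 2  (S2: slowest of the three)
```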

Around the same time as Collins and Quillian were working on their semantic net representations of word meanings, others were using net-like structures to express types of complex knowledge, though often without Quillian's emphasis on cognitive plausibility. Patrick Winston at MIT, for example, designed a program which, from net-like structural descriptions of physical structures such as an arch, could learn the concept of an arch; Jaime Carbonell adapted Quillian's networks as a data structure for a program called SCHOLAR, which gave tuition on the geography of South America. Notable recent extensions of the semantic net formalism have been John Sowa's (1984) 'Conceptual Graphs', a formalism that replaces isa links with type labels indicating the class or type to which a named concept belongs, and Ronald Brachman and his colleagues' KL-ONE (Brachman and Schmolze, 1985).