An abridged version of this paper appeared in R. Ennals and P. Molyneux (eds.), Managing with Advanced Information Technology. London:Springer-Verlag. 1993

3. AI and KBS in the 1980s and 1990s: Looking Towards the Future

3.1. New Initiatives

Until around 1980, the AI enterprise was by and large confined within a small number of university departments and research laboratories. Although a great deal of quite outstanding basic research work was, from the early 1960s, being carried out at prestigious institutions such as Stanford University, the Massachusetts Institute of Technology, and Carnegie-Mellon University in the United States, and Edinburgh and Sussex universities in Britain, the work had had very little impact on the world of business and industry, had produced virtually no marketable products, and had excited very little public interest or awareness. In Britain, in particular, AI had suffered a severe setback in 1973 with the publication of the Lighthill Report, which drew attention, not entirely unfairly, to the "lamentable failure" of its actual achievements to match up to the predictions for the discipline that had been made in the preceding twenty-five years.

The dramatic turn-around came in 1981 when Japan's Ministry of International Trade and Industry (MITI) announced the establishment of its Fifth Generation Computer Systems project -- a ten-year programme of advanced research and development in `Knowledge Information Processing Systems' (KIPS), with industrial collaboration from Fujitsu, Hitachi, NEC, Mitsubishi, Toshiba, Matsushita, Oki, and Sharp. The Japanese government would invest $450 million, with industry matching and perhaps doubling that figure. The following year saw the establishment of the Institute for New Generation Computer Technology (ICOT) in Tokyo to guide the R&D programme, Phase One (1982-5) of which was to be chiefly concerned with speech and picture understanding, Phase Two (1985 on) with parallel computing, expert systems and expert system tools.

Still smarting from Japan's market successes in the automobile and electronics industries and now fearing Japanese ascendancy in computer technology also, Britain and the United States were jolted into action. The US response came in the form of the `Strategic Computing Plan' (October, 1983) of the Defense Department's Advanced Research Projects Agency (DARPA); the formation of the Microelectronics and Computer Technology Corporation (MCC) to carry out collaborative research in computer-aided design, image processing and expert systems, including the monumental ten-year CYC project; and heightened activity in major universities and in the research laboratories of organisations such as Bolt Beranek and Newman, Stanford Research Institute, Xerox PARC, and others.

The UK's response, following the recommendations of the Alvey Committee (chaired by John Alvey of British Telecom) in mid-1982 for a national programme for Advanced Information Technology, was a budget of £350m over five years for research in software engineering, man-machine interfaces, VLSI, and intelligent knowledge-based systems. Government would contribute two-thirds of the sum; industry was to contribute the remainder, together with the costs of transforming the outcomes of the research programme into marketable products. Thus was born, in 1983, the Alvey Programme, and its AI component, Intelligent Knowledge-Based Systems. As well as a number of small projects in specific areas of AI, four large `demonstrator' projects were eventually established in the IKBS area: a voice-driven word-processor and desktop workstation; a knowledge-based decision support system for the Department of Health and Social Security; a range of mobile information systems, including a route guidance system and a mobile electronic office; and a `design-to-product' system to demonstrate the automation of the total production process from inception through manufacture to field maintenance. (See Oakley & Owen, 1989, for a detailed assessment of the Alvey Programme.)

Excitement was spreading throughout Europe: 1985 saw the launch of the European Strategic Programme for Research and Development in Information Technology (ESPRIT) to promote European co-operation in software engineering, microelectronics, robotics, advanced production techniques, and artificial intelligence. The programme, now in its third phase, is supporting major research projects in intelligent knowledge-based systems, multimedia, intelligent databases, machine translation, intelligent full-text browsing, neural networks, intelligent information selection and delivery, and many other advanced IT areas.

Meanwhile, in 1986, Japan's Key Technology Center invested more than $100 million in the Japan Electronic Dictionary Research Institute, Ltd., for a seven-year programme of research in Natural Language Processing and Knowledge-Base Inference. The principal application areas envisaged in the project were intelligent word processing, intelligent office automation, machine translation, speech understanding, expert systems, computer-aided design, computer-aided manufacturing, decision-support systems, and computer-assisted instruction.

The consequent growth has been phenomenal. In the area of expert systems alone, there were, in 1984, fewer than twenty such systems in real operational use; by 1988 there were over 1400 expert systems in commercial use and a further 8000 under development; today, with good shells now commercially available for microcomputers, the number of systems in everyday use is beyond count. Artificial Intelligence has become a major industry, with a turnover of over half a billion dollars per year.

The following sections look briefly at just two of the technologies in which there is currently enormous interest; sections 3.4. and 3.5. focus on expert systems.

3.2. Language Technologies

Possibly the first non-numerical application of computer science, and certainly the motivation for the development and implementation of one of the first high-level programming languages (COMIT, 1957), was the machine processing of 'natural' (i.e., human) languages. Andrew Booth, a Nuffield Fellow at London's Birkbeck College, had discussed the possibility of creating electronic dictionaries with Alan Turing, in the context of the latter's work in cryptanalysis at Bletchley Park; Booth later met Warren Weaver, at that time a vice president of the Rockefeller Foundation, and from these meetings came the suggestion of Machine Translation (MT) as an application of the new computing technology. In 1948 Booth collaborated with Richard Richens on the first (French to English) MT system.

From those beginnings, MT has developed into a thriving multi-million-dollar industry. Although the use of -- and indeed, the perceived need for -- commercial machine translation in business and industry has until now been limited, such systems have for many years been in use by governments and public bodies. The Commission of the European Communities has, since the mid-1970s, been using the SYSTRAN system for in-house translation, while the Canadian Weather Service has for as many years been using the TAUM-METEO system to translate daily weather forecasts from English to French.

Nearer to the day-to-day interests of business and industry, however, are the tools that, by enabling users to communicate naturally through ordinary speech or typed text, will bring advanced information technology within everybody's grasp. These include natural language interfaces to databases and expert systems, text editing tools, and document content-scanning systems which will skim and summarise the important points in texts.

But perhaps the most exciting innovation in language technology is the `talkwriter' -- a dictation system that will convert spoken free-format language input into machine-readable text without the need for keyboard input. In other words, a typewriter or word processor that one can use to create documents quite simply by talking to it. A fully functional talkwriter would need to be capable of accepting, without prior training, natural (continuous) speech from any speaker, with an ability to process unlimited vocabulary and grammatical structures and to produce accurate, high-quality textual output. General-purpose free-text dictation systems of this kind, however, are still a little way off, although the current state of the technology suggests around 1995 for the first useful commercial continuous-speech talkwriters.

Yet a number of tolerably good systems already exist. DragonDictate (manufactured by Dragon Systems, Boston), launched in March 1989, for example, has been successfully marketed both to general business users and to the disabled market (around 300 sales in each sector). Speaker-independent and capable of accepting unrestricted syntax, it has an active built-in vocabulary of 25,000 words, with space for 5000 more to be added by the user; it also allows access to a 50,000-word dictionary for retrieval when spelling words. `Writing' and editing are both voice-activated, and the system is always in `learning mode', so that a correction by the user (e.g., "choose five" from a list of alternatives offered by the system) causes the system to adjust its pairing of signal (digitised wave-form) to word, and to `remember' the new pairing. User reactions to date have been impressive enough for a British insurance company to be considering giving portable versions of DragonDictate to its loss adjusters, thus cutting out the time-consuming process of getting dictaphone tapes typed, checked and corrected.

3.3. The 'Electronic Brain'

When, in 1946, Lord Mountbatten, then President of the Institution of Radio Engineers, talked about the development of "an electronic brain, which would perform functions analogous to those at present undertaken by the human brain", he clearly had in mind an architecture that would mimic the neural architecture of the brain itself:

It would be done by radio valves, activating each other in the way that brain cells do ... now that the memory machine and the electronic brain were upon us, it seemed that we were really facing a new revolution; not an industrial one, but a revolution of the mind, and the responsibilities facing the scientists today were formidable and serious. (The Times, November 1st, 1946; quoted in Boden et al., 1989. Italics mine.)

Connectionism, parallel distributed processing, neural networks, neural computing, and neurocomputing are all terms used more or less interchangeably to label an approach to artificial intelligence whose basic assumption is that intelligence emerges from the interaction of large numbers of very simple, neuron-like processing units. This sets it against the alternative `algorithmic' approach which, until the mid-1980s, had been the dominant paradigm in AI research.

Neural networks offer a number of advantages over conventional AI systems. They can act as content-addressable memories: like human beings, they can access information in memory on the basis of nearly any attribute, or set of attributes, of the item they are trying to retrieve. They are resistant to low-level `noise'; they are robust and (because representations are distributed across the system, rather than localised) degrade gracefully when damaged. They can learn (or `be trained'): learning is a matter of the network adjusting its `weights' in response to inputs, and many networks have in fact learned to perform tasks that have proven difficult for traditional AI. Further, they can make default assumptions in a natural way, by making rational `guesses'; and they can produce spontaneous generalisations, beyond the training set instances, as a natural by-product of the memory retrieval process.
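
To make the idea of weight adjustment concrete, here is a minimal sketch, in Python, of the kind of learning just described; the single-unit (perceptron-style) architecture, the data and the learning rate are invented for illustration, and are not drawn from any particular system discussed in this paper:

# A single neuron-like unit `learns' by nudging its weights whenever
# its output disagrees with a training example.
def train(examples, n_inputs, rate=0.1, epochs=25):
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            # The unit fires (outputs 1) if the weighted sum exceeds zero.
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            # Adjust each weight in proportion to the error and its input.
            error = target - output
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Train the unit to reproduce logical AND from four labelled examples.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
print(train(examples, n_inputs=2))

No rule for computing AND is ever written down; the correct behaviour emerges solely from the repeated adjustment of the weights.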

The range of potential applications is, by virtue of their immense flexibility, bewilderingly wide -- from face recognition through speech processing to generating spontaneous profiles of a typical cocaine smuggler! They are still a `new' technology, however. Having fallen into neglect as a research area for almost twenty years, neural computing has enjoyed a funding-led renaissance only since the mid-1980s, and only in the past couple of years have real-world applications been studied. In 1986, however, DARPA launched a six-year research initiative with a $390 million budget; by the mid-1990s we should be seeing the commercial pay-off of the technology.

3.4. Expert Systems

Undoubtedly the most important development in AI -- commercially, at least -- has been Expert Systems: computer programs encapsulating, in the form of hundreds or even thousands of rules, the knowledge and reasoning skills of one or more human experts, so as to enable the program to operate at the level of the human expert in some specialist domain. The technology arose out of a discovery which was probably first made around 1960, and which crept into AI around the mid-1970s, but whose power was not fully recognised until around 1980. Marvin Minsky, of MIT, summarises the discovery in the following words:

today's expert systems demonstrate a marvellous fact we did not know twenty-five years ago: if you write down if-then rules for a lot of situations and put them together well, the resulting system can solve problems that people think are hard.
[Minsky, 1984: 244]

An `if-then' rule is simply a statement of conditions (the `if' part) followed by the conclusions or actions that follow if the conditions are satisfied. Here are a couple of examples:

if
    (1) determination how to acquire the asset is known; and
    (2) put any OBJRULES which meet the condition: lease is to be a modifiable option lease is mentioned in the rule into SET-1, and
    (3) put any OBJRULES which meet the condition: lease is to be a straight lease is mentioned in the rule into SET-1,
then
    DOBEFORE is assigned the values: (VALUE-OF-SET-1)

if
    CANNOT-BORROW and PRESERVES-CREDIT and PRESERVES-CASH,
then
    FINANCE-IT = TXTG1; UTILITY: 5

Expert systems need not store information uniquely in the form of `if-then' rules, however. The PROSPECTOR system, for example, which analyses geological data, codes much of its knowledge in a `semantic net' -- a tightly structured network of concepts and the links between them; other systems have used `frames' or a computable form of predicate logic. As human knowledge takes many different forms, so there are many different ways of enriching rule-based systems so as to encode that knowledge in a computationally tractable form.
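
By way of illustration, the following minimal sketch, in Python, represents a fragment of a semantic net as labelled links between concepts; the geological `facts' here are invented for the purpose and are not taken from PROSPECTOR itself:

# Each entry links one concept to another under a named relation.
links = {
    ("pyrite", "is-a"): "sulphide mineral",
    ("sulphide mineral", "is-a"): "mineral",
    ("pyrite", "found-in"): "hydrothermal deposit",
}

def is_a(concept, category):
    # Answer category queries by chasing is-a links up the hierarchy.
    while concept is not None:
        if concept == category:
            return True
        concept = links.get((concept, "is-a"))
    return False

print(is_a("pyrite", "mineral"))      # True, via two is-a links
print(links[("pyrite", "found-in")])  # direct retrieval of a property

The point of the structure is that knowledge which would be awkward to state as a flat list of rules -- hierarchies, properties, associations -- falls naturally out of following the links.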

And this brings us back to human beings and the attraction of expert systems.

Human experts by definition have specialised knowledge in their field, acquired both from extensive training and, more importantly, from years of practical experience of dealing first-hand with real problems. For this reason, experts tend to be scarce, expensive, invariably busy, and fallible; they are also, of course, mortal. In-house experts, too, may move on, or retire; and they take their expertise with them. Consequently, computer systems that are able to provide on-line expert assistance to people working on knowledge-intensive tasks are of inestimable value. As `consultants' in management, finance, strategic planning, medicine, engineering, computer-system configuration, and insurance and investment (to name but a few areas where expert systems are in regular use), such systems are capable of providing the support of expert knowledge that is relatively cheap (a full-time human consultant commands a much higher salary!), reliable (humans do make mistakes), portable (human experts are sometimes too busy to come on call), and untiring (human experts have to sleep sometimes; eventually they die). And, because of the modularity of such systems, they can be extended to become more proficient than any single human expert whose knowledge has been `written into' the rulebase.

Yet human experts do not simply apply their knowledge to problems; they can also explain exactly why they have made a decision or reached a particular conclusion. This facility is also built into the expert systems that simulate human expert performance: they can be interrogated at any moment and asked, for example, to display the rule they have just used or to account for their reasons in using that rule. That a system is able to explain its reasoning is in itself no guarantee that the human user will understand the explanation: if the advice is to be of use, it is important that the system be able to justify its reasoning process in a cognitively plausible manner, by working through a problem in much the same way as a human expert would. Expert systems can be made to reason either forwards, from initial evidence towards a conclusion, or backwards, from a hypothesis to the uncovering of the right kind of evidence that would support that hypothesis, or by a combination of the two. One significant factor which will determine whether a system will use forward or backward reasoning is the method used by the human expert; it is, as much as anything else, this flexibility, and consequently the ability of the system to simulate human reasoning processes, that makes the expert system such a crucially important new technology.
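
A minimal sketch of forward chaining, in Python, may help to fix the idea; the rules below are invented, loosely echoing the leasing example given earlier, and do not reproduce any actual system's rulebase:

# Each rule pairs a set of conditions with a single conclusion.
rules = [
    ({"cannot-borrow", "preserves-credit", "preserves-cash"}, "lease-it"),
    ({"lease-it", "short-term-need"}, "straight-lease"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # A rule fires when all its conditions are established facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

evidence = {"cannot-borrow", "preserves-credit", "preserves-cash",
            "short-term-need"}
print(forward_chain(evidence, rules))
# Both rules fire in turn, deriving lease-it and then straight-lease.

Backward chaining would run the same rules in the opposite direction: starting from the hypothesis `straight-lease', the system would find a rule concluding it and then attempt to establish each of that rule's conditions in turn, either directly from the evidence or via further rules.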

3.5. When is a Knowledge-Based System appropriate? Identifying Expert System applications

Among the questions you will reasonably be asking here are: How do I know what is an appropriate application domain for knowledge-based systems? How do I get a feasibility and requirements analysis done? What kinds of knowledge are appropriate for what kinds of problems? Who are the experts? In general, an expert system may well be the appropriate technology for the kinds of tasks already mentioned above: project management, tax assessment, fault diagnosis, intelligent data analysis, financial planning and analysis, training, and so on. It may often be the case, too, that AI techniques can offer optimum solutions to non-AI tasks.

In more specific terms, Teknowledge Inc. suggests eight common situations where a knowledge-based system can be of value:

Excessive demands on human experts
The knowledge required to perform a particular task effectively is available only at a central location. Requests for advice are channelled to a small group of people who are always in demand. E.g., a key product design team may be spending an excessive portion of their time on the phone advising repair personnel.
Inaccessibility of expertise
A written document or flowchart is intended to facilitate the use of a program, procedure, or piece of equipment. However, it is so long and detailed that it is useless in practice. In actuality, the users develop folklore-like methods for accomplishing the task. They rely excessively on previous methods that were determined empirically to work. E.g., a flexible and sophisticated computer simulation program goes unused in favour of building expensive models because its user manual fills a shelf of three-ring binders.
Experts involved in time-consuming routine work
An organisation turns away work or loses business to competitors because an overworked human expert is required to make judgements or recommendations, even in routine cases. E.g., to reduce unnecessary, expensive work, a locomotive repair centre requires a supervisor to approve all diagnoses and recommendations before work is undertaken. Although costs are controlled, the average down time increases to a point where it is economical for customers to tow broken equipment to other centres for repair.
High (re)training overheads
Due to turnover in equipment or personnel, an excessive amount of time is spent training rather than doing. E.g., a company updates its line of test equipment each year, and field engineers must spend an average of two months annually attending training sessions.
Continuous routine monitoring
A large amount of mainly routine data must be scanned by a highly trained expert on a continuing basis. E.g., a high-energy physics laboratory employs a crew of 10 people to look for rare events in bubble chamber images.
Monitoring/integration of diverse information sources
A variety of information from heterogeneous sources must be monitored and integrated to determine the possibility of an important event. E.g., a government agency must constantly examine information from multiple sources to determine if a military threat is present.
Fast rational expert judgements are essential
A critical judgement must be made in a very short time interval to avoid a potential disaster. E.g., a nuclear power plant control centre must decide quickly to shut down or cut back a particular unit when a potential problem is detected.
Optimal solutions are prohibitively expensive
An optimal solution to a routing, planning, or configuration task is too expensive or time-consuming to determine. Instead, a minimally effective process of guesswork has been substituted. E.g., a computer company has to configure orders for its equipment, with the proper cables, components, and mounting arrangement, on an individual basis. Errors and delays in this process become a serious problem as orders increase.
(Adapted from `Evaluating Knowledge Engineering Applications,' Knowledge Engineering, Teknowledge Inc., Palo Alto, 1983; quoted in Rauch-Hindin, 1987, pp.41-42).

Consider, as an example, the case of project management, which is patently a knowledge-based task satisfying an important subset of the above criteria. The problem with standard project management techniques, such as PERT and CPM, is that, although they indicate a theoretical critical path, they do not themselves manage projects. Projects are managed by people, and it is people who have to come to terms with the delays that may result from the often unanticipated complexity of a project and the interdependencies among its various aspects. Good project managers consequently command enviable salaries, and their skills are highly valued; yet it appears that many of those skills can be automated, thus both enhancing the performance of the manager and providing instant feedback to engineers and designers working on the project. A pilot knowledge-based project management system, Callisto, developed by Carnegie-Mellon University and Digital Equipment Corporation, has been designed to take account of the interactions that occur during different phases of the project life cycle by scheduling activities to accomplish some task, monitoring the status of parallel activities to ascertain both plan and schedule changes required to meet project goals, and managing engineering change orders. This quite clearly distinguishes Callisto -- an `intelligent' system with some degree of agentivity -- from more passive conventional software such as Claris's CPM-based MacProject which, although it will neatly construct your schedule and spot the initial problems, will do no more than that.
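
To underline just how passive such conventional tools are, here is a minimal critical-path calculation in Python; the four-task project and its durations are invented for illustration. The computation finds the longest duration-weighted path through the dependency network -- and, as noted above, does no more than that:

# Task durations (in days) and the dependency structure of the project.
durations = {"design": 5, "order-parts": 3, "build": 7, "test": 2}
depends_on = {"order-parts": ["design"],
              "build": ["design", "order-parts"],
              "test": ["build"]}

memo = {}
def earliest_finish(task):
    # A task can finish only after its slowest prerequisite has finished,
    # plus its own duration; the project length is the maximum over tasks.
    if task not in memo:
        start = max((earliest_finish(p) for p in depends_on.get(task, [])),
                    default=0)
        memo[task] = start + durations[task]
    return memo[task]

print(max(earliest_finish(t) for t in durations))
# 17 days: the critical path is design -> order-parts -> build -> test.

Everything a Callisto-style system adds -- monitoring parallel activities, revising the schedule, managing change orders -- lies outside this calculation, which is precisely the gap that knowledge-based project management aims to fill.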