Cognitive Engineering (1988)

G. Mancini, D. D. Woods & E. Hollnagel (Eds.) (1988). Cognitive engineering in complex dynamic worlds. London: Harcourt Brace Jovanovich.

Preface
The contents of this book address the problem of how we should deploy the power available through developments in computational technology (e.g., object-oriented programming, expert systems, natural language interfaces) to assist human performance in complex problem-solving worlds, i.e., cognitive engineering. Effective use of computational power depends on understanding the problems to be solved and how people solve, and fail to solve, these problems. This understanding will lead to principle-driven, rather than merely technology-driven, development of computational possibilities for intelligent and effective decision support, and can affect the very nature of the computational tools that are needed (e.g., techniques for reasoning about uncertainty; semantic reasoning engines).
The book contains a series of papers by international researchers (17 authors from eight countries) that address a wide range of fundamental questions in cognitive engineering, such as: what should be the relationship between the machine and human portions of a human-machine problem-solving ensemble? How do people use today's support systems? What are the sources of performance failures in complex systems? What support do people need to function more expertly? How can machine reasoning help construct support systems for complex and dynamic worlds? Where do we need to extend machine-reasoning capabilities to cope with the demands of complex and dynamic worlds?
Cognitive tools: Instruments or prostheses?
The theme of this section of the book is cognitive engineering: the problem of how we should deploy the power available through developments in computational technology and artificial intelligence to assist human performance. Advances in machine intelligence raise the question of what the relationship should be between the machine and human portions of a human-machine problem-solving ensemble. Each of the papers in the section examines some aspect of this question, and all point to ways to better couple machine power and human intelligence to improve performance in complex domains.
One metaphor that can be used to describe the challenge facing cognitive engineering is to view the new computer systems as tools and the human practitioner as a tool user. The question of the relationship between machine and human then takes the form: what kind of tool is an intelligent machine? At one extreme, the machine can be a prosthesis that compensates for a deficiency in human reasoning or problem solving, either a local deficiency in the population of expected human practitioners or a global weakness in human reasoning. At the other extreme, the machine can be an instrument in the hands of a fundamentally competent but resource-limited human practitioner. The machine amplifies the powers of the practitioner by providing increased or new kinds of resources (either knowledge resources or processing resources such as an expanded field of attention). The extra resources may allow restructuring of how the human performs the task, shifting performance onto a new, higher plateau. While this dichotomy is partly polemic, it provides one way to frame the problems that confront cognitive engineering.
To achieve the goal of enhanced performance, cognitive engineering must first identify the sources of error that impair the performance of the current problem-solving system. Reason, in “Cognitive Aids in Process Environments: Prostheses or Tools?”, provides a summary of the cognitive underspecification approach to human error. Human cognition can make do with incomplete or highly uncertain input by filling in the gaps with high-probability, contextually relevant material based on experience. However, the result of this process is problematic, especially in novel situations where the person by definition has limited experience. Reason also articulates a Catch-22 that is at the heart of cognitive engineering and system design in risky industries: risky situations involve the conjunction of several circumstances unanticipated by the designer, and the human-in-the-loop must compensate for these unanticipated situations. However, the person will use knowledge or strategies based on past experience, which will not necessarily be relevant in novel cases.
Reason notes that human errors arise from mismatches between the properties of the technical system and the properties of the human cognitive system. These mismatches turn what are normally strengths of human cognition into weaknesses. Introducing new technology without understanding its impact in a larger sense can expand “artificially induced” error tendencies. Reason points out that there are limits to human flexibility in dealing with situations that have not been anticipated or experienced before, what Woods in his commentary “Cognitive Engineering in Complex and Dynamic Worlds” calls unexpected variability. As a result, Reason claims that cognitive tools in the sense of prostheses are needed to offset the artificially induced human error tendencies. However, the study by Roth and her colleagues shows that even the latest machines can be extremely brittle problem solvers when the situation departs from those the designer expected. While there are many ways to plan for and prevent trouble from arising, we need cognitive engineering techniques that help human and machine problem solvers function successfully in the face of unanticipated variability, that is, to manage trouble. Ultimately, the path out of the Catch-22 may be to develop new ways to couple human and machine intelligence that go beyond either a highly automated or a highly manual system, a major challenge for cognitive engineering.
Researchers and tool builders are usually concerned with the next system or technology, so we seldom ask how domain practitioners make use of today's support systems. De Keyser, in “How can Computer-Based Visual Displays Aid Operators,” provides an overview of a project that explicitly looked at how computer-based display technology has been used in the field. This project is unique because she and her colleagues examined several kinds of complex and dynamic worlds, and because they examined the same industrial plant longitudinally, both when the computer-based display technology was first introduced and again three years later. De Keyser uses some of the results from this project to consider explicitly how display technology can assist the practitioner. She places great emphasis on the power of analogical representations: displays that perceptually integrate data to show, rather than tell, the user the critical information needed for good performance.
At one level, the study by Roth, Bennett and Woods on “Human Interaction with an 'Intelligent' Machine” is a rare opportunity to observe how actual practitioners make use of a new AI-based system when solving actual problems under actual work conditions. The study explicitly considers the performance of the human-machine ensemble. The results show that the standard approach to expert system design, in which the user is assigned a passive data-gathering role in service of the machine (the machine as prosthesis), is inadequate. Problem solving in this situation, as in complex situations in general, was characterized by novel situations outside the machine's competence, special conditions, and underspecified instructions, all of which required substantial knowledge and active participation on the part of the human practitioner. But at another level, these results sound all too familiar: similar breakdowns in performance have been observed in the past with other kinds of technologies. To correct or avoid these troubles, a strong, viable cognitive engineering is needed.
An increased ability to detect and correct errors, which is one part of error-tolerant systems, is often cited as one way that new computational power can assist performance. However, there have been few studies of people's ability to detect their own or another's (either another person's or a machine's) errors. The paper by Rizzo, Bagnara and Visciola on “Human Error Detection Processes” provides one investigation that helps to close this gap in our knowledge. They studied how people detect errors that they have committed in the context of a task that is relatively simple given the scale of many of the target applications of the papers in this book. Their investigations are closely tied to current thinking about taxonomies of human error (e.g., slips, rule-based mistakes, knowledge-based mistakes) and are required if cognitive engineering is to provide systems that enhance error recognition. For example, the attentional and error detection processes that Rizzo et al. investigate are important because of the role that they play in fixation-prone or fixation-resistant behaviour in natural problem-solving habitats. Several investigators have found that this type of error, where behaviour persists in the face of discrepant evidence, is an important part of human performance in complex worlds. Among many interesting results, this study establishes the importance of what Rizzo calls error-suspicious behaviour, especially in detecting cases of erroneous intentions (mistakes). Error-suspicious behaviour is defined as cases where the human problem solver exhibits perplexity about system behaviour without suspecting that a specific type of erroneous result has occurred. It is interesting to speculate about the information from the world, the kind of experience of the world, and the internal knowledge structures that support this kind of behaviour.
One technique for framing questions about the relationships between human and machine intelligence is to examine human-human relationships in multi-person problem-solving or advisory situations. Muir, in her paper on “Trust between Humans and Machines,” takes this approach by examining the concept of trust and models of trust in human relationships. She uses the results to define trust between human and machine problem solvers and to propose ways to calibrate the human's trust in decision aids.
Framing the relationship of human and machine problem solver in this way leads to several provocative questions. One is: how does the level of trust between human and machine problem solvers affect performance? If the practitioner's trust in the machine's competence or predictability is miscalibrated, the relationship between human and machine can be corrupted in two directions: either the system will be underutilized or ignored when it could provide effective assistance, or the practitioner will defer to the machine even in areas that challenge or exceed the machine's range of competence. Another question is: how is trust established between human and machine? Trust or mistrust is based on cumulative experience with the other agent, which provides evidence about enduring characteristics of the agent such as competence and predictability. This means that how new technology is introduced into the work environment can play a critical role in building or undermining trust in the machine. If this stage of technology introduction is mishandled (for example, practitioners are exposed to the system before it is adequately debugged), the practitioner's trust in the machine's competence can be miscalibrated. Muir's analysis shows how variations in explanation and display facilities affect how the person will use the machine, by affecting the person's ability to see how the machine works and therefore their level of calibration. Muir also points out how human information-processing biases can affect how the evidence of experience is interpreted in the calibration process.
Boy takes up the question of the relationship between human and machine problem solver in the context of the design of decision aids in the paper “Operator Assistant Systems.” Boy uses current paper-based procedures and operations manuals as the starting point in looking towards new support systems. Behind his approach is the idea that the transition from paper-based to computer-based support systems is a shift in media. Performance aiding requires that one focus on the cognitive support functions that a system can provide. Different kinds of media or technology may be more powerful than others in that they enable or enhance certain kinds of cognitive support functions. Different choices of media or technology may also represent trade-offs between the kinds of support functions that are provided to the practitioner.
Boy discusses several different levels of assistance, where technology is used to create different cognitive tools. What is different about cognitive tools, as compared to other kinds of tools, is that, given today's computational technology, cognitive tools can have some level of autonomy. The levels of assistance reflect cognitive tools with different levels of autonomy. In this regard Boy is primarily drawing on the supervisory control metaphor in discussing the relationship of human and intelligent machine. The intelligent machine can be thought of as a control agent that can have different kinds of relationships to the human agent. Note that as the machine's level of autonomy increases, so does the degree to which the human is taken out of the loop. This brings us back to Reason's discussion of the Catch-22 of supervisory control in risky work environments and to Roth et al.'s concept that the ability to handle unexpected variability is the measure of the effectiveness of a cognitive system. People are in the loop to handle the unexpected; the cognitive engineering question remains what knowledge and processing resources they need to fulfil this role, and how these resources can be provided.
Reasoning and intelligence
This final section of the book contains six contributions, assembled under the heading of “Reasoning and Intelligence.” As such, they complement the more application-oriented contributions in the two previous sections. Despite being more theoretical in their approach, these contributions are in no way characteristic of the free-floating theorizing of the classical academic tradition. One of the most tangible benefits of trying to apply AI methodology and techniques to practical problems, as we see in the field of expert systems and Intelligent Decision Support Systems (IDSSs), has been a narrowing of the gap between theory and practice. Unlike other scientific fields, such as economics and mathematics, in the case of “intelligent” computers there has not been any practice without the theory. By that I mean that the practical application of AI required the existence of machines and theories (as programs), whereas in other fields the practice often predates the theories and develops following its own laws.
Despite having a common theme, the six contributions in this section are quite diverse. The first contribution, by Gallanti et al., addresses a very real and very important problem. One of the consequences of trying to build AI systems has been the realization that knowledge comes in many forms. Two of these, perhaps the most elementary, have been named shallow and deep knowledge. Shallow knowledge is basically empirical, i.e., based on operating experience, while deep knowledge is more formal, e.g., based on design knowledge, physical laws, or first-order functional principles. Although it would be misleading to consider the two types of knowledge as sharply distinct or even opposed to each other, they are used for different purposes and hence may need different treatment. The Gallanti et al. paper describes various schemes for knowledge representation and demonstrates how these can be combined in a single approach.
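To make the distinction concrete, here is a minimal sketch, in Python, of the two forms of knowledge applied to a hypothetical pump-diagnosis task: shallow knowledge as empirical condition-action rules, deep knowledge as a small first-principles model searched for explanations. The domain, rule contents, and function names are illustrative assumptions and do not reproduce the representation schemes of the Gallanti et al. paper.

```python
# Illustrative contrast between shallow (empirical) and deep (model-based)
# knowledge for a hypothetical pump-diagnosis task. The rules and the model
# are invented for illustration only.

# Shallow knowledge: compiled operating experience as condition-action rules.
SHALLOW_RULES = [
    (lambda s: s["flow"] == "low" and s["noise"] == "high", "cavitation suspected"),
    (lambda s: s["flow"] == "zero" and s["motor_on"], "blocked suction line"),
]

def shallow_diagnose(symptoms):
    """Match observed symptoms against empirical rules; no explanation offered."""
    return [conclusion for condition, conclusion in SHALLOW_RULES if condition(symptoms)]

# Deep knowledge: a (grossly simplified) first-principles model of the plant.
def predicted_flow(valve_open, pump_running, suction_clear):
    """Derive expected behaviour from how the components are connected."""
    return "normal" if (valve_open and pump_running and suction_clear) else "zero"

def deep_diagnose(observed_flow, valve_open, pump_running):
    """Search over unobserved component states for assumptions that explain the observation."""
    candidates = []
    for suction_clear in (True, False):
        if predicted_flow(valve_open, pump_running, suction_clear) == observed_flow:
            candidates.append({"suction_clear": suction_clear})
    return candidates

if __name__ == "__main__":
    print(shallow_diagnose({"flow": "zero", "noise": "low", "motor_on": True}))
    print(deep_diagnose(observed_flow="zero", valve_open=True, pump_running=True))
```

The shallow rules answer quickly but fail silently outside their experience base, whereas the deep model can be interrogated about situations never seen before; this complementarity is what makes combining the two attractive.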
The second contribution looks at what we do not know about IDSSs. This is discussed with reference to three broad categories: (1) functional aspects of IDSSs, (2) models of decision making as a process, and (3) models of decision makers as users. This is followed by some thoughts on the possibilities for validating the functioning of an IDSS, or an expert system in general. The conclusion is that although research, particularly in cognitive science and related areas, has come a long way since the late seventies, there are still many things that we do not know. The design and implementation of an IDSS is still as much a craft as it is a science, and it is likely to remain so for some time, at least if past developments in related fields, e.g., human factors, can be taken as an indicator.
The third contribution, by Coombs and Hartley, takes a close look at the types of reasoning that can be used in expert systems. Although logic is considered the ideal for reasoning, it is a well-established fact that humans are quite bad at logical reasoning. While pure deduction may be computationally efficient, it is rarely useful as a replication of human reasoning in the situations where humans excel over machines. These situations are characterized by being ill-defined, by using imprecise knowledge (cf. below), and by having no clear-cut criteria to guide the reasoning.
Humans, however, normally manage to find a solution through speculation and hypothesis making, and the challenge is to reproduce this in AI systems. Coombs and Hartley present a specific example of such an approach, called Model Generative Reasoning, and demonstrate its power in a simple but realistic example.
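As a rough illustration of reasoning by hypothesis generation rather than pure deduction, the sketch below enumerates small sets of candidate faults and keeps those that cover all the observations, preferring candidates that predict little beyond what was actually seen. This is a generic abductive sketch over an invented fault/symptom table, not a rendering of Coombs and Hartley's Model Generative Reasoning.

```python
from itertools import combinations

# Assumed causal knowledge: each candidate fault accounts for a set of findings.
# The domain content is invented purely for illustration.
EXPLAINS = {
    "sensor_drift": {"reading_offset"},
    "valve_stuck":  {"low_flow", "high_upstream_pressure"},
    "pump_worn":    {"low_flow", "vibration"},
}

def generate_hypotheses(observations, max_size=2):
    """Enumerate small fault sets and keep those that cover every observation."""
    faults = list(EXPLAINS)
    candidates = []
    for size in range(1, max_size + 1):
        for combo in combinations(faults, size):
            covered = set().union(*(EXPLAINS[f] for f in combo))
            if observations <= covered:
                # Record what the hypothesis predicts but we did not observe.
                candidates.append((combo, covered - observations))
    # Prefer smaller hypotheses that predict little beyond the observations.
    return sorted(candidates, key=lambda c: (len(c[0]), len(c[1])))

if __name__ == "__main__":
    obs = {"low_flow", "vibration", "reading_offset"}
    for hypothesis, surplus in generate_hypotheses(obs):
        print(hypothesis, "unobserved predictions:", surplus)
```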
The fourth and sixth contributions, both by Paolo Garbolino, consider some of the fundamental problems in the area of probabilistic (Bayesian) reasoning. Bayes' rule has, particularly in decision theory, attained a very special status and has in recent years been complemented by the so-called Dempster-Shafer rule. The problem is how one can make inferences from uncertain data, something which will obviously be of use for experts, humans and machines alike. In his first paper, Garbolino discusses the more formal aspects of this, in particular the relation between the rules of Bayes, Shafer, and Jeffrey. The second paper provides a more provocative discussion of the relevance of Bayesian theory for AI. This is done with particular reference to the question of updating knowledge and maintaining coherence. Although both papers are difficult to read, they are well worth the effort, particularly for those AI people who may be driven by the technology and who remain blissfully unaware of the philosophical basis for what they may be doing.
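For readers who want the formal anchors, the standard textbook statements of the rules in question (given here as general background, not quoted from Garbolino's chapters) are the following.

```latex
% Bayes' rule: revising belief in hypothesis H on learning evidence E with certainty.
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

% Jeffrey's rule: revising belief when experience only shifts the probabilities of a
% partition {E_1, ..., E_n} to new values q(E_i), without making any E_i certain.
P_{\mathrm{new}}(H) = \sum_{i=1}^{n} q(E_i)\, P(H \mid E_i)

% Dempster's rule of combination: pooling two basic belief assignments m_1 and m_2
% over subsets of the frame of discernment, with K the mass assigned to conflict.
(m_1 \oplus m_2)(A) = \frac{1}{1 - K} \sum_{B \cap C = A} m_1(B)\, m_2(C),
\qquad K = \sum_{B \cap C = \varnothing} m_1(B)\, m_2(C)
```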
The fifth contribution, by Dubois and Prade, is sandwiched between the two contributions by Paolo Garbolino. This is done quite on purpose, because they represent different views. The difference is particularly clear with respect to the issues of uncertainty and imprecision of knowledge. The Dubois and Prade paper concentrates on a discussion of various methods for knowledge representation and approximate reasoning. They conclude by pointing out the insufficiency of deductive reasoning as the only method of an expert system, and by highlighting the different goals of a logician and an expert system user: whereas the former is interested in valid conclusions, the latter is interested in precise results. This is, hopefully, a polemic distinction. In the world of IDSSs the interest is in results (advice, recommendations, decisions) that are at the same time the outcome of valid reasoning and precise with respect to the problem at hand.
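As a point of reference for that difference in view, the basic possibility-theoretic measures on which Dubois and Prade's approach builds, stated here in their standard form rather than as quoted from the chapter, contrast with probability as follows.

```latex
% A possibility measure Pi is maxitive rather than additive:
\Pi(A \cup B) = \max\bigl(\Pi(A), \Pi(B)\bigr)

% Its dual necessity measure N grades how certain A is:
N(A) = 1 - \Pi(\bar{A})

% Contrast with a probability measure, which is additive over disjoint events:
P(A \cup B) = P(A) + P(B) \quad \text{when } A \cap B = \varnothing
```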
Altogether this last section of the book shows that reasoning, even in a machine, is far from being a simple matter. Many of the fundamental problems have by now been defined, but most of them remain unsolved. If computers are to move fully from the second stage (information processors) to the third stage (knowledge processors), or even to the fourth stage (knowing systems), we must go beyond the simple mechanization of logic that has brought us to where we are now. Reasoning is far more than logic, and knowledge is far more than arranging data in a complex structure. The real world is always incompletely described, and both reasoning and knowledge representation must be able to accommodate that. The contributions in this section describe some of the ways in which this may be done. Additional work is needed to find out whether these steps take us in the right direction, and how much further we have to go.