Coping with computers in the cockpit (1999)

Dekker, S. W. A., & Hollnagel, E. (Eds.). (1999). Coping with computers in the cockpit: Practical problems cloaked as progress. Aldershot, UK: Ashgate.

Introduction


Another book on aviation automation? Well, perhaps this is not a book on aviation automation per se. It is a book, rather, on how an entire industry is coping with aviation automation. Or, more precisely, on how it is coping with the human consequences of the automation that it has fielded over the last two decades. The aviation domain, and the cockpit in particular, has always been seen as being at the forefront of technological and human-machine interface developments. From one angle, progress in the cockpit has been enormous, compared, say, with the technological state of the typical en-route air traffic control centre. But from another angle, such perceived progress has merely been the wrapping around a large number of unanticipated practical problems—problems that are now the inevitable by-product of the automation technology that aeroplanes have adopted. Practical problems masked as progress, in other words.
    The aviation industry knows it. It is struggling to find ways to meaningfully educate and train operators for their completely different work in the automated environment. It is reconsidering whom to select for these jobs and how. And now that current cockpit designs are firmly in place and their problems better accounted for, it is regrouping to begin to regulate and certify cockpit equipment on the basis of human factors criteria. All this while manufacturers are voicing continued concern over the lack of concrete and specific ideas for better feedback design in the next generation of flightdeck automation.
    One result of being ahead of the pack is that you will encounter and, hopefully, solve a host of new problems, thereby generating experience that can be helpful to others. It is therefore quite ironic that many other industries are in fact (re)discovering similar automation-related problems for themselves as they stumble ahead on technology-driven paths. For example, ship bridges are being invaded by heavily moded automation technology: standardised design does not appear to exist, and formal operator training is neither required nor well organised. The result is that ships have begun to show the same pattern of human-machine breakdowns and automation surprises that were discovered in aviation years ago (see for example the grounding of the Royal Majesty, NTSB, 1996). Hence the dire need for this book: not only is it relevant to exchange experiences and swap lessons within one industry (aviation); it is also critical to show industries that are poised to adopt similar systems in their operational environments how one industry has to cope with the consequences of its own automation.


Practical problems galore
To some extent, research efforts and operational experience are beginning to pay off. This book itself is an outgrowth of the increasing realisation that automation is a mixed blessing, and it reflects operational, educational and regulatory countermeasures that were inspired by, for example, the 1996 U.S. Federal Aviation Administration report on human-automation interfaces onboard modern airliners (FAA, 1996). Closer to the ground, many organisations that deal with complex automation acknowledge that changes in technology can be a double-edged sword. One defence procurement agency, for example, says that it must strike a balance between simpler equipment and highly automated equipment: the former imposes greater manpower burdens, but the latter can create excessive demands on operator skills and training. Such lessons learned indicate that old myths about automation (for instance, that it reduces investments in human expertise) are becoming unstuck.
    Nevertheless, almost all sectors of the aviation industry are still struggling in one way or another to adapt to the emerging realities of automation technology—to which this entire book is testimony. The training of pilots from the ab initio (zero-hour) level upward, for instance, has come to the fore as a key issue in relation to automated flight decks (Nash, 1998). Does requisite time in single-engine piston aircraft of light wing loading have anything to do with becoming a jet transport pilot in a world of near-sonic, satellite-guided, computer-managed flight at 35,000 feet (Lehman, 1998)? These questions emerge at a time when European operators and regulators are attempting to harmonise training and licensing standards across the continent, and while North American operators are gradually losing a major source of pilots (the military), with collegiate aviation programmes working hard to fill the gap (NRC, 1997). Questions about preparing pilots for their new supervisory roles do not stop at the ab initio level. The debate about optimal training strategies pervades the airline induction (multi-crew, operational procedures) and type-rating stages as well. A new pilot's first encounter with automation is often delayed until late in his or her training, which means it may coincide with the introduction to multi-crew and jet-transport flying, creating excessive learning demands. Telling pilots later on to be careful and not to fall into certain automation traps (a common ingredient of classroom teaching as well as computer-based training, or CBT) does little to prevent them from falling into those traps anyway. The end result is that much of the real and exploratory learning about automation is pushed into line flying.
    Automation also erodes the traditional distinction between technical and non-technical skills. This tradition assumes that interactions with the machine can be separated from crew co-ordination. But in fact almost every automation-related mishap indicates that the two are fundamentally interrelated: breakdowns occur at the intersection of crew co-ordination and automation operation. Crew resource management training is often thought to be one answer; it is by now mandatory and is regulated to include some attention to automation. But all too often CRM is left as a non-technical afterthought on top of a parcel of technical skills that pilots are already supposed to have. Air carriers are coming to realise that such crew resource management training will never attain relevance or operational leverage.
    Another issue that affects broad sections of the aviation industry is the certification of flight decks (and specifically flight management systems) on the basis of human factors criteria (Harris, 1997; Courteney, 1998). One question is whether we should certify the process (e.g. judging the extent and quality of human factors integration in the design and development process) or the end product. Meanwhile, manufacturers are working to reconcile the growing demand for user-friendly or human-centred technologies with the real and serious constraints that operate on their design processes. For example, they need to design one platform for multiple cultures or operating environments, while at the same time being restricted by economic pressures and other limited resource horizons (see e.g. Schwartz, 1998). Another issue concerns standardisation and the reduction of mode complexity onboard modern flight decks. Not all modes are used by all pilots or carriers, owing to variations in operations and preferences; still, all these modes are available and can contribute to complexity and surprises for operators in certain situations (Woods & Sarter, 1998). One indication of the disarray in this area is that modes which achieve the same purpose have different names on different flight decks (Billings, 1996).
    Air traffic control represents another large area of the aviation industry where new technology and automation are purported to help with a variety of human performance problems and efficiency bottlenecks (e.g. Cooper, 1994). But the development of new air traffic management infrastructures is often based on ill-explored assumptions about human performance. For example, a common thought is that human controllers perform best when left to manage only the exceptional situations that neither computers nor airspace users themselves can handle (RTCA, 1995). This notion directly contradicts earlier findings from supervisory control studies (e.g. the 1976 Hoogovens experience), in which remote operators were pushed into profound dilemmas about when and how to intervene in an ongoing process.


Technology alone cannot solve the problems that technology created
In all of these fields and areas of the aviation system we are easily fooled. Traditional promises of technology continue to sound alluring and seem to offer progress towards yet greater safety and efficiency. Enhanced ground proximity warning systems, for example, will supposedly all but eradicate the controlled-flight-into-terrain accident. We become focused on local technological solutions for system-wide, intricate human-machine problems. It is often very tempting to apply a technological solution that targets only a single contributor to the latest highly complex accident. In fact, it is harder to take co-ordinated directions that offer real progress in human-centred or task-centred automation than to go with the technological: the latest box in the cockpit that can putatively solve, once and for all, the elusive problems of human reliability.
    Many of our endeavours remain fundamentally technology-centred. Ironically, even in dealing with the consequences of the automation we already have, we emphasise pushing the technological frontier. We frame the debate about how to cope with computers in the cockpit in the technical language of the day. For example, can we not introduce more PC-based instrument flight training to increase training effectiveness while reducing costs? Should we put head-up displays on all flight decks to improve pilot awareness during bad-weather approaches? How can we effectively teach crew resource management skills through computer-based training tools? With every technical question asked (and putatively answered), a vast new realm of cognitive issues and problems is both created and left unexplored. The result, the final design, may beleaguer and surprise the end-user, the practitioner. In turn, the practitioners' befuddlement and surprise will be unexpected and puzzling to us. Why did they not like the state-of-the-art technology we offered them? The circle of miscommunication between developer and user is complete.
    One reason for this circle, for this lack of progress, is often seen to lie in the difficulties of technology transfer: the transfer of research findings into usable or applicable ideas for system development and system improvement. This book is one attempt to help bridge that gap. It provides yet another forum that brings together industry and scientific research.


Investing in human expertise and automation development
The book echoes two intertwined themes. The first theme explores how and where we should invest in human expertise in order to cope with computers in the cockpit today and tomorrow. It examines how practitioners can deal with the current generation of automated systems, given that these are likely to stay in cockpits for decades to come. It examines how to prepare practitioners for their fundamentally new work of resource management, supervision, delegation and monitoring. You will find many reports from the front on this theme. For example, various chapters converge on what forms cockpit resource management training could take in an era where flying has become virtually equated with cockpit resource management (managing both human and automated resources to carry out a flight). Other chapters target more specific phases in a pilot's training career, for instance the ab initio phase and the transition training phase. Yet another chapter makes recommendations on how an air carrier can proceduralise the use of automation in terms of how different levels of automation affect crewmember duties, without getting bogged down in details that are too prescriptive or too fleet-specific.
    The second theme explores what investments we must make in the development of automated systems. The industry would like to steer the development of additional cockpit equipment and air traffic management systems in human-centred directions—but how? Current processes of development and manufacturing sometimes seem to fail to check for even the most basic human-computer interaction flaws in, for example, the flight management systems coming off the line today. Two chapters examine whether and how certification and increased regulation could help by setting up standards to certify new or additional cockpit equipment on the basis of human factors criteria. Although these chapters represent the current European state of the art in this respect, much work remains to be done and much more agreement needs to be reached, for example on whether to pursue quantitative measures of human error and human performance in system assessment. Another chapter lays out how we could finally break away from the traditional but unhelpful, dichotomous notion of function allocation in our debates about automation: as automation becomes more and more powerful, the allocation of a priori decomposed functions misses the point. Such increasingly powerful automation also needs to show feedback about its behaviour, not just its state or currently active mode, an issue targeted in a chapter on automation visualisation. Finally, one chapter looks into the problem of extracting empirical data about the future. As technology development goes ahead, in aviation and in many other fields of human endeavour, it becomes ever more important to be able to evaluate the human factors consequences of novel technological solutions before huge resources are committed and the point of no return has been reached. This chapter explains how to generate empirical data relating to human performance in systems that do not yet exist.


Real progress
Because automation has brought along many practical problems under the banner of continued progress, the aviation industry is struggling to cope with the human-machine legacy of two decades of largely technology-driven automation. The lessons learned so far, and the lessons still to be learned, carry information not only for aviation but for a number of industries that are opening their doors to similar problems. Real progress across multiple industries, not the kind that cloaks the sequential introduction of practical problems into different worlds of practice, can only be achieved by acknowledging the similarity of the challenges that we have created for ourselves.