1. Frederick Ferré, for example, defines technology as "the practical implementation of intelligence", Philosophy of Technology (Englewood Cliffs: Prentice-Hall, 1988). Of course, the advent of a new technology also has a way of influencing, and to some extent reshaping, what people want.
2. For an optimistic survey of what has been, and might yet be, accomplished in this area, see Raymond Kurzweil, The Age of Intelligent Machines (Boston: MIT Press, 1990).
4. That many different humans are typically involved can make it more difficult for each to recognize his/her share of the responsibility. For a fuller account of the difficulties in getting people to acknowledge responsibility when an operational computer is the product of many different hands, see Helen Nissenbaum, "Computing and Accountability", Communications of the ACM, January 1994.
5. James Moor has pointed out to me that programmed computers can evolve beyond their original programs, and so whatever decision-making was originally made by humans may not be very germane. It seems to me, however, that if it is humans who programmed the self-reprogramming computer to be capable of this "evolution" (or programmed the computer that programmed the computer ...), then in some sense those humans must still bear the ultimate responsibility for what ensues.
6. See Roger Penrose, The Emperor's New Mind (Oxford: Oxford University Press, 1989) for a case against the possibility of machine consciousness. See the works of Daniel C. Dennett for the opposite case.
7. For a still more recent defense of eventual computer consciousness see Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Viking, 1999), chapter eleven. Kurzweil conjectures that by the year 2029, computers will be so evolved as to "claim to be conscious" and their claims will be "largely accepted."
10. Martyn Thomas, chair of Praxis Systems, which produces special high-reliability software for Britain's Air Force, quoted in "Is America Ready to 'Fly by Wire'?", The Washington Post, April 2, 1989, p. C3.
12. Reprinted in Computers, Ethics and Social Values, edited by Deborah Johnson and Helen Nissenbaum (Englewood Cliffs: Prentice Hall, 1995), p. 434.
14. Peter Neumann, S.R.I. International, a specialist in software engineering who "has documented hundreds of software failure cases in the aerospace and other industries", The Washington Post, loc. cit.
15. Littlewood and Strigini, op. cit., loc. cit., p. 435.
16. Mike Hennell, Department of Statistics and Computational Mathematics at Liverpool University, "an authority on software reliability", as quoted in The Washington Post, April 2, 1989, loc. cit.
17. Littlewood and Strigini, op. cit., loc. cit., p. 433.
18. David L. Parnas, A. John van Schouwen, and Shu Po Kwan, "Evaluation of Safety-Critical Software", in Johnson and Nissenbaum, op. cit., p. 441.
19. Leveson, op. cit., p. 33.
20. Littlewood and Strigini, op. cit., loc. cit., pp. 435-436.
21. Leveson, op. cit., p. 158.
22. Leveson, op. cit., pp. 434, 436.
23. Alan Borning, "Computer System Reliability and Nuclear War", originally in Communications of the ACM 30, no. 2 (February 1987); reprinted in Johnson and Nissenbaum, op. cit., p. 410.
24. A terminological caveat: Nancy Leveson has urged that "reliability" not be equated with "safety." For "software reliability is defined as compliance with the requirements specification, while most safety critical software errors can be traced to errors in the requirements - that is, to misunderstandings about what the software should do" (op. cit., p. 29). Indeed, while "Reliability engineers often assume that reliability and safety are synonymous ... this assumption is true only in special cases ... many accidents occur without any component failure - the individual components were operating exactly as specified or intended, that is, without failure. The opposite is also true - components may fail without a resulting accident" (op. cit., p. 164). These points are well-taken. For purposes of the present discussion, however, the expression "software reliability" is used in a less technical and more colloquial way: thus when we ask about the extent to which we can reasonably rely on software in safety-critical contexts, we are not merely concerned with the extent to which the program can be expected to operate as specified; we are also concerned with the extent to which we can determine whether a program has been (and/or can be) specified in a way that is sufficiently responsive to the circumstances that could endanger human life and limb. On this point, Alan Borning has noted that "There are many examples of errors arising from incorrect or incomplete specification. One such example is a false alert in the early days of the nuclear age ... when on October 5, 1960, the warning system at NORAD indicated that the United States was under massive attack by Soviet missiles with a certainty of 99.9 percent. It turned out that the Ballistic Missile Early Warning System (BMEWS) radar in Thule, Greenland, had spotted the rising moon. Nobody had thought about the moon when specifying how the system should act" (op. cit., loc. cit., p. 410).
25. "Software and Safety" in The Washington Post, April 2, 1989, p. C3.
26. Here I quote from Helen Nissenbaum's "Computing and Accountability", Communications of the ACM, January 1994.
27. Nissenbaum draws on the definitive account of Leveson and Turner, "An Investigation of the Therac-25 Accidents", Computer 26(7), 1993, pp. 18-41.
28. Leveson also analyzes this incident at length in Appendix A ("Medical Devices: The Therac-25 Story") of her book, Safeware: System Safety and Computers, cited previously.
29. Here it might be suggested that we resort to physical "barriers" or limitations: e.g., that we only deploy machines whose hardware design is such that very large doses of radiation will not be administered, no matter how "unreliable" the software putatively "governing" the machine. This wouldn't stop patients from receiving more moderate overdoses accumulating to their detriment over longer periods of time, but it could avert a single catastrophe. The extent to which such strategies can be helpful will depend on the specifics of the physical system at issue. In the case of a jet plane on automatic pilot, for example, there seems to be no comparable "barrier" solution to insure, for example, that the plane not crash into mountainsides or tall buildings.
30. Ian Barbour, op. cit., p. 170.
31. See also Dreyfus and Dreyfus, op. cit.; To Engineer Is Human (New York: Random House, 1992), pp. 194-195.
32. In the well-known O. Henry story, "The Gift of the Magi", two people exchange gifts at great cost to themselves; unfortunately, the cost each has incurred renders the gift received from the other no longer of any material use. If end-results didn't matter at all, the story would lose most of its poignance and pathos; it would just be another laughable version of the "Alphonse and Gaston" routine: two silly men ushering one another through the door with pompous ceremoniousness are nevertheless unable to go anywhere because neither will go until the other goes first.
33. Here I follow my essay, "The Inalienability of Autonomy", Philosophy & Public Affairs 13(4): 271-298, Fall 1984.
34. The problem of boredom is not to be overlooked, however; while problems arising from fully automated systems can be mitigated if humans also have a role to play, the "human-machine interface" must be carefully designed. If human operators have nothing to do but intervene in rare, catastrophic emergency situations, they may become too inactive and inattentive to be able to respond quickly and skillfully if and when the time arrives. This suggests that while not constantly intervening and overriding, the humans who oversee a computer's safety-critical operation must nevertheless have meaningfully active roles (e.g., putting questions to the computer, collecting data on the state of the machine) to keep them adequately informed and sufficiently alert. See, for example, Nancy Leveson, op. cit., Chapter 5, section 2 ("The Need for Humans in Automated Systems") and Chapter 6 ("The Role of Humans in Automated Systems").
35. It has been suggested to me that, given the reliability of the computer, we should suppose not that the plane is heading into the side of the mountain, but that the pilot is hallucinating. No doubt there are cases in which this might be so. And in any event it might be sensible to design the system so that the override option could only be activated with the concurrence of two or more authorized crew members (who would first have to input their respective personal code numbers). But in the presently imagined case, we have only to note several other factors that would make the hypothesis non-credible. Suppose the pilot who takes himself to be seeing the plane heading toward the side of the mountain seeks corroboration - not only from the copilot, but from flight attendants, passengers who have flown planes, et al. - and they all report that they see the plane heading toward the side of the mountain. Of course, the only way for the plane to clear the mountains is for it to be presently cruising above 12,000 feet! Absent information about a special drug administered to everyone aboard that could have this kind of hallucinogenic effect, and given the aforementioned facts about the essential non-debuggability of complex functional programs, it is far more reasonable to suppose that the program is malfunctioning than that all these people are hallucinating. Someone might suggest that the pilot is hallucinating about the existence and reliability of these other witnesses. But so long as we are allowed to indulge in skeptical doubt so radical, the "extreme reliability" of the flight-control software itself should not be spared from the category of the potentially delusional.
36. "Human operators are included in complex systems because, unlike computers, they are adaptable and flexible ... Humans are able to look at tasks as a whole and to adapt both the goals and the methods to achieve them. Thus, humans evolve and develop skills and performance patterns that fit the peculiarities of a system very effectively, and they are able to use problem solving and creativity to cope with unusual and unforeseen situations. For example, the pilot of a Boeing 767 made use of his experience as an amateur glider pilot to land his aircraft safely after a series of equipment failures and maintenance errors caused the plane to run out of fuel while in flight over Canada. Humans can exercise judgment and are unsurpassed in recognizing patterns, making associative leaps, and operating in ill-structured, ambiguous situations." Leveson, op. cit., pp. 100-101, citing the work of W.B. Rouse and N.M. Morris, "Conceptual design of a human error tolerant interface for complex engineering systems", in G. Mancini, G. Johannsen and L. Martensson, eds., Analysis, Design, and Evaluation of Man-Machine Systems (Pergamon Press, New York, 1986), pp. 281-286.
37. Here I follow the line of argument presented in my article, "The Inalienability of Autonomy", loc. cit., where it was directed against abdication in favor of fellow-humans, to make the case against abdication in favor of sophisticated programmed computers as well.
38. Leveson has suggested a number of possible guidelines for safer design of the "HMI", i.e., the "human-machine interface." Among these are: "Design the HMI to augment human abilities, not replace them." "Involve operators in design decisions and safety analysis throughout development." "Design for error tolerance: (a) make errors reversible ... (b) provide time to reverse them, and (c) provide compensating (reversing) actions." "Provide adequate feedback to keep operator in the loop." "Allow the operator to maintain manual involvement and to update mental models, maintain skills, and preserve self-confidence." "Do not permit overrides of potentially safety-critical failures ... until all data has been displayed and perhaps not until the operator has acknowledged seeing it." "Train operators to understand how the system functions and to think flexibly when solving problems." "Train for general strategies (rather than specific responses) to develop skills for dealing with unanticipated events." "Provide practice in problem solving" (op. cit., pp. 485-488).
39. For a recent application of this same idea we have only to turn to an Op-Ed page essay (NY Times, February 1, 1994, p. A17) by Valery Yarynich, "a retired colonel in the Russian Strategic Rocket Forces, who spent his career working on command and control systems." Yarynich was commenting on a previous piece that alleged the existence of a secret computerized launching system that "in theory would enable Russia to fire its nuclear arsenal even if its top commanders had been killed." He was at pains to deny that any such "doomsday machine" had been set up. Instead, Yarynich insisted that the capacity to strike back even after top leaders had been incapacitated did obtain, but in the form of a special crew of human beings, situated deep underground and subject to a three-point system of checks and balances. I believe that Yarynich was rightly uncomfortable with the idea of a computerized weapons system beyond all human reconsideration.
41. Reprinted in Ethical Issues in the Use of Computers, edited by Deborah Johnson and John W. Snapper.
42. P. 121 in Johnson and Snapper. Subsequent citations are to this edition.
45. It might be objected that the conclusions of this paper fail to take seriously the paper's own admonition against the prejudice that "only flesh and blood, carbon-based, organic entities" could be responsible moral agents. See "Clarification: 'Decision-making'; 'Responsibility'", p. 173. I believe that nothing in the foregoing discussion tells against the possibility that computers could evolve to the point of being plausibly regarded as moral agents in their own right, though it must also be acknowledged that they are not there yet. The thrust of the argument against vesting irrevocable authority in computers is based, as I suggest above, on the same reflections that tell against vesting such authority in a fellow human.
46. Here it might be suggested that humans would not be serving the superior machines but merely trusting them. Perhaps so. But the scenario envisioned has those "superior" machines so much wiser and more reliable that they would be revered by humans as the ultimate authority on everything that mattered. Humans would be not merely taking advice but in a sense obeying the "superior" beings. As noted above, this would be much like the relationship some people have to a deity: service, not in the sense of slavery, but in the sense of reverent obedience.