Graham Allison, ‘How It Went Down’, Time, 7 May 2012.
8. Howard Raiffa, Decision Analysis: Introductory Lectures on Choices under Uncertainty (Reading, MA: Addison-Wesley 1968) p.10: ‘[Decision theory does not] present a descriptive theory of actual behavior. Neither [does it] present a positive theory of behavior for a superintelligent, fictitious being; nowhere in our analysis shall we refer to the behavior of an “idealized, rational, and economic man”, a man who always acts in a perfectly consistent manner as if somehow there were embedded in his nature a coherent set of evaluation patterns that cover any and all eventualities. Rather the approach we take prescribes how an individual who is faced with a problem of choice under uncertainty should go about choosing a course of action that is consistent with his personal basic judgments and preferences. He must consciously police the consistency of his subjective inputs and calculate their implications for action. Such an approach is designed to help us reason and act a bit more systematically - when we choose to do so!’
9. Structured analytic techniques can also mitigate biases. Some relevant biases are intentional, such as the way that some analysts are said to hedge their estimates in order to avoid criticism for mistaken predictions. More generally, individuals encounter a wide range of cognitive constraints when assessing probability and risk. See, for instance, Paul Slovic, The Perception of Risk (London: Earthscan 2000).
12. Debates about estimative probability and intelligence analysis are long-standing; the seminal article is by Sherman Kent, ‘Words of Estimative Probability’, Studies in Intelligence 8/4 (1964) pp.49-65.
13. Yet those debates remain unresolved, and they comprise a small slice of current literature. Miron Varouhakis, ‘What is Being Published in Intelligence? A Study of Two Scholarly Journals’, International Journal of Intelligence and CounterIntelligence 26/1 (2013) p.183.
14. Varouhakis shows that fewer than 7 per cent of published studies in two prominent intelligence journals focus on analysis. In a recent survey of intelligence scholars, the theoretical foundations for analysis placed among the most under-researched topics in the field: Loch K. Johnson and Allison M. Shelton, ‘Thoughts on the State of Intelligence Studies: A Survey Report’, Intelligence and National Security 28/1 (2013) p.112.
15. See Mark Phythian, ‘Policing Uncertainty: Intelligence, Security and Risk’, Intelligence and National Security 27/2 (2012) pp.187-205, for a similar distinction. Ignorance is another important concept, denoting situations where it is not even possible to define all possible answers to an estimative question. This situation often arises in intelligence analysis, but it is beyond the scope of this article.
16. Sometimes subjective probability is called ‘personal probability’ to emphasize that it captures an individual’s beliefs about the world rather than objective frequencies determined via controlled experiments. As Frank Lad describes, the subjectivist approach to probability ‘represents your assessment of your own personal uncertain knowledge about any event that interests you. There is no condition that events be repeatable… In the proper syntax of the subjectivist formulation, you might well ask me and I might well ask you, “What is your probability for a specified event?” It is proposed that there is a distinct (and generally different) correct answer to this question for each person who responds to it. We are each sanctioned to look within ourselves to find our own answer. Your answer can be evaluated as correct or incorrect only in terms of whether or not you answer honestly’. Frank Lad, Operational Subjective Statistical Methods (NY: Wiley 1996) pp.8-9.
18. To control for time preferences, one would ideally make it so that the potential resolutions of these gambles and their payoffs occurred at the same time. Thus, if the gamble involved the odds of regime change in a foreign country by the end of the year, the experimenter would draw from the urn at year’s end or when the regime change occurred, at which point any payoff would be made.
19. Adam Meirowitz and Joshua A. Tucker, ‘Learning from Terrorism Prediction Markets’, Perspectives on Politics 2/2 (2004) pp.331-6.
21. The separate question of how decision makers should determine when to act versus waiting to gather additional information is a central topic in the section that follows.
22. The discussion below applies broadly to debating any question that has a yes-or-no answer. Matters get more complicated with broader questions that admit many possibilities, for example ‘Where is bin Laden currently living?’ These situations can be addressed by decomposing the issue into binary components, such as ‘Is bin Laden living in location A?’, ‘Is bin Laden living in location B?’, and so on.
23. By extension, this arrangement covers situations where the best estimate could lie anywhere between 40 and 80 per cent, and you do not believe that any options within this range are more likely than others, since the contents of the urn could be combined to construct any intermediate mix of colors.
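The combination argument in note 23 is simple arithmetic: drawing from a 40-per-cent urn with weight w and from an 80-per-cent urn with weight 1 - w yields any intermediate probability. A minimal sketch (illustrative numbers only):

```python
def mixed_chance(w, p_low=0.40, p_high=0.80):
    """Probability of success when the low-probability urn is chosen
    with weight w and the high-probability urn with weight 1 - w."""
    return w * p_low + (1 - w) * p_high

# An even mixture lands exactly midway between the two urns.
print(round(mixed_chance(0.5), 3))  # 0.6
```

Varying w from 1 to 0 sweeps the mixture continuously from 0.4 to 0.8, which is why the range itself carries no information about where within it the best estimate lies.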
24. Ellsberg’s seminal example involved deciding whether to choose from an urn with exactly 50 red marbles and 50 black marbles to win a prize versus an urn where the mix of colors was unknown. Subjects will often pay non-trivial amounts to select from the urn with known risk, which makes no rational sense. If people are willing to pay more in order to gamble on drawing a red marble from the urn with a 50/50 distribution, this implies that they believe there is less than a 50 per cent chance of drawing a red marble from the ambiguous urn. But by the same logic, they will also be willing to pay more in order to gamble on drawing a black marble from the 50/50 urn, which implies they believe there is less than a 50 per cent chance of drawing a black marble from the ambiguous urn. These two statements cannot be true simultaneously. This is known as the ‘Ellsberg paradox’. See Daniel Ellsberg, ‘Risk, Ambiguity, and the Savage Axioms’, Quarterly Journal of Economics 75/4 (1961) pp.643-69.
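The inconsistency in note 24 can be made explicit with a risk-neutral pricing calculation. The prize amount and the prices below are hypothetical, chosen only to illustrate the paradox:

```python
# A risk-neutral subject's price for a gamble equals
# (implied probability of winning) x prize.
prize = 100.0
price_ambiguous_red = 40.0    # hypothetical: pays less to bet on red from the ambiguous urn
price_ambiguous_black = 40.0  # ...and also less to bet on black from the same urn

p_red = price_ambiguous_red / prize      # implied P(red) = 0.4
p_black = price_ambiguous_black / prize  # implied P(black) = 0.4

# Red and black exhaust the urn, so these beliefs should sum to 1.
print(p_red + p_black)  # 0.8 - an incoherent set of beliefs
```

Any pair of prices that are both below the fair price for the 50/50 urn produces implied probabilities summing to less than one, which is the contradiction Ellsberg identified.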
26. W. Kip Viscusi and Richard J. Zeckhauser, ‘The Less Than Rational Regulation of Ambiguous Risks’, University of Chicago Law School Conference, 26 April 2013. These contrasting examples show how risk preferences depend on a decision maker’s views of whether it would be worse to make an error of commission or omission; in decision theory more generally, a key concept is balancing the risks of Type I and Type II errors. It is especially important to disentangle these questions when it comes to intelligence analysis, a field in which one of the cardinal rules is that it is the analyst’s role to provide information that facilitates decision making, but not to interfere with making the decision itself. Massaging probabilistic estimates in light of potential policy responses inherently blurs this line.
27. Throughout this discussion, the information available to one analyst is assumed to be available to all; hence analysts are basing their estimates on the same evidence. The importance of this point is discussed below.
29. Paul Lehner et al., ‘Using Inferred Probabilities to Measure the Accuracy of Imprecise Forecasts’ (MITRE 2012) Case #12-4439, show that intelligence estimates predicting that an event will occur with 50 per cent odds tend to be especially inaccurate.
30. In this case, some people believed the analyst fell prey to confirmation bias and thus interpreted the evidence over-optimistically. In other cases, of course, analysts involved with intelligence collection may deserve extra credibility given their knowledge of the relevant material.
31. This is not to deny that there may be instances where the views of ‘Red Teams’ or ‘devil’s advocates’ will turn out to be more accurate than the consensus opinion (and more generally, that these tools are useful for helping to challenge and refine other estimates). It is to say that, on balance, one should expect an analysis to be less credible if it is biased, and Red Teams are explicitly tasked to slant their views.
34. It is important to note that prior assumptions do not always lead analysts towards the right conclusions. Robert Jervis argues, for instance, that flawed assessments of Iraq’s potential weapons of mass destruction programs were driven by the assumed plausibility that Saddam Hussein would pursue such capabilities. The point of assessing priors is thus not to reify them, but rather to make their role in the analysis explicit (and, where possible, to submit such assumptions to structured critique). See Robert Jervis, ‘Reports, Politics, and Intelligence Failures: The Case of Iraq’, Journal of Strategic Studies 29/1 (2006) pp.3-52.
35. For example, imagine that two analysts are assigned to assess the likelihood that a certain state will attack its neighbor by the end of the year. These analysts share the prior assumption that the odds of this happening are relatively low (say, about 5 per cent). They independently encounter different pieces of information suggesting that these chances are higher than they originally anticipated. Analyst A learns that the country has been secretly importing massive quantities of armaments, and Analyst B learns that the country has been conducting large, unannounced training exercises for its air force. Based on this information, our analysts respectively estimate a 30 per cent (A) and a 40 per cent (B) chance of war breaking out by the end of the year. In this instance, it would be problematic to think that the odds of war are somewhere between 30 and 40 per cent, because if the analysts had been exposed to and properly incorporated each others’ information, both their respective estimates would presumably have been higher. On such processes, see Richard Zeckhauser, ‘Combining Overlapping Information’, Journal of the American Statistical Association 66/333 (1971) pp.91-2.
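The claim in note 35 that the combined estimate should exceed both individual estimates can be checked with a naive Bayesian pooling calculation. This sketch assumes the analysts share the stated 5 per cent prior and that their evidence is conditionally independent (Zeckhauser's article treats the harder general problem of overlapping information):

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def combine(prior, posteriors):
    """Pool analysts' posteriors that share a common prior and rest on
    conditionally independent evidence: multiply each analyst's implied
    likelihood ratio onto the prior odds, then convert back."""
    o = odds(prior)
    for p in posteriors:
        o *= odds(p) / odds(prior)  # likelihood ratio implied by this analyst's update
    return o / (1 + o)

combined = combine(0.05, [0.30, 0.40])
print(round(combined, 3))  # 0.844 - far above either individual estimate
```

Because both analysts revised sharply upward from the same low prior, their evidence compounds: the pooled estimate is roughly 84 per cent, not somewhere between 30 and 40 per cent.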
36. Scholars have shown this to be the case in several fields including medicine, law, and climate change. See, for instance, Elke U. Weber et al., ‘Determinants of Diagnostic Hypothesis Generation: Effects of Information, Base Rates, and Experience’, Journal of Experimental Psychology: Learning, Memory, and Cognition 19/5 (1993) pp.1151-64.
37. Jonas Jacobson et al., ‘Predicting Civil Jury Verdicts: How Attorneys Use (and Misuse) a Second Opinion’, Journal of Empirical Legal Studies 8/1 (2011) pp.99-119.
38. This tendency follows from the availability heuristic, one of the most thoroughly documented biases in behavioral decision research, which shows that people inflate the probability of events that are easier to bring to mind; see Amos Tversky and Daniel Kahneman, ‘Availability: A Heuristic for Judging Frequency and Probability’, Cognitive Psychology 5 (1973) pp.207-32. In the context of intelligence analysis, this implies that analysts may conflate the predictive value of a piece of information with how easily they are able to interpret it.
39. An analyst’s past record of making successful predictions may also inform the credibility of her estimates. Recent research demonstrates that some people are systematically better than others at political forecasting. However, contemporary systems for evaluating analyst performance are relatively underdeveloped, especially since analysts tend not to specify probabilistic estimates in a manner that can be rated objectively. Moreover, when decision makers consider the past performance of their advisors, they may be tempted to extrapolate from a handful of experiences that offer little basis for judging analytic skill (or from personal qualities that are not relevant for estimating likelihood). Determining how to draw sound inferences about an analyst’s credibility from their past performance is thus an area where further research can have practical benefit. In the interim, we focus on evaluating the logic of each analyst’s assessment per se.
41. Lyle Ungar et al., ‘The Good Judgment Project: A Large Scale Test of Different Methods of Combining Expert Predictions’, AAAI Technical Report FS-12-06 (2012).
42. Some relevant techniques include: asking individuals to rate the credibility of their own predictions and using these ratings as weights; asking individuals to debate among themselves whose estimates seem most credible and thereby determine appropriate weighting by consensus; and designating a member of the team to assign weights to each estimate after evaluating the reasoning that analysts present. For a review, see Detlof von Winterfeldt and Ward Edwards, Decision Analysis and Behavioral Research (NY: Cambridge University Press 1986) pp.133-6.
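The first technique mentioned in note 42, using credibility ratings as weights, amounts to a linear opinion pool. A minimal sketch, with hypothetical estimates and weights:

```python
def weighted_estimate(estimates, weights):
    """Linear opinion pool: credibility-weighted average of probabilities."""
    total = sum(weights)
    return sum(p * w for p, w in zip(estimates, weights)) / total

# Hypothetical: three analysts at 30%, 40%, and 60%, where the second
# analyst is judged twice as credible as the others.
print(round(weighted_estimate([0.30, 0.40, 0.60], [1, 2, 1]), 3))  # 0.425
```

The pooled value always lies between the lowest and highest individual estimates, which is appropriate when the analysts are interpreting the same body of evidence rather than independent information.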
43. Office of the Director of National Intelligence [ODNI], US National Intelligence: An Overview (Washington, DC: ODNI 2011) p.60.
44. For example, Bergen prints a quote from Director of National Intelligence James Clapper referring to estimates of the likelihood that bin Laden was living in Abbottabad as ‘percentage[s] of confidence’ (Manhunt, p.197). On the dangers of conflating likelihood and confidence in intelligence analysis more generally, see Jeffrey A. Friedman and Richard Zeckhauser, ‘Assessing Uncertainty in Intelligence’, Intelligence and National Security 27/6 (2012) pp.835-41.
46. To repeat, these statements are equivalent only from the standpoint of how decision makers should choose among options right now. In reality, decision makers must weigh immediate action against the potential costs and benefits of delaying to gather new information. In this respect, the estimates ‘between 0 and 100 per cent’ and ‘50 per cent’ may indeed have different interpretations, as the former literally relays no information (and thus a decision maker might be inclined to search for additional intelligence) while an estimate of ‘50 per cent’ could represent a careful weighing of evidence that is unlikely to shift much moving forward. This distinction shows why it is important to assess both likelihood and confidence, and why estimates of confidence should be tied directly to questions about whether decision makers should find it worthwhile to gather additional information. This is the subject of discussion below.
47. This is one of the central subjects of decision theory. See Winkler, Introduction to Bayesian Inference and Decision, ch.6, and Raiffa, Decision Analysis, ch.7.
48. This holds constant the idea that the situation on the ground might change within the next month: bin Laden might have learned that he was being watched and might have fled, for instance, and that was something which the president reportedly worried about. This, however, is a matter of how much the state of the world might change, which is different from thinking about how assessments of the current situation might respond to additional information.
49. For broader discussions of this point, see Willis C. Armstrong et al., ‘The Hazards of Single-Outcome Forecasting’, Studies in Intelligence 28/3 (1984) pp.57-70.
51. Ronald Howard argues that this is perhaps the fundamental takeaway from decision theory: ‘I tell my students that if they learn nothing else about decision analysis from their studies, this distinction will have been worth the price of admission. A good outcome is a future state of the world that we prize relative to other possibilities. A good decision is an action we take that is logically consistent with the alternatives we perceive, the information we have, and the preferences we feel. In an uncertain world, good decisions can lead to bad outcomes, and vice versa. If you listen carefully to ordinary speech, you will see that this distinction is usually not observed. If a bad outcome follows an action, people say that they made a bad decision. Making the distinction allows us to separate action from consequence and hence improve the quality of action.’ Ronald Howard, ‘Decision Analysis: Practice and Promise’, Management Science 34/6 (1988) p.682.