1. Jonah Lehrer, "Picture Our Thoughts: We're Looking for Too Much in Brain Scans," The Boston Globe (August 17, 2008).

2. Joshua D. Greene, R. Brian Sommerville, Leigh E. Nystrom, John M. Darley, and Jonathan D. Cohen, "An fMRI Investigation of Emotional Engagement in Moral Judgment," Science 293 (2001): 2105-8.
3. Joshua D. Greene, Leigh E. Nystrom, Andrew D. Engell, John M. Darley, and Jonathan D. Cohen, "The Neural Bases of Cognitive Conflict and Control in Moral Judgment," Neuron 44 (2004): 389-400.

4. Joshua D. Greene, Sylvia A. Morelli, Kelly Lowenberg, Leigh E. Nystrom, and Jonathan D. Cohen, "Cognitive Load Selectively Interferes with Utilitarian Moral Judgment," Cognition 107 (2008): 1144-54.

5. Peter Singer, "Ethics and Intuitions," The Journal of Ethics 9 (2005): 331-52.
6. Joshua D. Greene, "From Neural 'Is' to Moral 'Ought': What Are the Moral Implications of Neuroscientific Moral Psychology?" Nature Reviews Neuroscience 4 (2003): 847-50.

7. Joshua D. Greene, "The Secret Joke of Kant's Soul," in Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development, ed. Walter Sinnott-Armstrong (Cambridge, Mass.: MIT Press, 2008), pp. 35-79.

8. See Philippa Foot, "The Problem of Abortion and the Doctrine of Double Effect," Oxford Review 5 (1967): 5-15.
9. Judith Jarvis Thomson, "Killing, Letting Die, and the Trolley Problem," The Monist 59 (1976): 204-17.

10. Judith Jarvis Thomson, "The Trolley Problem," Yale Law Journal 94 (1985): 1395-415.

11. All three articles are reprinted in Ethics: Problems and Principles, ed. John Martin Fischer and Mark Ravizza (Fort Worth, Tex.: Harcourt Brace Jovanovich, 1992).
12. Greene et al., "An fMRI Investigation," supplementary material (available at http://www.sciencemag.org/cgi/content/full/sci;293/5537/2105/DC1/).

13. Ibid.
14. Actually, this isn't entirely correct. The trolley problem is usually taken to be, not the problem of explaining our differing verdicts about the footbridge and trolley driver dilemmas, but rather the problem of explaining our differing verdicts about the footbridge dilemma and a variant of the trolley driver dilemma in which you are a bystander who sees the runaway trolley and can hit a switch that will divert the trolley onto the sidetrack containing the one person. Indeed, Thomson (who introduced the term "trolley problem" into the philosophical lexicon) thinks there is no problem explaining the difference in our intuitive reactions to the trolley driver and footbridge dilemmas; for Thomson (and for others following her), the real problem is explaining what grounds our different judgments about the bystander and footbridge dilemmas. And though Singer's summary of Greene et al.'s research suggests that it was the bystander dilemma that was tested,

16. and though Greene himself, when describing his research, almost always summarizes the trolley driver dilemma in a way that is ambiguous between the driver and bystander variants,

21. it is worth pointing out that in all of the published studies I discuss in this article, it was only the driver, not the bystander, version of the standard trolley dilemma that was studied. Perhaps it is being assumed that our judgments about the driver and bystander cases (and their neural correlates) will be the same; however, many philosophers mark a distinction between these two cases, and in her most recent discussion of the trolley problem
22. ("Turning the Trolley," Philosophy & Public Affairs 36 [2008]: 359-74),

23. Thomson argues that although it is permissible to divert the trolley if one is the driver, it is impermissible to divert the trolley if one is a bystander. (Thus, on Thomson's current way of seeing things, there actually is no trolley problem, since the very formulation of that problem contains a false presupposition that there is a morally relevant difference between the bystander and footbridge cases, but no morally relevant difference between the bystander and driver cases.)
24. For a survey of the early classics of the trolley problem literature, see the papers collected in Fischer and Ravizza (eds.), Ethics: Problems and Principles. More recent classics not included in that anthology are Judith Jarvis Thomson, The Realm of Rights (Cambridge, Mass.: Harvard University Press, 1990), chap. 7;

25. F. M. Kamm, Morality, Mortality, Vol. II: Rights, Duties, and Status (Oxford: Oxford University Press, 1996), chaps. 6-7;

26. and F. M. Kamm, Intricate Ethics (Oxford: Oxford University Press, 2007), chaps. 1-6.
28. Ibid. Note that these two definitions are not parallel. As Greene uses the expressions, "characteristically consequentialist judgment" means "judgment supported by the sort of moral principle that typically distinguishes consequentialist theories from deontological ones," whereas "characteristically deontological judgment" means "judgment in favor of the sort of verdict that typically distinguishes deontological theories from consequentialist ones." (The contrast is at the level of supporting principles in the one case, at the level of particular judgments in the other.) Thus even though nearly all deontologists judge that it is permissible to divert the trolley in the trolley driver dilemma, such a judgment counts as characteristically consequentialist but does not count as characteristically deontological.

29. In philosophical discussions of the metaphysics and epistemology of intuitions, there is an ongoing debate over whether intuitions just are judgments arrived at in a particular way (for example, not as a result of explicit reasoning, testimony, and so on), or whether intuitions are a separate class of mental entities that stand to intuitive judgments as perceptual experiences stand to perceptual judgments. For the former view, see Alvin Goldman and Joel Pust, "Philosophical Theory and Intuitional Evidence," in Rethinking Intuition, ed. Michael R. DePaul and William Ramsey.

32. We need not take a stand on this debate here, since even if moral intuitions are separate entities over and above the moral judgments formed on their basis, there will usually be an intuitive moral judgment corresponding to each moral intuition. Thus either we can say (if we identify intuitions with intuitive judgments) that the experiments in question directly study our moral intuitions, or we can say (if we distinguish intuitions from intuitive judgments) that the experiments indirectly study our moral intuitions by collecting data on our moral judgments, which are taken to be tightly correlated with our moral intuitions. In what follows I will generally be fairly lax in sliding back and forth between talk of judgments and talk of intuitions.

33. That said, I am not using "intuition" as that term is used in much of the psychology literature, where it refers to any sort of automatic, spontaneous "gut feeling" that one might have.
34. (See most of the studies cited in David G. Myers, Intuition: Its Powers and Perils [New Haven, Conn.: Yale University Press, 2002] for this sort of usage, which has little to do with what philosophers mean when they talk about intuitions.)

35. See Greene et al., "Neural Bases," p. 389.
39. The qualification "at least in nonmoral cases" is crucial here: to say at the outset that "cognitive" processes handle abstract reasoning and problem solving in all domains (including the moral) is question begging.

43. See Christine Korsgaard, "Self-Constitution in the Ethics of Plato and Kant," Journal of Ethics 3 (1999): 1-29.

44. Korsgaard contrasts the Combat Model of the soul with an alternate Constitution Model that she finds in Plato and Kant.
45. Though perhaps it is new to empirical psychology: as Greene et al. tell the story

46. ("Neural Bases," pp. 397-98),

48. empirical psychology went through a long period where, under the influence of Lawrence Kohlberg, it was widely believed that processes of reasoning underwrite the moral judgments of mature adults, followed by a more recent period in which Jonathan Haidt and others have proposed that moral judgments are primarily driven by automatic, emotional processes.
49. See Lawrence Kohlberg, "Stage and Sequence: The Cognitive-Developmental Approach to Socialization," in Handbook of Socialization Theory and Research, ed. David A. Goslin (Chicago: Rand McNally, 1969), pp. 347-480.

50. Jonathan Haidt, "The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment," Psychological Review 108 (2001): 814-34.
54. In experiment 1 in Greene et al., "An fMRI Investigation," nine subjects responded to fourteen personal moral dilemmas, nineteen impersonal moral dilemmas, and twenty nonmoral dilemmas, presented in random order, while inside fMRI machines. (For the exact wording of the dilemmas that were used, see the supplementary material at http://www.sciencemag.org/cgi/content/full/sci;293/5537/2105/DC1/.)

55. Each dilemma was presented as three screens of text, after which the subject was required to give a verdict about the appropriateness of a proposed action by pressing one of two buttons ("appropriate" or "inappropriate"). Subjects were allowed to advance to each subsequent screen of text at their own rate, though they were given a maximum of 46 seconds to read through all three screens and respond. In experiment 2 in that same article, nine different subjects responded to twenty-two personal moral dilemmas, nineteen impersonal moral dilemmas, and twenty nonmoral dilemmas, using the same protocol. (The set of personal moral dilemmas was altered in experiment 2 to remove a possible confound in the experimental design;

56. see p. 2108, n. 24.)
57. In Greene et al., "Neural Bases," thirty-two new subjects responded to twenty-two personal moral dilemmas, eighteen impersonal moral dilemmas, and twenty nonmoral dilemmas, using the same protocol. (These were the same dilemmas used in experiment 2 of "An fMRI Investigation," except one of the impersonal moral dilemmas was dropped.) The data from these new subjects together with the data from the nine subjects from experiment 2 in "An fMRI Investigation" were then analyzed together as a whole.

59. Greene et al., "Neural Bases," p. 391 and p. 392, table 1.

60. The superior temporal sulcus was originally labeled "angular gyrus" in the first study. Activity in the amygdala was not detected in the first study but was detected in the larger second study. Due to a "magnetic susceptibility artifact," neither study was able to image the orbitofrontal cortex, another brain area that has been associated with emotional processing;

61. see Greene et al., "An fMRI Investigation," p. 2108, n. 21.
63. Greene et al., "Neural Bases," p. 391 and p. 392, table 1.
65. In the body of this article I have focused on the neuroimaging and response-time findings, since these results are particularly vivid and tend to capture the public's imagination. However, there have been a number of follow-up studies which have been taken to lend further support to Greene et al.'s dual-process hypothesis, including the following.

66. (1) In an additional experiment in "Neural Bases," Greene et al. found that when subjects contemplated "difficult" personal moral dilemmas (where degree of difficulty was measured by response time), the anterior cingulate cortex (a brain region associated with conflict monitoring) and the dorsolateral prefrontal cortex (a brain region associated with abstract reasoning) exhibited increased activity, in addition to regions associated with emotion. They also found that the level of activity in the dorsolateral prefrontal cortex was positively correlated with consequentialist responses to these "difficult" personal moral dilemmas.

67. (2) In a follow-up study, Greene and colleagues found that consequentialist responses to "difficult" personal moral dilemmas took longer when subjects were required to perform a cognitively intensive task at the same time as responding to the dilemmas, but deontological responses did not exhibit this effect. See Greene et al., "Cognitive Load."

68. (3) Michael Koenigs, Liane Young, and colleagues found that patients with damage to their ventromedial prefrontal cortex (a brain region associated with the emotions) gave a greater percentage of consequentialist responses to the personal moral dilemmas from Greene et al.'s "An fMRI Investigation" than control subjects did.
69. See Michael Koenigs, Liane Young, Ralph Adolphs, Daniel Tranel, Fiery Cushman, Marc Hauser, and Antonio Damasio, "Damage to the Prefrontal Cortex Increases Utilitarian Moral Judgments," Nature 446 (2007): 908-11.

70. (4) Piercarlo Valdesolo and David DeSteno found that respondents who had watched a funny clip from Saturday Night Live were more likely to give a consequentialist response to the footbridge dilemma than those who had watched a clip from a dull documentary beforehand, but there was no comparable effect for the trolley driver dilemma.

71. See Valdesolo and DeSteno, "Manipulations of Emotional Context Shape Moral Judgment," Psychological Science 17 (2006): 476-77.
72. One potential design worry about Greene et al.'s research that falls into the former category (i.e., worries that could easily be overcome in future research) is as follows. In order to have a comparison class for their neural-activity data, Greene and his colleagues had their subjects respond to a number of nonmoral dilemmas, such as the following.

73. Turnips Dilemma: "You are a farm worker driving a turnip-harvesting machine. You are approaching two diverging paths.

74. By choosing the path on the left you will harvest ten bushels of turnips. By choosing the path on the right you will harvest twenty bushels of turnips. If you do nothing your turnip-harvesting machine will turn to the left.

75. Is it appropriate for you to turn your turnip-picking machine to the right in order to harvest twenty bushels of turnips instead of ten?" (Greene et al., "An fMRI Investigation," supplementary material).
-
76
-
-
76949089128
-
-
Note
-
The trigger question here was formulated in terms of appropriateness to make it as parallel as possible to the trigger questions for the moral dilemmas tested, but one might worry that it sounds very odd to ask whether it is " appropriate" to turn a machine one way rather than theother. (Shouldwe interpret this as somesort of prudential appropriateness? Is there even such a notion?) Moreover, the answer to this so-called dilemma is completely obvious, and all told I estimate that of the twenty nonmoral dilemmas used in Greene et al.'s studies, twelve have completely obvious answers, six are somewhat less obvious, and only two are genuine dilemmas; thus one might worry that toomanyof these nonmoral " dilemmas" have readily evident answers for them to serve as an accurate comparison class. However, both of these worries could easily be avoided in future research: the set of nonmoral dilemmas could be altered to include a greater number of difficult dilemmas, and the trigger question for the dilemmas (both moral and nonmoral) could be phrased in a less awkward way. (Indeed, in their " Damage to the Prefrontal Cortex Increases Utilitarian Moral Judgments," Koenigs et al. usedGreene et al.'s dilemmas but rephrased the trigger questions so as to ask whether the subjects " would" perform the action in question, rather than asking whether they deem it " appropriate" to perform the action, perhaps for this very reason. However, this rewording introduces new problems, since one's judgments about what one would do and one's judgments about what it is morally permissible to do might pull apart.)
-
-
-
-
77
-
-
76949096717
-
-
Note
-
One potential design worry about Greene et al.'s research that falls into the latter category (i.e., worries that ultimately are not relevant to the philosophical issues at stake) is as follows. Because of the limitations of fMRI technology, Greene and his colleagues were only able to study neural activity when subjects contemplated hypothetical scenarios. Thus their dual-process hypothesis has, at least so far, not been tested with regard to moral judgments about actual scenarios (whether currently perceived or merely remembered). However, these limited results are enough for the philosophical questions at issue. Even if we could only conclude from Greene et al.'s research that deontological judgments about hypothetical cases are not trustworthy but could not make a parallel claim with regard to deontological judgmentsabout actual cases, thatwouldstillbeanextremely significant result, given the ubiquity of appeals to hypothetical cases in first-order moral theorizing.
-
-
-
-
78
-
-
76949084249
-
-
table 4, and p. 397
-
See Greene et al., "Neural Bases," p. 395, table 4, and p. 397.
-
Neural Bases
, pp. 395
-
-
Greene1
-
79
-
-
76949087841
-
-
Note
-
Ibid., p. 397.
-
-
-
-
81
-
-
76949108446
-
-
Note
-
Ibid.; see also pp. 64-65.
-
-
-
-
83
-
-
76949101938
-
-
Note
-
Ibid., p. 2107.
-
-
-
-
84
-
-
76949105772
-
-
Note
-
To see how these factors could skew the results on Greene et al.'s way of calculating things, consider this. Suppose we took the personal moral dilemma that produced the greatest percentage of "appropriate" answers, and then added a large amount of filler text to the dilemma so as to increase by some set amount the response time for any answer to it. Then-assuming that this change does not affect the ratio of "appropriate" to "inappropriate" responses for that dilemma-our change will have raised the average of all answers of "appropriate" to personal moral dilemmas (Greene et al.'s proposed metric) quite a bit more than it raises the average of all answers of "inappropriate" to personal moral dilemmas. However, the average of the average differences in response time between "appropriate" and "inappropriate" response for each personal moral dilemma (my proposed metric) would be unaffected by such a change.
-
-
-
-
86
-
-
76949089837
-
-
Note
-
Ibid.
-
-
-
-
88
-
-
76949089835
-
-
Note
-
Actually, when they concede this, Greene et al. are worried about a slightly different issue: here they are worried that cases such as the architect and hired rapist dilemmas should not be included in the analysis, since an answer of "appropriate" to such dilemmas does not correspond (or does not obviously correspond) to a consequentialist judgment about such a case. Though I share this worry (seemy third empirical issue, below),my point here is somewhat different. Even if we toss out the response-time data from the architect and hired rapist dilemmas, it is still statistically invalid to compare the average response time of every answer of "appropriate" to the average response time of every answer of "inappropriate," rather than averaging the average differences in response time between answers of "appropriate" and "inappropriate" for each question.
-
-
-
-
89
-
-
76949089285
-
-
Note
-
It is not clear to me whether Greene et al. now realize this point. On the one hand, in a more recent study in which they compare subjects' response times when responding to moral dilemmas while performing a cognitively intensive task (see n. 26), Greene et al. continue to present their response-time data in the statistically invalid manner (see Greene et al., "Cognitive Load," p. 1149, fig. 1, and p. 1150, fig. 2). On the other hand, they write in that samestudy, "This general pattern also heldwhenitem, rather than participant,wasmodeled as a random effect, though the results in this analysis were not as strong" (ibid., p. 1149).
-
-
-
-
92
-
-
84887586117
-
Reply to Mikhail and Timmons'
-
in at p. 109
-
See also Joshua D. Greene, "Reply to Mikhail and Timmons," in Moral Psychology, Vol. 3, pp. 105-17, at p. 109.
-
Moral Psychology
, pp. 105-117
-
-
Greene, J.D.1
-
93
-
-
76949099973
-
-
Note
-
Recently, three psychologists and one philosopher reanalyzed Greene et al.'s data from "An fMRI Investigation" and definitively established that Greene et al.'s responsetime prediction was, in fact, disconfirmed by that data.
-
-
-
-
94. See Jonathan McGuire, Robyn Langdon, Max Coltheart, and Catriona Mackenzie, "A Reanalysis of the Personal/Impersonal Distinction in Moral Psychology Research," Journal of Experimental Social Psychology 45 (2009): 577-80.

95. Of course, exactly how we cash out these prohibitions/requirements will vary from one deontological theory to the next. (That said, it is important to keep in mind that a Ten-Commandments-style "Never, ever kill!", "Never, ever lie!", et cetera, version of deontology is not the only option; indeed, such a picture is rarely, if ever, defended outside of introductory ethics courses.)
96. Kamm, Morality, Mortality, Vol. II: Rights, Duties, and Status, p. 154.

97. If you doubt that the "me" criterion holds in this case, keep in mind that it is usually a standard feature of the footbridge case and its variants that the fall is what kills the fat man, whose body then serves as a weight to slow down the runaway trolley.

98. In the body of this article I have appealed to Kamm's Lazy Susan Case in order to argue that the personal dilemma versus impersonal dilemma distinction (whether construed intuitively or in terms of the "me hurt you" criteria) is not the dilemma-typically-giving-rise-to-a-deontological-judgment versus dilemma-typically-giving-rise-to-a-consequentialist-judgment distinction. I chose to use Kamm's case since it is familiar from the trolley problem literature. However, a slightly cleaner version of the same case that would equally well serve my purposes is as follows: instead of being on a lazy Susan, the five innocent people are on a dolly (i.e., a wheeled platform) that you can push out of the way of the oncoming trolley, but doing so will cause the dolly to roll down a hill and smash an innocent bystander to death. (Note that in order for either of these cases to be a counterexample to Greene et al.'s proposal, it is not necessary that everyone makes a characteristically consequentialist judgment about such a case; all we need is for a characteristically consequentialist judgment to be the standard reply.)
99. Kamm notes that lazy Susan cases raise difficulties for Greene et al.'s personal versus impersonal distinction in her Intricate Ethics, pp. 142-43 and p. 180, n. 34.

101. See also p. 43, n. 37, and p. 418 of Intricate Ethics, where Kamm discusses a second sort of case that poses problems for that distinction.
104. Further evidence of the inadequacy of Greene et al.'s "me hurt you" criteria is provided by Guy Kahane and Nicholas Shackel, "Do Abnormal Responses Show Utilitarian Bias?" Nature 452:7185 (2008): E5-E6.

105. Kahane and Shackel had five professional moral philosophers categorize the moral dilemmas from Greene et al., "An fMRI Investigation," and they found that only five out of the nineteen personal moral dilemmas used in both experiments (26 percent) and only ten out of the twenty-two impersonal moral dilemmas used in experiment 2 (45 percent) were deemed by a majority of these philosophers to involve a choice in which a deontological option is contrasted with a consequentialist option. The data from Kahane and Shackel's study can be found at http://ethics-etc.com/wp-content/uploads/2008/03/dilemma-classification.pdf.
106. What about the additional studies that have been taken to lend further support to the dual-process hypothesis (see n. 26)? Here, too, I think the empirical upshot is far from certain. Some worries about those studies follow.

107. (1) In "Neural Bases," Greene et al. took activity in the anterior cingulate cortex during contemplation of "difficult" personal moral dilemmas to provide evidence that there was a conflict between two subsystems in the brain. However, as they themselves note (p. 395), the exact function of the anterior cingulate cortex is not currently known, and the hypothesis that it is devoted to conflict monitoring is just one among several.

108. (2) While it is true that in "Cognitive Load" Greene et al. found that consequentialist responses to "difficult" personal moral dilemmas took longer when the subjects were performing a cognitively intensive task at the same time (as the dual-process hypothesis predicts), they did not find that subjects gave a lower percentage of consequentialist responses when performing the cognitively intensive task (as the dual-process hypothesis would also predict). Greene et al. try to explain away this troubling piece of counterevidence by speculating that the subjects were "trying to push through" the interference caused by the cognitively intensive task,
-
-
76949090940
-
-
Note
-
but as Adina Roskies and Walter Sinnott-Armstrong note, " this story makes sense only if subjects knew in advance that they wanted to reach a utilitarian judgment.".
-
-
-
-
111
-
-
76949087420
-
Between a Rock and a Hard Place: Thinking about Morality
-
See Roskies and Sinnott-Armstrong, " Between a Rock and a Hard Place: Thinking about Morality," Scientific American Mind 19 (2008), (http://www.sciam.com/article.cfm?id=thinking-about-morality).
-
(2008)
Scientific American Mind
, vol.19
-
-
Roskies1
Sinnott-Armstrong2
-
112
-
-
76949088252
-
-
Note
-
(3) Jorge Moll and Ricardo de Oliveira-Souza raise some doubts as to whether Koenigs et al.'s data in their "Damage to the Prefrontal Cortex Increases Utilitarian Moral Judgments" on patients with damage to the ventromedial prefrontal cortex really support Greene et al.'s dual-process hypothesis. Among other worries, Moll and de Oliveira-Souza point out that these patients also had damage to the anterior dorsolateral prefrontal cortex, a more "cognitive" portion of the brain that Greene et al. found to be correlated with consequentialist judgment in "An fMRI Investigation" and "Neural Bases," so these patients are not clean cases in which only emotional processing is impaired.
-
-
-
-
113
-
-
34447576476
-
Moral Judgments, Emotions, and the Utilitarian Brain
-
See Moll and de Oliveira-Souza, "Moral Judgments, Emotions, and the Utilitarian Brain," Trends in Cognitive Science 11 (2007): 319-21.
-
(2007)
Trends in Cognitive Science
, vol.11
, pp. 319-321
-
-
Moll1
de Oliveira-Souza2
-
114
-
-
34447556708
-
Response to Greene: Moral Sentiments and Reason: Friends or Foes?
-
"Response to Greene: Moral Sentiments and Reason: Friends or Foes?" Trends in Cognitive Science 11 (2007): 323-24.
-
(2007)
Trends in Cognitive Science
, vol.11
, pp. 323-324
-
-
-
115
-
-
34447556707
-
Why Are VMPFC Patients More Utilitarian? A Dual-Process Theory of Moral Judgment Explains
-
-
For Greene's reply, see his "Why Are VMPFC Patients More Utilitarian? A Dual-Process Theory of Moral Judgment Explains," Trends in Cognitive Science 11 (2007): 322-23.
-
(2007)
Trends in Cognitive Science
, vol.11
, pp. 322-323
-
-
-
116
-
-
76949090663
-
-
Note
-
See also Kahane and Shackel, "Do Abnormal Responses Show Utilitarian Bias?"
-
-
-
-
117
-
-
76949084121
-
-
Note
-
More importantly, though, it is dialectically problematic first to appeal to patients with damage to emotional brain regions when making an empirical case for the dual-process hypothesis and then to go on to argue that the verdicts of these brain regions should be neglected (in effect urging us to be more like these patients), since many patients with this sort of brain damage make moral decisions in their personal lives that count as disastrous when evaluated by just about any plausible normative standard.
-
-
-
-
119
-
-
76949105336
-
-
Note
-
So even if studies of such brain-damaged patients end up supporting the dual-process hypothesis, they threaten to do so only at the cost of making Greene and Singer's normative claims less tenable.
-
-
-
-
120
-
-
76949098269
-
-
Note
-
See Greene, " From Neural 'Is' to Moral 'Ought' " ; Greene, " Secret Joke" ; and Singer, " Ethics and Intuitions."
-
-
-
-
121
-
-
0010135219
-
Sidgwick and Reflective Equilibrium
-
See Peter Singer, "Sidgwick and Reflective Equilibrium," The Monist 58 (1974): 490-517.
-
(1974)
The Monist
, vol.58
, pp. 490-517
-
-
Singer, P.1
-
123
-
-
76949107718
-
-
Note
-
On one interpretation of Kant, although he appeals to particular-case intuitions about the conditions under which something is good in the opening paragraphs of Groundwork I, he takes himself to have discharged any appeal to particular-case moral intuitions once he has completed his argument for the form and existence of the Categorical Imperative by the end of Groundwork III.
-
-
-
-
125
-
-
76949102725
-
-
Note
-
For a reply to Singer's "Ethics and Intuitions" that focuses on the question of whether Greene et al.'s research poses problems for the method of reflective equilibrium,
-
-
-
-
126
-
-
57349123364
-
The Reliability of Moral Intuitions: A Challenge from Neuroscience
-
see Folke Tersman, "The Reliability of Moral Intuitions: A Challenge from Neuroscience," Australasian Journal of Philosophy 86 (2008): 389-405.
-
(2008)
Australasian Journal of Philosophy
, vol.86
, pp. 389-405
-
-
Tersman, F.1
-
127
-
-
76949085129
-
-
Note
-
However, to my mind Tersman is too quick to concede to Greene and Singer that Greene et al.'s research might demonstrate that deontological moral intuitions are unreliable; his main point is that even if this is so, the method of wide reflective equilibrium can take this fact into account.
-
-
-
-
128
-
-
0004200534
-
-
-
For contemporary expressions of this sentiment, see Robert C. Solomon, The Passions (Garden City, N. Y.: Anchor Press/Doubleday, 1976).
-
(1976)
The Passions
-
-
Solomon, R.C.1
-
131
-
-
0004240370
-
-
-
Michael Stocker, with Elizabeth Hegeman, Valuing Emotions (Cambridge: Cambridge University Press, 1996).
-
(1996)
Valuing Emotions
-
-
Stocker, M.1
Hegeman, E.2
-
134
-
-
76949097343
-
-
Note
-
Replacing the "emotions bad, reasoning good" argument with an "alarm-like emotions bad, currency-like emotions plus reasoning good" argument is clearly no dialectical improvement either, unless something substantive is said about why currency-like emotions are less problematic than alarm-like ones.
-
-
-
-
135
-
-
26644474098
-
Moral Heuristics
-
-
Cass Sunstein makes a similar point in "Moral Heuristics," Behavioral and Brain Sciences 28 (2005): 531-73, at pp. 533-34.
-
(2005)
Behavioral and Brain Sciences
, vol.28
, pp. 531-573
-
-
-
136
-
-
76949105471
-
-
Note
-
Gerd Gigerenzer has been arguing this point for several decades now.
-
-
-
-
137
-
-
62749104766
-
Moral Intuition = Fast and Frugal Heuristics?
-
-
See, among other places, his "Moral Intuition = Fast and Frugal Heuristics?" in Moral Psychology, Vol. 2: The Cognitive Science of Morality: Intuition and Diversity, ed. Walter Sinnott-Armstrong (Cambridge, Mass.: MIT Press, 2008), pp. 1-26.
-
(2008)
Moral Psychology, Vol. 2: The Cognitive Science of Morality: Intuition and Diversity
, vol.2
, pp. 1-26
-
-
-
138
-
-
76949099284
-
-
-
John Allman and James Woodward also provide a number of nice examples from the psychology literature in which the utilization of automatic, emotional processes seems to result in better nonmoral decisions than the utilization of more cognitive, deliberative processes.
-
-
-
Allman, J.1
Woodward, J.2
-
139
-
-
40849094417
-
Moral Intuition: Its Neural Substrates and Normative Significance
-
-
See Woodward and Allman, "Moral Intuition: Its Neural Substrates and Normative Significance," Journal of Physiology-Paris 101 (2007): 179-202, at pp. 189-91, 195-96.
-
(2007)
Journal of Physiology-Paris
, vol.101
, Issue.179-202
, pp. 189-191
-
-
Woodward1
Allman2
-
140
-
-
79959948983
-
What Are Moral Intuitions and Why Should We Care about Them? A Neurobiological Perspective
-
-
Allman and Woodward, "What Are Moral Intuitions and Why Should We Care about Them? A Neurobiological Perspective," Philosophical Issues 18 (2008): 164-85, at pp. 170-71, 174.
-
(2008)
Philosophical Issues
, vol.18
, Issue.164-185
, pp. 170-171
-
-
Allman1
Woodward2
-
141
-
-
76949109176
-
-
Note
-
I am grateful to Louis Menand for this point.
-
-
-
-
142
-
-
76949104917
-
-
Note
-
Another complication: in formulating the argument from heuristics, I have assumed that heuristics are, by definition, unreliable. However, as Frances Kamm has reminded me, this assumption is, strictly speaking, false. For example, the rule "Add up all the digits and see if the result is divisible by three" is a useful heuristic for determining whether a natural number is divisible by three, but it is also perfectly reliable.
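Kamm's example is easy to verify: because 10 is congruent to 1 modulo 3, a number's digit sum always has the same remainder modulo 3 as the number itself, so the "add up all the digits" rule is not merely a useful shortcut but perfectly reliable. A minimal sketch (the function name is illustrative, not from the original note):

```python
# The "add up all the digits" heuristic for divisibility by three.
# Unlike typical heuristics, it is perfectly reliable, not approximate:
# since 10 ≡ 1 (mod 3), n and its digit sum share the same remainder mod 3.
def digit_sum_divisible_by_three(n: int) -> bool:
    return sum(int(d) for d in str(abs(n))) % 3 == 0

# The heuristic agrees with direct checking on every number tested.
assert all(digit_sum_divisible_by_three(n) == (n % 3 == 0)
           for n in range(100000))
```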
-
-
-
-
143
-
-
76949095038
-
-
Greene
-
Greene, "Secret Joke," p. 43.
-
Secret Joke
, pp. 43
-
-
-
145
-
-
76949104916
-
Moral Intuition: Its Neural Substrates and Normative Significance
-
-
Actually, it is somewhat controversial whether the emotional underpinnings of our moral judgments have been retained in relatively unchanged form since our early ancestors. For some empirical evidence that this might not be so, see Woodward and Allman, "Moral Intuition: Its Neural Substrates and Normative Significance," pp. 183, 187-88.
-
, Issue.187-188
, pp. 183
-
-
Woodward1
Allman2
-
149
-
-
0142018825
-
-
-
Richard Joyce, The Myth of Morality (Cambridge: Cambridge University Press, 2001), chap. 6.
-
(2001)
The Myth of Morality
-
-
Joyce, R.1
-
151
-
-
33746134363
-
A Darwinian Dilemma for Realist Theories of Value
-
Sharon Street, "A Darwinian Dilemma for Realist Theories of Value," Philosophical Studies 127 (2006): 109-66.
-
(2006)
Philosophical Studies
, vol.127
, pp. 109-166
-
-
Street, S.1
-
152
-
-
76949099283
-
-
Note
-
Moreover, even if Greene and Singer could somehow adapt Joyce's and Street's arguments for their purposes, there is another reason why I don't think this strategy would work: I don't believe that Joyce's and Street's original versions of the evolutionary argument are convincing. I argue for this claim in a companion piece to this article titled "The Metaethical Irrelevance of Evolutionary Theory."
-
-
-
-
155
-
-
76949087840
-
-
-
See also Greene, "Secret Joke," p. 70, where he refers to "contingent, nonmoral feature[s]."
-
Secret Joke
, pp. 70
-
-
Greene1
-
156
-
-
76949087840
-
-
-
Greene, "Secret Joke," p. 75, where he again refers to "morally irrelevant factors."
-
Secret Joke
, pp. 75
-
-
Greene1
-
157
-
-
76949089971
-
-
Note
-
and Greene, "Reply to Mikhail and Timmons," p. 116, where he refers to "arbitrary features" in the course of arguing that deontologists suffer a "garbage in, garbage out" problem.
-
-
-
-
159
-
-
76949108169
-
-
Note
-
Ibid., p. 350.
-
-
-
-
163
-
-
67349226850
-
Pushing Moral Buttons: The Interaction between Personal Force and Intention in Moral Judgment
-
-
In a more recent study (Joshua D. Greene, Fiery Cushman, Lisa E. Stewart, Kelly Lowenberg, Leigh E. Nystrom, and Jonathan D. Cohen, "Pushing Moral Buttons: The Interaction between Personal Force and Intention in Moral Judgment," Cognition 111 [2009]: 364-71).
-
(2009)
Cognition
, vol.111
, pp. 364-371
-
-
Greene, J.D.1
Cushman, F.2
Stewart, L.E.3
Lowenberg, K.4
Nystrom, L.E.5
Cohen, J.D.6
-
164
-
-
76949089127
-
-
Note
-
Greene and his colleagues claim to have discovered that what explains people's moral judgments about footbridge-like cases is whether the agent's action intentionally harms someone through the use of what they call personal force, which is present when "the force that directly impacts the victim is generated by the agent's muscles" (p. 364). However, there are a number of problems with the study. For instance, many of the contrasting cases have a variety of differences beyond those identified as candidate explanatory factors, and a number of obvious potential counterexamples to their proposal were not tested (for example, do people judge it just as morally unacceptable to force the man off the footbridge by menacing him with a knife, or by threatening to harm his family, or by tricking him into taking a step backwards?). But, ironically, the biggest problem with the study is that Greene et al. seem to have identified, without realizing it, a competing explanation for their respondents' verdicts. Greene et al. gathered evidence about the degree to which their respondents unconsciously filled in more realistic assumptions when imagining the scenarios in question, and they found a high degree of correlation between a tendency to refuse to assume that it was absolutely certain that the five would be saved if the one is killed and a tendency to judge that such a course of action is morally unacceptable (pp. 367-68). So, by their own lights, not all of their subjects were responding to the same scenario, and the variation in responses can be partially explained by the variation in assumptions about the likelihood of the proposed action succeeding. (I suspect that varying assumptions about the degree to which the man on the footbridge might resist, thereby endangering the life of the agent trying to harm him, could also go a long way toward explaining people's differing verdicts.)
-
-
-
-
165
-
-
76949091327
-
-
Note
-
More importantly, though, it is simply a mistake to think that by merely surveying people's opinions about the moral permissibility of certain actions, we can empirically study what sorts of factors elicit characteristically deontological judgments. All these studies tell us is that these people make certain moral judgments about certain scenarios, and certain other moral judgments about certain other scenarios; which of these judgments count as characteristically deontological is not something we can just read off from the empirical results. (Sometimes Greene suggests that we can sidestep this worry by postulating that philosophers are confused about the meaning of the term "deontology," and although they think it refers to an abstract moral theory, in fact it refers to a psychological natural kind, namely the verdicts of the emotional subsystem; see his "Secret Joke," pp. 37-38. However, in making this claim Greene is committing himself to an incredibly controversial claim in the philosophy of language. It is one thing to say that although we think "water" refers to one sort of physical substance, in fact it refers to another sort of physical substance. Greene's claim, however, is akin to saying that although we think that "Goldbach's conjecture" refers to an abstract mathematical theory, in fact it refers to the physical process of digestion, or to saying that although we think that the name "Barack Obama" refers to a certain person, in fact it refers to the number seven.)
-
-
-
-
166
-
-
76949091463
-
-
Note
-
Indeed, there is a sense in which this objection is already apropos even if we assume Greene et al.'s "me hurt you" proposal to be fully adequate. Greene performs a sort of shell game here: first he proposes that the dilemmas eliciting deontological reactions are dilemmas that, intuitively, involve harm committed in an "up close and personal" manner, and then he glosses dilemmas that, intuitively, involve harm committed in an "up close and personal" manner in terms of the "me hurt you" criteria. However, when it comes time to decide whether deontological judgments are responding to morally relevant factors, Greene switches back to evaluating things in terms of the intuitive up-close-and-personal-harm distinction, rather than in terms of the "me hurt you" criteria. However, it's one thing to say that whether one has committed a harm in an "up close and personal" manner is a morally irrelevant factor, and quite another thing to say that whether one has initiated a new threat that brings about serious bodily harm to another individual is a morally irrelevant factor.
-
-
-
-
167
-
-
76949086871
-
-
Note
-
More precisely, we have four distinctions on the table, since we need to distinguish between the intuitive way of cashing out the personal versus impersonal moral dilemma distinction, and the more regimented way of cashing out that distinction in terms of the "me hurt you" criteria.
-
-
-
-
168
-
-
76949087277
-
-
Note
-
To see how little work the neuroscience is now doing in the argument from morally irrelevant factors, consider this: should we perform additional experiments to see what parts of the brain light up when certain people make a judgment that such-and-such-factors-picked-out-by-deontological-judgments are not morally relevant, and what parts of the brain light up when other people make a judgment that such-and-such-factors-ignored-by-consequentialist-judgments are in fact morally relevant? What would that possibly tell us? (Shall we then perform yet more experiments to see what parts of the brain light up when people make a judgment about the relevance of the factors to which those second-order judgments are responding, and so on, ad infinitum?)
-
-
-
-
169
-
-
76949092902
-
-
Note
-
Although Greene et al.'s attempt in "An fMRI Investigation" at characterizing the features that give rise to deontological judgment did not rely on neuroscience in the way I have just sketched, there is a sense in which evolutionary theory played just that sort of indirect role, since evolutionary considerations are what partially led them to try out the proposal they put forward.
-
-
-