This post was written by Jonathan Haidt, Jesse Graham, and Pete Ditto. It is the first in a new series of essays by researchers working with Moral Foundations Theory (MFT), in response to recent critiques. These responses are intended to further conversations and debates in moral psychology, and ultimately to improve MFT.
A) The Critiquing Article: Schein, C., & Gray, K. (2015). The Unifying Moral Dyad: Liberals and Conservatives Share the Same Harm-Based Moral Template. Personality and Social Psychology Bulletin, 41(8), 1147-1163. doi: 10.1177/0146167215591501
B) Links: To article at journal site, including abstract. To ungated version.
C) Crux of the critique: Liberals and conservatives don’t actually differ as MFT says; they all have a purely harm-based morality, and make all moral judgments through the same template-matching process. “Inside the moral minds of both liberals and conservatives beats the heart of harm” (p. 15).
D) Crux of our response: Schein and Gray (from here on referred to as S&G) present a version of MFT as a theory about five distinct processing modules, which it is not. They contrast this straw-man theory with a strong and substantive version of Dyadic Morality, which says that all moral cognition is about agents causing suffering to patients. But Dyadic Morality shape-shifts into a weaker form in the 7 studies. It becomes no more than the claim that harm is “central,” operationalized as merely being the most important of various moral concerns, or the one most associated with immorality (a noncontroversial claim we agree with). So the studies said to pit MFT against Dyadic Morality fail to test the claims actually made by these theories. Further, S&G claim that liberals and conservatives do not differ in their moral cognition; they predict (and find) few effects of political ideology on moral judgments. But rather than reporting correlations between their continuous measure of ideology and moral judgments, they create an unjustifiable dichotomous split, lumping all non-liberals together (including moderates) and calling them conservatives. Their failure to find differences between the two groups is not informative.
E) What we think they got right:
1) We agree with S&G’s main argument that “Harm is central in moral cognition in both liberals and conservatives” (p. 1). We agree with them that most of moral cognition involves thinking about people doing things to other people, so we find value in the idea of a dyadic template: the idea that people can easily and automatically think in terms of an “agent” who does something to a “patient.” We find much merit in Gray’s work on “dyadic completion” (Gray, Schein, & Ward, 2014); sometimes people seem to want to find a victim, and they seem to seek closure by fabricating victims post hoc. But does the existence of a harm-based template mean that there are no other templates (or cognitive foundations, or intuitions) that might add additional content to moral judgments, such as considerations of fairness, loyalty, authority, or sanctity? Does this template always and only involve harm? Or are there other kinds of social relationships — other ways that people might be thought to be interacting, or other moral goods (and evils) that are contributing to moral judgment? That is the big question.
If by “central” S&G mean that harm is the most important, the strongest, the factor or issue that will be most powerful in predicting judgments (as in the 7 studies reported here), then we agree; we expect harmfulness to be the best single predictor of moral judgments. This might be true everywhere, but we are confident that it will at least be true in WEIRD cultures. Haidt, Koller, & Dias (1993) opened their report with this question:
Harm, broadly construed to include psychological harm, injustice, and violations of rights, may be important in the morality of all cultures. But is a harm-based morality sufficient to describe the moral domain for all cultures, or do some cultures have a nonharm-based morality, in which actions with no harmful consequences may be moral violations?
They found that college-educated populations in the USA and Brazil had a morality that was indeed “harm-based.” These participants rarely said an action was wrong unless they could point to a victim. Low-SES groups, particularly in Brazil, also cared a great deal about harm, but they were more likely (than college-educated participants) to condemn actions that they themselves said were harmless. So for any study using educated American participants, such as the seven reported by S&G, we would expect appraisals of harm vs. harmlessness to be the most powerful single factor in moral judgments.
More broadly, if any moral foundation or moral issue is “central” or most common in WEIRD cultures, it is the Care/harm foundation, as was found in a large experience sampling study (Hofmann et al., 2014). This weak version of the harm hypothesis (that harm is “central”) is fully compatible with MFT, particularly if the definition of harm is expanded to encompass all the various ways that one person can do something bad to another person.
2) S&G are correct to criticize our claim that “conservatives endorse all five foundations more or less equally.” Our claim is based on our findings using the MFQ, and to be precise, it is only true for those who self-describe as “very conservative.” Those who self-describe as more moderate conservatives endorse Care and Fairness more than Loyalty, Authority, and Purity.
Their larger point is also correct: our statements about MFQ scores have sometimes given the impression that in the daily lives of conservatives, issues of loyalty, authority, and sanctity should be just as common, or just as “central,” as issues of harm. This is clearly not true. Daily life for everyone in WEIRD societies (and perhaps in non-WEIRD societies) involves many judgments about people who have harmed others; issues of loyalty, respect for authority, and sanctity arise much less frequently. S&G’s data in study 1 confirm the finding of Hofmann et al. (2014) that moral judgments involving harm are most common. Harm is “central,” followed by fairness, for conservatives as well as for liberals. We thank S&G for this critique, and we will make it clear in future writings that we don’t believe that all foundations are equally important or common or central, for any group within WEIRD societies. We will also refrain from talking about a “two-foundation vs. five-foundation morality.” Such ways of speaking suggest a categorical distinction between liberals and conservatives that is not there — the differences in all of our reports have been relative differences. We agree with S&G that liberals and conservatives are not categorically different in their moral cognition, a point made in previous MFT papers (e.g., “Importantly, the differences between liberals and conservatives were neither binary nor absolute,” Graham, Haidt, & Nosek, 2009, p. 1033). Conservatives construct their moral matrices with more reliance on the loyalty, authority, and sanctity foundations than do liberals, but that does not mean that liberals never make use of these foundations.
F) What we think they got wrong:
THEORETICAL PROBLEMS
The biggest problem with S&G’s article is that they present a straw-man version of MFT based on strong modularity (in which there are 5 innate and distinct Fodorean modules doing the processing), and then pit the “strong” version of MFT they inaccurately impute to us against a shape-shifting version of Dyadic Morality. It’s a strong and interesting theory in the introduction, but Dyadic Morality morphs into a “weak” version in the experiments — a version in which harm is defined very broadly and is just “central” to moral judgment rather than its very “essence” (Gray, Young, & Waytz, 2012). We agree with them that strong modularity is wrong and weak harm-centrality is right (at least in WEIRD cultures). But then S&G draw the unjustifiable and never-tested conclusion that a strong version of Dyadic Morality (saying that moral cognition is ALL harm, so there is no need for other foundations) has defeated or disproven MFT in its actual form. This simply does not follow, as we show below.
1) Straw man description of MFT modularity.
S&G try to present MFT as a theory about Fodorean modules: there are five distinct, domain-specific, fully encapsulated processing systems. (“Fodorean” refers to the strict criteria for modularity laid out in Fodor, 1983.) S&G write:
“MFT suggests that harm, fairness, in-group, authority, and purity each represent a distinct functional moral mechanism or cognitive module (Haidt, 2012). MFT defines cognitive modules as “little switches in the brains of all animals” that are “triggered” by specific moral “inputs” (Haidt, 2012, p. 123). These modules are suggested to be ultimately distinct from each other, involving fundamentally “distinct cognitive computations” (Young & Saxe, 2011, p. 203), such that violations of one content area (e.g., harm) are processed differently from those of another (e.g., purity)” (p. 2, emphases added).
Yet this portrait of hard and discrete Fodorean modules bears little resemblance to MFT. From the beginning, our commitment was to nativism (innate moral knowledge), not to modularity. MFT grew out of the observation that morality has some uncanny similarities across cultures, such as the nature of reciprocity, which fits closely with Trivers’ (1971) description of the evolution of reciprocal altruism, or purity and pollution practices, which bear an obvious relationship to the psychology of disgust and contagion. We were attracted to versions of modularity developed by the cognitive anthropologists Dan Sperber and Lawrence Hirschfeld, which offered ways to integrate nativism with the obvious facts of cultural diversity. In our major statement on modularity (Haidt & Joseph, 2007) we described “Sperberian modules” like this:
Most of Sperber’s modules are not innate; they are generated during development by a smaller set of “learning modules” which are innate templates or “learning instincts” (Sperber, 2005, p.57, citing Marler, 1991). Some of these innate modules have specific perceptual content built in; for example, a fruit-learning module will “know” that fruit is sweet, and will only generate subsequent fruit-recognition sub-modules (e.g., one for apples, one for bananas) for objects in the environment that meet those pre-specified criteria. Other learning modules may be more purely conceptual; for example, if there is an innate learning module for fairness, it generates a host of culture-specific unfairness-detection modules, such as a “cutting-in-line detector” in cultures where people queue up, but not in cultures where they don’t; an “unequal division of food” detector in cultures where children expect to get exactly equal portions as their siblings, but not in cultures where portions are given out by age. Because Sperber envisions a core set of innate modules generating a great diversity of other modules, he uses the evocative term “teeming modularity.” (p. 397)
In other words, we have never said that moral judgment was carried out by five distinct modules. We said that moral development begins from some innate knowledge (about care, fairness, etc.), which makes it easy to learn some things and hard to learn others. This cultural learning can be described as the generation of many new modules, influenced by one’s culture. The moral foundations are the foundations of development; they are not five spots in the brain, nor are they five Fodorean modules, nor is there any requirement that the adult mind contain five “distinct” or “discrete” modules (or even sets of modules) with no overlap. All cultures develop local moral concepts that draw on (the innate knowledge represented by) multiple foundations, for example honor, diversity, and human rights. Real-time moral cognition is complex and culturally contingent; it can’t be explained by five modules, let alone by a single (harm-based) one.
We have always made it clear that modularity itself is not essential to MFT. Here is an early statement of this point:
We have long been searching for the foundations of intuitive ethics—the psychological primitives that are the building blocks from which cultures create moralities that are unique yet constrained in their variations. … Each of these five is a good candidate for a Sperber-style learning module. However, readers who do not like modularity theories can think of each one as an evolutionary preparedness (Seligman, 1971) to link certain patterns of social appraisal to specific emotional and motivational reactions. All we insist upon is that the moral mind is partially structured in advance of experience so that five (or more) classes of social concerns are likely to become moralized during development (Haidt & Joseph, 2008, p. 381, emphasis added).
Here is how we discuss modularity in our major description of MFT:
But you do not have to embrace modularity, or any particular view of the brain, to embrace MFT. You only need to accept that there is a first draft of the moral mind, organized in advance of experience by the adaptive pressures of our unique evolutionary history. (Graham et al., 2013, p. 63)
It is true that Haidt (2012, p. 144 in the paperback) referred to modules as “little switches,” but the actual quote is that “modules are LIKE little switches in the brains of all animals,” and the quote is describing how readers can think about modules in general; it was not stating that moral foundations ARE little switches in the brain. Haidt was trying to convey the idea of modularity to a general audience, without getting into the complexity of explaining the various types of modules in a “massive modularity” developmental theory such as that of Sperber and Hirschfeld.
The bottom line is that in the academic papers where we discuss modularity, MFT bears little resemblance to the straw man presented by S&G.
2) Shapeshifting between strong and weak versions of the harm hypothesis
In contrast to their very strict standards for moral modularity, S&G have a permissive set of standards for Dyadic Morality. They say that their central claim is this: “we test six predictions of dyadic morality, which can be summarized as follows: Harm is central in moral cognition for both liberals and conservatives” (p. 1). Does “central” mean “most important” (weak version, tested by the studies)? Or does “central” mean necessary and sufficient, obviating the need for any innate capacity to process or learn about unfairness, disloyalty, disobedience, or degradation (strong version, discussed in the introduction and discussion)? This strong version has been advanced by Gray in previous papers, e.g., “A dyadic template suggests not only that perceived suffering is tied to immorality, but that all morality is understood through the lens of harm” (Gray, Young, & Waytz, 2012, p. 108). And this strong version of the harm hypothesis is the primary claim of dyadic morality in S&G’s paper as well (except in the studies).
The term “harm” also comes in strong and weak versions. The strong version is stated clearly on page 2: “More technically, harm involves the perception of two interacting minds, one mind (an agent) intentionally causing suffering to another mind (a patient)—what we call the moral dyad.” If S&G were to stick to this strong definition, which specifies suffering, then their definition of harm would match fairly closely to our description of the Care/harm foundation, but with the useful addition of the agent and patient. We could then debate whether there is any need for other moral foundations.
However, S&G go on to present a theory of “harm pluralism” which says that just about any way that one person can do something to another, which the other would object to, counts as harm. So failing to return a favor, saying something bad about your country, disrespecting an elder, or giving in to one’s “animal” urges are all said to be processed as variants of the harm-based dyadic template. When harm is diluted so much that it means any kind of badness, then Dyadic Morality becomes little more than the claim that moral judgments are intrinsically about dyads. And since the patient can now be anything — a group, a flag, a mountain, or a nation — the theory gets diluted even further to become little more than the claim that moral judgments are about social relationships. This weak version of DM should then be compared to Fiske and Rai’s Relational Models Theory, which also says that morality is about social relationships (Fiske, 1992; Rai & Fiske, 2011). Do S&G really believe that when people are engaged in an equality matching relationship, versus an authority ranking relationship, all we need to understand is the different ways in which people perceive their partner to be harming them?
The bottom line is that S&G present seven studies testing the weak version of the harm hypothesis — in which harm, expansively defined, is “central” to moral judgment — against a straw-man version of MFT featuring five fully encapsulated, biologically based moral processing modules. From these tests, which have many methodological flaws (as described below), they claim that the strong version of the harm hypothesis has been vindicated. It has not.
3) Mischaracterization of moral pluralism, and of Richard Shweder
S&G offer a new concept of “harm pluralism,” which they say is compatible with the writings of “eminent anthropologist Richard Shweder” (p. 15). They argue that dyadic morality allows for “universalism without the uniformity,” a phrase taken from Shweder (2012). We were surprised by this claim, for we know Shweder’s work quite well. MFT is based in part on Shweder’s theory of the “big three” ethics of moral discourse (Shweder, Much, Mahapatra, & Park, 1997).
S&G’s attempt to apply “universalism without the uniformity” is to argue that once you understand that people in different cultures have different beliefs about harm, you can see that morality is universally about harm. Cultural differences (including political differences) are shallow: simply a matter of differing beliefs about facts (e.g., does gay marriage actually hurt anyone, including the institution of marriage?).
But this is not what Shweder meant by “universalism without the uniformity.” S&G’s position is in fact the very position that Elliot Turiel and his colleagues took in the 1980s, when they argued (contra Shweder) that once you take account of the “informational assumptions” held by Brahmins in Bhubaneswar, India, then the cultural differences between Bhubaneswar and Chicago would vanish (Turiel, Killen & Helwig, 1987).
The differences don’t in fact vanish (Haidt, Koller, & Dias, 1993). But more importantly, S&G’s use of the term “pluralism” is radically incompatible with Shweder’s use. In the very essay that Schein and Gray cite, Shweder (2012) explains what he means by “universalism without the uniformity.” It is based on the moral pluralism of Isaiah Berlin (2001), who believed that human beings pursue many different moral goods, yet the list of goods is finite, and we can understand people who choose different goods than we do. As Shweder explains:
“the imagined moral truths or goods asserted in deliberative moral judgments around the world are many, not one…. On a worldwide scale the argument-ending terminal goods of deliberative moral judgments privileged in this or that cultural community are rich and diverse, and include such moral ends as autonomy, justice, harm avoidance, loyalty, benevolence, piety, duty, respect, gratitude, sympathy, chastity, purity, sanctity, and others” (Shweder, 2012, p. 98).
Shweder and Berlin both reject the pursuit of parsimony for its own sake. They embrace the messy multiplicity of moral life. Dyadic Morality in its strong version (immorality = harm = infliction of suffering) is parsimonious, but it cannot distinguish disobedience from unfairness. Dyadic Morality in its strong form turns Occam’s razor into Occam’s chain saw, cutting down every tree except one. Dyadic Morality in its weak form (immorality = harm = anything people object to in social behavior) sacrifices parsimony, and even then, it can’t tell us how disobedience differs from unfairness, beyond saying that they are different ways in which people harm each other. S&G’s new offering of “harm pluralism” is not really pluralism (in Shweder’s terms). It is a Procrustean monism, straining to shoehorn all moral violations into the single template of agent-causing-suffering-to-patient. No real explanatory work is done by such monism.
Pluralism, on the other hand, improves explanation. S&G claim (p. 5) that MFT has limited predictive utility because foundations overlap and harm is the best predictor. This is an odd use of “predictive utility.” Contrast it with the real predictive utility shown in papers like Koleva et al. (2012), which found that sanctity scores greatly improved prediction of culture-war attitudes, over and above Care/harm scores and ideology; Waytz, Dungan, and Young (2013), which predicted whistleblowing with loyalty and fairness; and Rottman et al. (2014), which predicted judgments of suicide with sanctity over and above harm. None of these findings would be possible if researchers embraced Dyadic Morality and only measured beliefs about harm. Later (p. 13) the authors chide MFT for not asking about taxation, gun control, euthanasia, capital punishment, and environmentalism, but this is exactly what we have done, in Koleva et al. (2012), and what many others have done in their research demonstrating the utility of using multiple foundations to understand and to change political attitudes (e.g., Feinberg & Willer, 2013).
EMPIRICAL/METHODOLOGICAL PROBLEMS
We describe the specific problems with each of S&G’s seven studies below. But first we note the two major flaws that run across all seven studies. First and foremost, as detailed above, the studies claim to pit MFT predictions against dyadic morality predictions. But what they really contrast are predictions from the weak harm hypothesis (harm as most central, important, etc.) with predictions from a strong Fodorean modularity MFT has never endorsed. The studies are then said to support dyadic morality over MFT, even though the actual claims of these theories – the weak modularity of MFT and the strong harm hypothesis of dyadic morality – were never tested.
Second, in these seven small-to-medium-sized studies on MTurk (Ns between 79 and 111), S&G divide their participants into two groups based on responses to a continuous 7-point ideology measure. Those who chose 1-3 (“strongly liberal” to “somewhat liberal”) are classed as liberals, and usually comprise 50-60% of the sample. Everyone else, including moderates, is lumped into the group “conservatives.” This is extremely problematic, not only because moderates are not conservatives, but because participants who do not know where to place themselves (including many libertarians, and people who are simply nonpolitical) have little choice but to pick 4, making the “moderates” a hodgepodge of political views.
To make matters worse, S&G cite Haidt (2012) as their precedent for this step:
“Consistent with past work (Haidt, 2012), we define liberals here and elsewhere as those who respond 1 through 3 on the political scale and conservatives as those who respond 4 through 7” (S&G, p. 6).
We asked S&G by email to clarify what passage in The Righteous Mind justified a dichotomous split with moderates counted as conservatives. They responded that there was no specific passage, they were just drawing on Haidt’s general arguments about differences between left and right. In other words, they had no justification for lumping moderates in with conservatives, and had cited Haidt (2012) inappropriately. With low power and this unjustifiable dichotomous split, we are not surprised that so few significant left-right differences were found. Moderates should never be included with the conservatives. Instead, effects of ideology should be investigated with the full continuous measure rather than losing information with the dichotomous split. Such correlations are not provided in the paper nor in the supplements.
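The cost of the dichotomous split is easy to demonstrate. The following sketch (illustrative Python with made-up numbers, not S&G's data; the sample size is simply chosen to fall in the range of their Ns of 79-111) shows that even when a judgment tracks the full 7-point ideology measure perfectly, correlating it with a 1-3 vs. 4-7 dichotomy attenuates the observed relationship:

```python
import random
import statistics

random.seed(1)

def pearson(x, y):
    """Population Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * sx * sy)

n = 100  # in the range of S&G's sample sizes (79-111)
ideology = [random.uniform(1, 7) for _ in range(n)]  # continuous 7-point measure
# a judgment that tracks ideology perfectly (deliberately noise-free,
# to isolate the attenuation caused by the split itself)
judgment = [0.5 * i + 1 for i in ideology]

r_full = pearson(ideology, judgment)  # the full measure recovers the effect: r = 1.0

# S&G-style split: 1-3 = "liberal" (0), everyone else = "conservative" (1)
split = [0 if i < 4 else 1 for i in ideology]
r_split = pearson(split, judgment)  # noticeably smaller, despite a perfect effect
```

Even in this best case, dichotomizing a uniform 1-7 measure at its midpoint attenuates the correlation to roughly r = .87; with realistic measurement noise on top of that, and Ns near 100, the lost information can easily turn real left-right differences nonsignificant.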
Study 1: Recalling Immorality
Study 1 shows that if you ask MTurkers for an example of an immoral act, harm comes up most frequently. This should surprise no one. It confirms the finding of Hofmann et al. (2014) that Care/harm was the most frequent MFT category in a beeper study of daily moral judgments, as well as the claim of Haidt, Koller, and Dias (1993) that educated Western groups have a largely harm-based morality. Nobody doubts the weak harm hypothesis (that harm is “central”), so the findings of study 1 are consistent with both MFT and Dyadic Morality.
There is also something odd about study 1. In the results section we find this text:
Examining participant labels revealed that 68% of participants categorized their first act recalled as harmful, 9% labeled it as unfair, 14% labeled it as disloyal, 8% labeled it as disobedient, and 1% labeled it as gross.
What do they mean by “first act recalled”? The method section states that only one act was requested. Yet in Gray’s previous write-up of this study, it was reported that three acts were requested, and all three acts were analyzed. That analysis reported an uncomfortable finding for S&G: Harm violations were most common in the first act listed, but when all three acts were analyzed together, fairness violations (including dishonesty) were more frequent than harm violations. We cannot tell if the final published manuscript simply does not report the additional acts, or if the authors ran a new study that only asked for one act and then used that new data to update the text in the final published article.
Study 2: Morality of a Hypothetical Tribe.
People rate the immorality of a hypothetical tribe’s foundation violations, and harm is rated the worst overall. Note, however, that harm violations are not rated worse than fairness violations by liberals, though this is dismissed because “unfair violations are fundamentally dyadic.” A liberal/conservative difference is found for purity, and differences might have been found for other foundations if the authors had looked at correlations with the continuous ideology measure (not a 1-3 / 4-7 split). Again, this provides evidence for the weak claim that harms are the worst actions (perfectly compatible with MFT), and no evidence at all for Dyadic Morality’s strong claim that all moral judgments reduce to harm judgments.
Study 3: X But Not Y Task
This study provides good support for the weak harm hypothesis that harm is seen as the worst, most important, or most central, with lots of other foundation differences as well. The importance ordering seems to be harm, unfair, betray, subvert, impure, just as Graham (2010) found in his dissertation’s “which is worse” task. Once again, there is no evidence for the strong version of DM.
Study 4: Correlated Response Times
Response times for harmful ratings correlate with response times for immoral ratings. This is discussed as something special and unique to harm, but the study does not even test whether similar correlations emerge for ratings of fair/unfair, pure/impure, etc. Finding higher correlations between response times for harm and immorality ratings could support the weak harm hypothesis that harms are the most prototypically immoral actions. But again no evidence is provided for dyadic morality’s claims that all moral judgments are caused by perceived harm.
Study 5: Correlated Ratings
This study employs the same unusual design employed in Gray & Keeney (2015) to make it look like all foundations correlate with each other at r=1.0, based on participants’ ratings of the same violations using different foundation adjectives, which they use interchangeably. See Graham (2015) on why this is a problem. As that paper notes, correlations between severity ratings using different adjectives tell us little about how distinct these moral concerns are. But correlations across people between moral judgments about harm and impurity violations can tell us more (e.g., how well can you predict people’s impurity judgments from their harm judgments?). The harm-impurity correlation is r=.06 for the MFQ, r=.35 for the Moral Foundations Sacredness Scale (MFSS; Graham & Haidt, 2012), r=.23 for the MFSS adjusting for overall willingness to do things for money, r=.19 for the Moral Foundations Vignettes (MFV; Clifford et al., 2015), r=.06 for MFV items selected to match on severity and arousal (Dehghani et al., 2015), and r=.29 for the participant-generated naturalistic measures used by Gray and Keeney in Study 2. These correlations vary across formats, but all are substantially lower than the misleading ratings correlations (e.g., r=1.0) in S&G’s Study 5.
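The distinction between these two kinds of correlation can be made concrete with a small simulation (a hypothetical sketch with made-up parameters, not a model of anyone's data): when two adjective ratings of the same violation both track one underlying reaction, they correlate near 1.0 regardless of how distinct the underlying moral concerns are, whereas correlations across people between judgments of different violation types expose the foundation-specific variance that the adjective-rating design conceals.

```python
import random
import statistics

random.seed(0)

def pearson(x, y):
    """Population Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * sx * sy)

n = 500
# each simulated person: a general severity tendency plus independent
# foundation-specific sensitivities (all parameters are hypothetical)
general = [random.gauss(0, 1) for _ in range(n)]
harm_spec = [random.gauss(0, 1) for _ in range(n)]
purity_spec = [random.gauss(0, 1) for _ in range(n)]

# judgments of a harm violation and a purity violation share only
# the general severity factor
harm_judgment = [g + h for g, h in zip(general, harm_spec)]
purity_judgment = [g + p for g, p in zip(general, purity_spec)]

# rating the SAME harm violation with two different adjectives
# ("harmful" vs. "immoral"): both track the same underlying reaction
rated_harmful = [r + random.gauss(0, 0.1) for r in harm_judgment]
rated_immoral = [r + random.gauss(0, 0.1) for r in harm_judgment]

r_adjectives = pearson(rated_harmful, rated_immoral)  # near 1.0
r_across = pearson(harm_judgment, purity_judgment)    # moderate by construction
```

The near-perfect adjective correlation says nothing about whether harm and purity are psychologically distinct; only the across-person correlation speaks to that question.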
Study 6: Common Currency
Study 6 presents U.S. MTurkers with violation comparisons, asking which is more immoral and then which is more harmful, unfair, disloyal, disobedient, and gross. Unsurprisingly, the study shows that for these participants, harmfulness ratings correlate most highly with immorality ratings, followed by unfair, gross, disobedient, and disloyal. This provides further evidence for the weak harm hypothesis that harm is the most important or most prototypical kind of immorality, at least for US participants. However, the authors fail to note that in their own data, by their own criteria, four of the five foundation violation terms served as common currency. Rather than showing that harm is the only common currency, the study shows that harm, unfair, gross, and disobedient are all used as types of common currency between different moral violations. Study 6 thus provides evidence against the strong claim that harm is the one true common currency, obviating the need for any other currency. One might say that this study disproves Dyadic Morality on its own terms.
Study 7: IAT Studies
Although MFT doesn’t make any a priori predictions about patterns of relations between the foundations, in just about every measure we’ve used to capture multiple moral concerns the clustering has been along the individualizing-binding distinction found in the exploratory factor analysis of the MFQ (Graham et al., 2009; see also Chakroff, 2015). So it would be surprising to us if a new measure came along that found higher correlations between harm and loyalty or between harm and purity, than between loyalty and purity. However, this is not what this study actually shows.
In the first IAT reported, harm items have the label Harmful (so far so good), disloyalty items have the label Disloyal (good), but the impurity items have the label “Immoral.” The task for these items is to distinguish them from the nonmoral items “forget,” “procrastinate,” and “boring.” So even though the stimulus items for moral are all purity violations, the task itself is about distinguishing immoral content from nonmoral content. Several studies show that harm is seen as the worst, the most immoral, so if the participants link “harmful” with “immoral” more easily than they link “disloyal” with “immoral” (which they no doubt do), then the overall d-score of the IAT would reflect that association, NOT a greater association between harm and impurity than between disloyalty and impurity. This interpretation is supported by the authors’ use of “Gross” as the label in Study 7b (and actually changing the impurity moral words to the far less moral words “disgusting,” “gross,” and “filthy”), rather than Impure, further stacking the deck for the association between “harmful” and “morality.”
This confound could be easily addressed with single-category IATs using all three labels (Harmful, Disloyal, Impure), comparing reaction times when, say, harmful and disloyal share a response key and impure is the other key, vs. disloyal and impure sharing a key when harmful is the other key. This would be a straightforward test of how these concepts implicitly relate to one another.
Overall, this study provides no support for either the strong or the weak version of the “harm is most common” hypothesis. We suggest an improved design that removes a crucial confound in these IATs; that design could provide support for the weak version of the hypothesis, but not for the strong claim that no psychological distinctions exist between the foundations.
G) CONCLUSION
In conclusion, S&G have made a useful point: all foundations are not equally central, important, or frequent in the lives of Americans, even conservative Americans. Harm is probably more central or frequent, followed by fairness. But S&G’s empirical studies had so many flaws, particularly their unjustified decision to lump moderates in with conservatives, that we cannot conclude from their studies that liberals and conservatives do not differ. Nor can we conclude that liberals and conservatives all share a harm-based “template” that is necessary and sufficient for their moral judgments. And because S&G created a straw man out of MFT and compared it with a shape-shifting version of Dyadic Morality, their article does not shed light on the validity of either model.
To learn more about Moral Foundations Theory, please click here.
References
Berlin, I. (2001). My intellectual path. In H. Hardy (Ed.), Isaiah Berlin: The power of ideas (pp. 1-23). Princeton, NJ: Princeton University Press.
Chakroff, A. (2015). Discovering structure in the moral domain. Doctoral dissertation, Harvard University.
Clifford, S., Iyengar, V., Cabeza, R., & Sinnott-Armstrong, W. (2015). Moral Foundations Vignettes: A standardized stimulus database of scenarios based on moral foundations theory. Behavior Research Methods, 1–21.
Dehghani, M., Johnson, K. M., Sagi, E., Garten, J., Parmar, N. J., Vaisey, S., Iliev, R., & Graham, J. (2015). Purity homophily in social networks. Manuscript under review.
Feinberg, M., & Willer, R. (2013). The moral roots of environmental attitudes. Psychological Science, 24, 56-62.
Fiske, A. P. (1991). Structures of social life: The four elementary forms of human relations: Communal sharing, authority ranking, equality matching, market pricing. New York: Free Press.
Fodor, J. (1983). Modularity of mind. Cambridge, MA: MIT Press.
Graham, J. (2010). Left gut, right gut: Ideology and automatic moral reactions. Doctoral dissertation, University of Virginia.
Graham, J., & Haidt, J. (2012). Sacred values and evil adversaries: A moral foundations approach. In P. Shaver & M. Mikulincer (Eds.), The Social Psychology of Morality: Exploring the Causes of Good and Evil (pp. 11-31). New York: APA Books.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S., & Ditto, P. H. (2013). Moral Foundations Theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology, 47, 55-130.
Graham, J., Haidt, J., & Nosek, B. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96, 1029-1046.
Gray, K., & Keeney, J. E. (2015). Disconfirming Moral Foundations Theory on its own terms: Reply to Graham (2015). Social Psychological and Personality Science.
Gray, K., Schein, C., & Ward, A. F. (2014). The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering. Journal of Experimental Psychology: General, 143(4), 1600-1615.
Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23, 101-124.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York: Pantheon.
Haidt, J., & Joseph, C. (2007). The moral mind: How 5 sets of innate intuitions guide the development of many culture-specific virtues, and perhaps even modules. In P. Carruthers, S. Laurence & S. Stich (Eds.), The Innate Mind, Vol. 3 (pp. 367-391). New York: Oxford.
Haidt, J., Koller, S., & Dias, M. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65, 613-628.
Hofmann, W., Wisneski, D. C., Brandt, M. J., & Skitka, L. J. (2014). Morality in everyday life. Science, 345, 1340-1343.
Koleva, S., Graham, J., Iyer, R., Ditto, P. H., & Haidt, J. (2012). Tracing the threads: How five moral concerns (especially Purity) help explain culture war attitudes. Journal of Research in Personality, 46, 184-194.
Rai, T. S., & Fiske, A. P. (2011). Moral psychology is relationship regulation: Moral motives for unity, hierarchy, equality, and proportionality. Psychological Review, 118, 57-75.
Rottman, J., Kelemen, D., & Young, L. (2014). Tainting the soul: Purity concerns predict moral judgments of suicide. Cognition, 130, 217-226.
Schein, C., & Gray, K. (2015a). The unifying moral dyad: Liberals and conservatives share the same harm-based moral template. Personality and Social Psychology Bulletin, 41, 1147–1163.
Schein, C., & Gray, K. (2015b). Making sense of moral disagreement: Liberals, conservatives and the harm-based template they share. SPSP Blog post, August 12, 2015.
Seligman, M. E. P. (1971). Phobias and preparedness. Behavior Therapy, 2, 307-320.
Shweder, R. A. (2012). Relativism and universalism. In D. Fassin (Ed.), A companion to moral anthropology (pp. 85-102). John Wiley & Sons, Ltd.
Shweder, R. A., Much, N. C., Mahapatra, M., & Park, L. (1997). The “big three” of morality (autonomy, community, and divinity), and the “big three” explanations of suffering. In A. Brandt & P. Rozin (Eds.), Morality and health (pp. 119-169). New York: Routledge.
Sperber, D. (2005). Modularity and relevance: How can a massively modular mind be flexible and context-sensitive? In P. Carruthers, S. Laurence & S. Stich (Eds.), The innate mind: Structure and contents (pp. 53-68). New York: Oxford.
Sperber, D., & Hirschfeld, L. A. (2004). The cognitive foundations of cultural stability and diversity. Trends in Cognitive Sciences, 8, 40-46.
Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46, 35-57.
Turiel, E., Killen, M., & Helwig, C. C. (1987). Morality: Its structure, function, and vagaries. In J. Kagan & S. Lamb (Eds.), The emergence of morality in young children (pp. 155-243). Chicago: University of Chicago Press.
Young, L., & Saxe, R. (2011). When ignorance is no excuse: Different roles for intent across moral domains. Cognition, 120, 202-214.