Moral Intuitions

The following is a tutorial essay for Moral Philosophy with Paul Elbourne. In effect, it is a quick agenda for an hour-long discussion on a guiding question, which for this week was ‘Is it a problem for a normative ethical theory if it is in conflict with widely felt moral intuitions?’. A free-standing and much more readable version of some of the material may eventually go up on my Substack.

After the essay is the original draft; I converted it to the present version because Paul had been encouraging me all term to write less tersely.

For this essay, I stipulate a distinction between natural and normative theories. (See Part Six of On What Matters for a defense of the idea that natural theories aren’t normative.) A natural theory gives descriptions of how things are, or of how things could be. For instance, Newtonian mechanics is a natural theory about how physical objects behave. For the descriptions of a natural theory to be successful, they must be accurate and simple. So, it is a problem for a natural theory if it is not accurate, or if it is not simple.

Newtonian mechanics is a little bit inaccurate. Inaccuracy is a relatively small problem for it; it is still very successful. Aristotelian mechanics is very inaccurate. Inaccuracy is a relatively big problem for it; it is not very successful. So, problems for theories come in degrees.

Meanwhile, a normative theory gives prescriptions about how things should be. It says what things are good or bad, wrong or right, etc. (This is its extension.) It also says why these things are so. (This is its intension.) For instance, Savage’s decision theory is a normative theory about how you should decide among choices with uncertain outcomes. For the prescriptions of a normative theory to be successful, they must be accepted and complied with. So, it is a problem for a normative theory if it is not accepted, or if it is not complied with. It seems right to say: ‘The problem with Savage’s decision theory is that nobody accepts it!’ (and similarly for ‘complies with’). Like inaccuracy, the problems of acceptance and compliance can come in degrees.
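To make this concrete, here is a minimal sketch of the kind of prescription Savage-style decision theory issues: among acts with uncertain outcomes, choose one that maximizes expected utility. The sketch is in Python, and the acts, states, probabilities, and utilities are invented for illustration (in Savage’s actual theory, probabilities and utilities are derived from preference axioms rather than stipulated).

```python
# Toy Savage-style decision: pick the act with maximal expected utility.
# All names and numbers below are illustrative assumptions, not part of the theory.

probabilities = {"rain": 0.3, "shine": 0.7}  # subjective probabilities over states

utilities = {  # utility of each act in each state
    "take umbrella": {"rain": 5, "shine": 3},
    "leave umbrella": {"rain": -10, "shine": 6},
}

def expected_utility(act: str) -> float:
    return sum(probabilities[state] * u for state, u in utilities[act].items())

# The theory's prescription: you should perform the act below.
best_act = max(utilities, key=expected_utility)
print(best_act)  # -> "take umbrella" (expected utility 3.6 vs. 1.2)
```

A natural analogue of this theory would merely describe or predict which act you choose; the normative version adds that you are making a mistake if you choose otherwise.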

Some theories about ethics are natural theories, while others are normative. A natural ethical theory might ‘describe our moral sense’ (Rawls, p. 41). A simple one might describe our actual judgements about what is good or bad, wrong or right, etc. A more sophisticated one might describe ‘what we would be motivated to do if we were vividly aware of the relevant facts’ (attributed to Darwall in ‘The Unimportance of Internal Reasons’ in On What Matters). Meanwhile, a normative ethical theory might give us external reasons to act in certain ways. It does this by saying what consequences are good or bad, what actions are wrong or right, etc., and by telling us why they are so. Here is an illustration of the difference. A natural ethical theory might say: ‘If you knew the facts, you would want to do X.’ The normative version of that ethical theory would say: ‘If you knew the facts, you would want to do X, so do it!’ (See ‘Normative Beliefs’ in On What Matters.)

Notice that both of these theories might have problems with accuracy; for instance, perhaps if you knew all the facts, you would want to do Y, not X! The same goes for simplicity. But the normative ethical theory alone faces the problems of acceptance and compliance. This is because the success of a description does not depend on whether it guides action, but the success of a prescription does. So, acceptance and compliance give us no good reason to reject a natural theory. They might also give us no good reason to reject a normative theory; however, they would still be problems for it. If these are big problems for a normative ethical theory, its advocates might need to spend quite a bit of effort promoting acceptance or compliance. This effort might include ingraining new patterns of thought, or establishing social and political institutions. So these problems can be overcome, even if they are big.

We’ve defined normative ethical theories as things which might give us external reasons to act in certain ways, and we’ve contrasted these with natural ethical theories. We’ve discussed potential problems for normative ethical theories, such as failures of acceptance, compliance, accuracy, and simplicity. And we’ve noted that these problems come in degrees. For the rest of this essay, I will write T to stand for a generic normative ethical theory.

A moral intuition is the way something seems; in particular, whether that thing is good or bad, wrong or right, etc., and perhaps why it is so. For instance, I have the moral intuition that it would be seriously wrong not to donate my kidney to save someone’s life, even if they are a stranger. I also have the moral intuition that it would be good if people were more satisfied with their lives. Unfortunately, not many people share the first moral intuition. It is not very widely felt. Fortunately, though, many people share the second moral intuition. It is very widely felt. For the rest of this essay, I will abbreviate the class of widely felt moral intuitions as WFMI.

I feel some of my moral intuitions more intensely, and I feel others more mildly. For instance, I feel very intensely that we should go to great lengths to avoid killing humans, but more mildly that we should do the same for insects.

One important type of moral intuition is about whether something is impermissible, strictly permissible, or obligatory. The first means that it would be very wrong to do it; we must not do it. The last means that it would be very wrong not to do it; we must do it. The middle one means that it is neither very wrong to do it, nor very wrong not to do it; we may do it or not. I have the moral intuition that some things are moral dilemmas; that is, some things are impermissible when described one way, but obligatory when described another. For instance, Sartre’s pupil faced a moral dilemma over whether to defend his country or to stay home for his mother. It seems impermissible for him to leave his mother, but it seems obligatory for him to defend his country. In the case of a moral dilemma, there is a strong conflict between two of my moral intuitions: one says that an action is obligatory, while another says that it is impermissible, and it’s impossible to satisfy both.

Strong conflicts might also arise between some normative ethical theory T and WFMI: for instance, T might say that it’s obligatory to kick every baby that you see, or that it’s impermissible to help your mother if she collapses.

Next, there are asymmetric conflicts. T conflicts asymmetrically with WFMI when WFMI says that something is strictly permissible, but T either says that it’s obligatory or says that it’s impermissible. For instance, T might say that it’s obligatory to donate your spare kidney, or that it’s impermissible to eat meat. Notice that WFMI does not conflict asymmetrically with T about these cases; that is why this type of conflict is asymmetric.

When WFMI does conflict asymmetrically with T, I say that T conflicts weakly with WFMI. Here, T says that something is strictly permissible, but WFMI either says that it’s obligatory or says that it’s impermissible. For instance, T might say that it’s strictly permissible to save your child over saving a stranger, or that it’s strictly permissible to defile a corpse. I call these weak conflicts with WFMI because T will never say that we are very wrong for acting on WFMI.

Finally, I consider minimal conflicts. These are any disagreements between T and WFMI about what things are good or bad, wrong or right, etc., or about why we should believe that they are so. There is only one type of disagreement that this excludes, called self-effacement. This is a specific disagreement about why things actually are good or bad, wrong or right, etc. Self-effacement arises when T says that you should reject T in favor of your moral intuitions. In this case, T specifically says that you should follow your moral intuitions. So, it seems like an abuse of the term ‘conflict’ to say that T conflicts with your moral intuitions. Because of this, we can’t say that T conflicts with WFMI if it is self-effacing in favor of WFMI.
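Since strong, asymmetric, and weak conflicts turn only on the three deontic statuses, the taxonomy can be restated as a small decision procedure. Here is a sketch in Python; the encoding is my own, and it deliberately leaves out minimal conflict, which also concerns the reasons why rather than deontic statuses alone.

```python
from enum import Enum

class Status(Enum):
    IMPERMISSIBLE = 1
    STRICTLY_PERMISSIBLE = 2
    OBLIGATORY = 3

def conflict_type(t: Status, wfmi: Status) -> str | None:
    """How does theory T conflict with WFMI over a single act?"""
    if {t, wfmi} == {Status.OBLIGATORY, Status.IMPERMISSIBLE}:
        return "strong"       # one demands the act, the other forbids it
    if wfmi is Status.STRICTLY_PERMISSIBLE and t is not Status.STRICTLY_PERMISSIBLE:
        return "asymmetric"   # T demands or forbids what WFMI merely permits
    if t is Status.STRICTLY_PERMISSIBLE and wfmi is not Status.STRICTLY_PERMISSIBLE:
        return "weak"         # T merely permits what WFMI demands or forbids
    return None               # no conflict at the level of deontic status

# e.g., T: donating a spare kidney is OBLIGATORY; WFMI: STRICTLY_PERMISSIBLE
print(conflict_type(Status.OBLIGATORY, Status.STRICTLY_PERMISSIBLE))  # -> asymmetric
```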

I’ve defined strong, asymmetric, weak, and minimal conflicts between a normative ethical theory and WFMI. Earlier, I defined some moral intuitions as intense, and others as mild. I extend that definition to say that a conflict is more intense when it is against a more intensely felt WFMI, and milder when it is against a more mildly felt WFMI. As I’m using the terms, the intensity and the strength of a conflict are different. For instance, T might say that it’s strictly permissible to save your child over a stranger. This was an earlier example of a weak conflict; however, it is a very intense one, since most people feel very intensely that saving your child over a stranger is not strictly permissible, but obligatory.

Here is one version of an argument (ACCURACY) that it is a problem for a normative ethical theory if it conflicts with widely felt moral intuitions:

(1) It is a problem for T if its extension is probably inaccurate.

(2) T’s extension is probably inaccurate if T is in conflict with WFMI.

∴ (*) It is a problem for T if it is in conflict with WFMI.
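Before considering defenses, note that ACCURACY is valid simply as a hypothetical syllogism. Here is its shape checked in Lean, with schematic propositions standing in for ‘T is in conflict with WFMI’, ‘T’s extension is probably inaccurate’, and ‘it is a problem for T’:

```lean
-- Shape of ACCURACY: (1) Inaccurate → Problem, (2) Conflict → Inaccurate,
-- therefore (*) Conflict → Problem.
example (Conflict Inaccurate Problem : Prop)
    (h1 : Inaccurate → Problem) (h2 : Conflict → Inaccurate) :
    Conflict → Problem :=
  fun c => h1 (h2 c)
```

So any attack on the argument must target a premise rather than the inference.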

If I want to defend T, I could accept ACCURACY but argue that T is not in conflict with WFMI; this would mean that ACCURACY poses no problems for T in particular. I could also take a more general strategy by attacking (2): perhaps WFMI are merely results of evolutionary pressures, and thus unlikely to be a guide to the truth. This is a sort of evolutionary debunking argument (EDA). (If an EDA isn’t selective enough to undercut only actual moral intuitions, it risks undermining morality, or even human reasoning, in general; this might mean that advocates of EDAs are companions in guilt. See Singer for an EDA which Sandberg & Juth argue isn’t selective enough.) If this sort of attack is successful, then ACCURACY fails to show (*). This would mean that ACCURACY poses no problems for any normative ethical theory. However, this might not mean (*) is false, since there might be other reasons to believe something like it. These other reasons might still pose problems for normative ethical theories. I will briefly argue for two.

One reason to believe something like (*) is COMPLIANCE:

(C1) It is a problem for T if we won’t comply with it.

(C2a) We won’t comply with T if we have motivation against compliance.

(C2b) We won’t comply with T if we lack motivation for compliance.

(C3a) We have motivation against compliance if T conflicts strongly with WFMI.

(C3b) We lack motivation for compliance if T conflicts asymmetrically with WFMI.

∴ (C) It is a problem for T if it conflicts strongly or asymmetrically with WFMI.

We have (C1) from our discussion of problems for normative ethical theories, and (C2a) seems pretty clear if there are no overriding considerations. (C2b) is similar; note that here T calls something either obligatory or impermissible, and it seems difficult to always or never do something if there are no overriding considerations. By introducing such considerations, advocates of T can block these two premises and thereby defeat the COMPLIANCE problem. (See the last chapter of Kagan, which is dedicated to dealing with the compliance problem.) As I suggested in the first section, COMPLIANCE is defeasible.

For (C3a), recall that strong conflict with WFMI means that WFMI calls something impermissible or obligatory, i.e., very wrong to do or fail to do. I assume that this is motivating to the extent that we intensely share the WFMI. So, if the moral intuition is not very intense, or not very widely felt, then COMPLIANCE is less of a problem. COMPLIANCE is a problem which comes in degrees.

For (C3b), recall that asymmetric conflict with WFMI means that T calls something impermissible or obligatory, i.e., very wrong to do or fail to do. It seems unlikely that we will always do or fail to do something if we don’t have some reason for it. Since asymmetric conflict means that WFMI says this thing is strictly permissible, WFMI doesn’t provide this reason. So, in the absence of some other consideration, this premise seems clear. Note that this means a COMPLIANCE problem arising from asymmetric conflict is much more defeasible than one arising from strong conflict. For instance, T may have a compliance problem because it is too demanding (e.g., in requiring us to donate our spare kidneys). This is easier to overcome than the compliance problem which arises if T requires us to do something that we and most of society feel is very intensely immoral (e.g., killing someone for the greater good).
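As with ACCURACY, the skeleton of COMPLIANCE is valid: the conclusion follows from (C1)–(C3b) by cases on the disjunction. A schematic check in Lean:

```lean
-- Shape of COMPLIANCE: case on strong vs. asymmetric conflict, then chain
-- through motivation and compliance to a problem.
example (Strong Asym MotivAgainst MotivFor Comply Problem : Prop)
    (c1 : ¬Comply → Problem)
    (c2a : MotivAgainst → ¬Comply)
    (c2b : ¬MotivFor → ¬Comply)
    (c3a : Strong → MotivAgainst)
    (c3b : Asym → ¬MotivFor) :
    Strong ∨ Asym → Problem := fun h =>
  match h with
  | Or.inl s => c1 (c2a (c3a s))
  | Or.inr a => c1 (c2b (c3b a))
```

Again, the defeasibility discussed above enters by denying a premise (with overriding considerations), not by faulting the inference.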

Another reason to believe something like (*) is ACCEPTANCE:

(A1) It is a problem for T if we will not accept what it says to accept.

(A2) We won’t accept what T says to accept if doing so means rejecting our moral intuitions.

(A3) Acceptance means rejecting our moral intuitions if T conflicts with WFMI.

∴ (A) It is a problem for T if it conflicts with WFMI.

Like (C1), we have (A1) from our discussion of problems for normative ethical theories. For (A2), I assume that we need some overriding consideration to reject our moral intuitions, especially if they are intensely felt. This makes ACCEPTANCE defeasible as well, since advocates of T might be able to provide such considerations. For (A3), I assume that conflict means at least minimal conflict. Recall that this means that T disagrees with WFMI about, at the very least, what we should believe about why some things are good or bad, right or wrong, etc.; T and WFMI might also have other conflicts between their extensions. To accept T when it disagrees with WFMI just means to reject our moral intuitions, at least for all of those people who share them.

To wrap up, I consider a few edge cases and complications. The first occurs for COMPLIANCE if T or WFMI admit moral dilemmas. Firstly, notice that the main argument still works in this case. More interestingly, if T yields a moral dilemma, then we have an even quicker argument for COMPLIANCE: it’s impossible to comply with T when it yields a moral dilemma, since we are both obligated to do something and obligated not to do it. So, no matter what, we fail to comply with T. So of course it has a compliance problem!

Another complication for COMPLIANCE is when T conflicts asymmetrically with WFMI by calling something obligatory (or impermissible) when WFMI calls it supererogatory (or suberogatory). That is, WFMI says that the act is strictly permissible, but better than an obligatory act or worse than an impermissible one. In this case, it seems like WFMI does provide some reason for doing or for avoiding the act. Firstly, an act seeming supererogatory (or suberogatory) may not be enough of a reason to always do (or always avoid doing) the act. Secondly, though, we can amend our conception of conflict to deal directly with reasons instead of the principal deontic categories. In particular, strong conflict obtains when T and WFMI give opposing reasons; asymmetric conflict obtains when T gives a reason while WFMI doesn’t; and weak conflict obtains when T gives no reason while WFMI does.
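Under this amendment, the taxonomy from before becomes a comparison of reasons rather than deontic statuses. Here is the corresponding sketch; the integer coding (+1 for a reason toward the act, -1 for a reason against it, 0 for no reason) is my own illustrative convention.

```python
def conflict_by_reasons(t_reason: int, wfmi_reason: int) -> str | None:
    """Amended taxonomy: compare the reasons T and WFMI give for an act.
    Coding: +1 = reason for, -1 = reason against, 0 = no reason."""
    if t_reason != 0 and wfmi_reason == -t_reason:
        return "strong"       # T and WFMI give opposing reasons
    if t_reason != 0 and wfmi_reason == 0:
        return "asymmetric"   # T gives a reason while WFMI doesn't
    if t_reason == 0 and wfmi_reason != 0:
        return "weak"         # WFMI gives a reason while T doesn't
    return None               # aligned reasons, or neither gives one

# A supererogatory act: WFMI gives some reason toward it (+1) even though it is
# strictly permissible, so T giving no reason (0) is now only a weak conflict.
print(conflict_by_reasons(0, 1))  # -> weak
```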

An edge case for ACCEPTANCE is self-effacing theories. Notice that the main argument works because it deals with accepting what T says to accept, rather than with accepting T itself. (Of course, for theories which aren’t self-effacing, these amount to the same thing.) Interestingly, though, this raises the idea that ACCEPTANCE is a particular type of COMPLIANCE; namely, complying with T’s prescription to accept something. This can be developed with the amended conception of conflict.

A final complication for ACCEPTANCE is the worry that it is too broad. In particular, the only theories which avoid ACCEPTANCE are those that don’t even minimally conflict with WFMI. But there’s good reason to think that WFMI are extensionally incorrect. For instance, the WFMI of the past may have included the permissibility of slavery, or the inferior moral worth of various groups of people. If the WFMI of so many eras are so flawed, it’s difficult to believe that our current WFMI are somehow flawless. But if our current WFMI are extensionally flawed, then so is any theory which self-effaces in favor of our current WFMI! Such a theory also seems incapable of revising our moral intuitions, which is something we might seek from a moral theory. These two problems seem worse than ACCEPTANCE; so, perhaps ACCEPTANCE is a good problem to have.

This conclusion is more or less right. However, recall again that ACCEPTANCE comes in degrees. While a little bit of this problem is the price for avoiding much worse problems, this problem can also become severe for moral theories that diverge very far from our current WFMI. So, ACCEPTANCE still represents a genuine problem.



Moral Intuitions (Original)

Here is one version of an argument (CONFLICT) that it is a problem for a normative ethical theory T if it conflicts with widely felt moral intuitions (WFMI):

(1) It is a problem for T if it is probably extensionally incorrect. (I.e., wrong about what things are right/wrong, good/bad, etc. This might not automatically disqualify T; e.g., it might still be useful, cf. Newtonian mechanics as a physical theory.)

(2) T is probably extensionally incorrect if it is in conflict with WFMI. (E.g., extensional correctness might be determined by moral judgements in reflective equilibrium, which strongly-held WFMI are likely to persist through.)

(3) So, it is a problem for T if it is in conflict with WFMI.

To defend some particular theory T (e.g., Mill’s utilitarianism), one might accept CONFLICT but argue that T doesn’t conflict with WFMI. A more general line is to attack (2) with an evolutionary debunking argument (EDA). (If an EDA isn’t selective enough to undercut only actual moral intuitions, it risks undermining morality, or even human reasoning, in general; this might mean that advocates of EDAs are companions in guilt. See Singer for an EDA which Sandberg & Juth argue isn’t selective enough.) If successful, this means CONFLICT fails to establish (3). However, this would not refute (3), since there might be other reasons to believe it. Here is a sketch of two:

(C1) It is a problem (COMPLIANCE) for T if it is difficult to comply with.

(C2) T is difficult to comply with if it is in strong conflict with WFMI.

(A1) It is a problem (ACCEPTANCE) for T if it is difficult to accept.

(A2) T is difficult to accept if it is in weak conflict with WFMI.

Note that these are two distinct problems: an implication of some T might be easy to comply with but difficult to accept (e.g., ‘everything is permitted’) or vice versa (e.g., ‘we are morally obliged to give up meat’). However, as we’ll see, acceptance might be a special type of compliance.

Establishing (C1) and (A1)

On my usage, a positive theory describes how things are or could be, while a normative theory prescribes how things should be. (See Part Six of Parfit for a defense of this usage, i.e., that natural theories are never normative; also, what I call prescriptions are more specifically external reasons.) So, a positive ethical theory may ‘describe our moral sense’ (Rawls, p. 41), i.e., our ‘judgements in reflective equilibrium’ (Rawls, p. 43), or perhaps ‘what we would be motivated to do if we were vividly aware of the relevant facts’ (attributed to Darwall in ‘The Unimportance of Internal Reasons’ in On What Matters). But this is not yet normative; a parallel normative theory would prescribe acting in accordance with that moral sense. (See ‘Substantive Subjective Theories’ in On What Matters.) Plausibly, a successful description only needs accuracy and simplicity. CONFLICT attacks the former. But I take it that a successful prescription needs, perhaps among other things, to be accepted and then complied with. It seems right to say: ‘The problem with T is that nobody accepts it!’ (or similarly with ‘complies with’). Presumably, if T recommends x, it’s a problem on T’s own terms if people don’t accept that they should do x, or if people don’t end up doing x. So, on this understanding of normativity, it is a problem for a normative ethical theory if it will not get broad compliance, or if it will not get broad acceptance.

So, to establish (C1) and (A1), we must bridge from (a) T is difficult to accept to (b) T will not get broad acceptance (and same for compliance). This is fairly simple: by (a) I simply mean that by default T probably will not be broadly accepted; i.e., it’s very likely by default that (b) will obtain. The relative weakness of this link means that the problems I raise are defeasible. Advocates of T might need to spend quite a bit of effort promoting acceptance or compliance. This might range from ingraining patterns of thought to establishing social and political institutions that encourage acceptance or compliance.

Establishing (C2) for strong or asymmetric conflict

By strong conflict between T and WFMI, I mean some action x being obligatory under one but impermissible under the other. This reading comes from the intuition that two principles don’t really conflict if one can always comply with both simultaneously. Of course, if there are moral dilemmas (e.g., if on both T and WFMI, Sartre’s pupil is obligated to defend France but also to stay home for his mother), then T and WFMI are in strong conflict with themselves as well as each other; but this edge case supports the idea that T is difficult (impossible) to comply with. More generally, I take it that people are strongly motivated toward acts which are obligatory under WFMI (e.g., calling an ambulance if a parent collapses), and strongly motivated against acts which are impermissible under WFMI (e.g., kicking a baby). Now, if T strongly conflicts with WFMI on x, then either x is impermissible under T but people are strongly motivated toward it, or x is obligatory under T but people are strongly motivated against it. Either way, complying with T requires overcoming this strong motivation from WFMI. This is difficult because, by default, I’m likely to act on my strong motivations, and very unlikely to act against them. This establishes (C2) and thus COMPLIANCE for strong conflict.

Strong conflict is sufficient for COMPLIANCE, but not necessary. We can asymmetrically weaken our reading of ‘T is in conflict with WFMI’ such that it is also satisfied by either some act y being obligatory under T but strictly permissible under WFMI (e.g., donating a kidney), or else being impermissible under T but strictly permissible under WFMI (e.g., consuming animal products). This reading is asymmetric because WFMI would not be in conflict with T on y. (Compare, in set theory, ‘GCH is in conflict with ZF on AC’ vs. ‘ZF is in conflict with GCH on AC’.) This reading also sounds reasonable to my ear. Now, if T asymmetrically conflicts with WFMI on y, then either y is obligatory under T but people aren’t morally motivated toward it, or y is impermissible under T but people aren’t morally motivated against it. I’m unlikely to always do (avoid) something if I don’t have any default reason to do (avoid) it. And sometimes, I’ll even have strong nonmoral reasons against doing (avoiding) it. Either way, complying with T probably requires building sufficient motivation if it doesn’t come from elsewhere. (See the last chapter of Kagan, which is dedicated to dealing with the compliance problem.) So T is difficult to comply with, so (C2) and thus COMPLIANCE obtains for asymmetric conflict as well.

However, asymmetric conflict does seem necessary for COMPLIANCE. We can weaken our reading such that ‘T is in conflict with WFMI’ is satisfied by some act z being strictly permissible under T but either obligatory under WFMI (e.g., saving your child over a stranger) or impermissible under WFMI (e.g., defiling a corpse). We can’t establish (C2) with this weak reading, since we comply with T whether or not we do z (since it’s strictly permissible under T).

Establishing (A2) for at least minimal conflict

ACCEPTANCE applies for weak conflict. We are strongly attached on the basis of WFMI to the claims that z is obligatory or that z is impermissible, and accepting T requires rejecting these claims. By default, we are unlikely to reject these claims. So, T is difficult to accept. This establishes one version of (A2) and thus ACCEPTANCE.

We can push ACCEPTANCE further, but we need to be more careful. At first pass, accepting T looks like a special case of complying with T’s prescription to accept it. This assumes that every theory prescribes its own acceptance. However, some theories may be self-effacing (see ‘How S Might Be Self-Effacing’ in Reasons and Persons); that is, they might not prescribe their own acceptance, and might even prescribe their own rejection. So, let φ* be the beliefs that T prescribes adopting. It might be that most normative ethical theories prescribe their own acceptance (i.e., φ* is identical to T). But it seems plausible that some ethical theories do not have this feature. It might be permissible or even obligatory on some T to retain WFMI (this includes rejecting the fact that WFMI are ultimately grounded in T). In this case, where φ* and WFMI are identical, it seems like an abuse of language to say that T and WFMI are in any sort of conflict. But so long as this isn’t the case, there is some minimal amount of conflict between T and WFMI. Then, we can argue for (A2) as above, based on the difficulty of rejecting intuitions to which we are strongly attached. Thus, ACCEPTANCE is a problem for any T which even minimally conflicts with WFMI.

Further issues

Here is one worry about ACCEPTANCE. We might think, on the basis of something like pessimistic induction, that WFMI are surely wrong. Some (perhaps many) WFMI of the past were wrong by our lights today, so the WFMI of today will likely be wrong by the lights of future generations. One obvious candidate for such a WFMI is the permissibility of eating meat. And it seems right that the WFMI of future generations will be more considered than ours, and thus more likely to be correct. Thus, WFMI are probably wrong about some things. But to avoid ACCEPTANCE entirely, a normative ethical theory must prescribe its own self-effacement in favor of our WFMI; and so such a normative ethical theory is probably wrong about some things. It also seems unable to correct our current behaviors, which might be something we want from a normative ethical theory. These problems might be much worse than ACCEPTANCE. So, ACCEPTANCE is a good problem to have.

I think the spirit of that worry is correct. But ACCEPTANCE is not a good problem to have. To address this, we must recognize that both the problems I’ve raised come in degrees. The degrees are not determined by what I’ve called the strength of conflict between T and WFMI. Rather, they are determined by what I’ll call intensity: WFMI may range anywhere from very mildly felt to very intensely felt. If T conflicts with intensely held WFMI, then the arguments for the two problems I’ve raised become stronger, and they become bigger problems. Note that strength and intensity are two separate considerations: if T says that it’s strictly permissible to be indifferent between your child and a stranger, then this is a relatively weak but very intense conflict with WFMI. So, COMPLIANCE and ACCEPTANCE are more serious problems when T is in especially intense conflict with WFMI, and relatively trivial problems (perhaps even good problems to have) when it is in only mild conflict with WFMI. Even if they are serious problems for T, they aren’t automatically good grounds to think that T is false. But as I’ve discussed above, they are still problems for the normative, prescriptive success of T.

Recap

It is a problem for a normative ethical theory T if it is in conflict with widely felt moral intuitions. One argument to this effect is CONFLICT. We can situate the debate about intuition and the methodology of normative ethics here. But even if CONFLICT is undercut by that debate, there might be independent lines toward its conclusion. I advance two of them: COMPLIANCE and ACCEPTANCE. Here are revised versions of the arguments. (Of course, read ‘difficult’ as ‘difficult for most people’ and ‘we’ as ‘most people’.)

(C1.1) It is a problem for T if it will not get broad compliance.

(C1.2) T will not get broad compliance if it is difficult to comply with.

(C2.1) T is difficult to comply with if we must overcome or create strong motivations to do so.

(C2.2) We must overcome strong motivations to comply with T if it conflicts strongly with WFMI, and create strong motivations to comply with T if it conflicts asymmetrically with WFMI.

(COMPLIANCE) It is a problem for T if it conflicts strongly or asymmetrically with WFMI.

(A1.1) It is a problem for T if φ* will not get broad acceptance.

(A1.2) φ* will not get broad acceptance if it is difficult to accept.

(A2.1) φ* is difficult to accept if we must reject strong attachments to do so.

(A2.2) We must reject strong attachments to accept φ* if T conflicts even minimally with WFMI.

(ACCEPTANCE) It is a problem for T if it conflicts even minimally with WFMI.
