BillB left some thoughtful comments about morality and well-being on my recent post, “Naturalistic Atheism Is An Extraordinarily Strange and Unlikely Worldview.” I started to answer him there, but it turned into a long blog post’s worth of material. Since I started out addressing it in the second person to BillB, I’m posting it here in that form, but this is for all to read and respond to.
Greetings, BillB, and thanks for commenting.
Your idea of right is that it is “that which increases the well being of humans (or other sentient beings);” and wrong is the opposite of that: that which produces gratuitous suffering.
That’s a commonly held position. I just want to introduce you to some of its complications.
Some Things We Agree On
First, though, we agree on certain moral facts. I do not think a person needs to be a believer in God to know these moral facts to be true, or to act morally in accord with those facts. My view is that a belief in God is required to be able to explain how or why they are true. A person can be moral with or without that belief, but without it no one can explain why morality is good or necessary. They just know that it is.
That’s background. I need to restate it often because this argument is frequently misunderstood as saying “Atheists can’t be moral.”
What Makes Human Well-Being Not Just Human Well-Being?
Your biggest problem is in equating “right” with “promotes human well-being.” As I understand it, that view smuggles in an unexplained layer: “Right is what promotes human well-being, and human well-being is right.”
I think your argument makes that hidden move, and when it does, it becomes tautological; you can just drop the word “right” completely out of it: “Promoting human well-being promotes human well-being.” Unless you can explain what makes human well-being right, you aren’t explaining much about ethics. You’re just talking about well-being.
Good for What?
You can say, “We all know that human well-being is good.” I would agree. We know it. Good for what? I think you might say, for human pleasure, fulfillment, absence of pain, etc. Those are practically synonymous with well-being, though, so they do little more than restate the problem. We know what makes human pleasure or the absence of pain desirable to humans, but that doesn’t yet explain what makes it right.
Now at this point I’ve heard people say (with a straight face!) “Well, if you don’t know that it’s right, then there’s something very deeply wrong with you!” They don’t realize that my position is that we all know that it’s right, and that I can explain what makes it right, but I don’t think you can. You can explain that it’s desirable, but not that desirability makes it right.
Making All the Other Moral Views Wrong
Maybe desirability actually equals rightness, and vice versa. I’ve heard some people try that approach. But that does a strange thing: it turns a moral term into a pleasure term; a moral concept into a pleasure concept. If that’s what “right” is, then pretty much every civilization everywhere at all times has been wrong about “right;” for the normal conception of “right” includes the idea that it’s right to do right regardless of the pain or pleasure consequences that may be predicted from that action. (See Joyce, cited here.)
The Utilitarian Calculus
There’s also a raft of problems associated with what’s called the utilitarian calculus. Is it always right to increase the pleasure of sentient beings? Well, what if Smith’s enslaving Jones and Wilson increases his pleasure tenfold? On that arithmetic, it seems his act could only be wrong if he enslaved at least ten men, not two. You might object to the way I do the arithmetic here, but if you do, you’ll demonstrate my point anyway: no one knows the right equation, and there’s something obviously objectionable about even asking the question in the first place. But your theory of increasing pleasure seems to require asking it in some form or another.
Easy Cases Aren’t Much Help
Sam Harris tries to move past that problem (and also a previously mentioned one, “But you know what’s right!”) by pointing out that we all know it’s better to be healthy than to be sick. Sure. Easy cases are easy. Is it better for a thousand people to be more healthy based on cruel and involuntary medical experimentation done upon a hundred? I don’t think so, and neither do you.
But how about half a million people being much healthier based on long, torturously cruel research done on ten screaming six-year-old children? The arithmetic is clear: if greater well-being is right, then torture the children! But we all know that’s wrong. The whole ethical foundation is wrong.
You might say that there’s something about the children’s suffering that multiplies their weight in the moral calculus precisely because it is gratuitous suffering. But one could argue in return, “How is it gratuitous if it helps so many hundreds of thousands of people?”
“My Pleasure Counts More Than Your Pleasure”
Later you add,
I consider well being to mean simply “pleasure” and gratuitous suffering to mean “pain”; both in the straightforward physical sense.
This is hardly as straightforward as you suppose. What’s more pleasurable: a Bach Brandenburg Concerto or a heavy metal song? For some people it’s the latter. I would say they don’t know what real musical pleasure and enjoyment are. The world would contain more real pleasure if we would all listen to the greats, and it would contain less false pleasure if people would quit listening to hip-hop. My pleasure is more real than yours.
Okay, I don’t actually believe that! I wrote it in the first person because I wanted you to catch it with the full snobbish force of a first-person statement. The point is, even defining pleasure is difficult, which makes it a very hard standard to use for “right.” The example could be extended to other areas of life even more consequential than music.
How Do You Balance Well-Being Points In Actual Tough Cases?
You think, though, that problems like these can be solved:
But in my experience many of these contrived scenarios ignore factors that make the proposed trade not so “justified” after all (e.g. debilitating guilt for the “many”; reduced future trust in others; a slippery-slope tendency to allow greater evil in future; a failure to truly consider all available options; etc).
What about the trade-off between a woman’s desire to control her own body, and the baby’s place in the world of human beings? How will you calculate that on your principle? Do you know the right number to assign to each?
What about gay couples’ desire to marry, and the damaging effect on future generations that could come from diluting the meaning of marriage and family today? Do you know what numbers to assign to each of those? If your point is that no one knows, that’s my point, too! Yet everyone seems to be saying there’s a moral dimension to allowing gay marriage. How do you make that computation?
I expect you’ll say you have an opinion. I also expect that upon examination we’ll find that some of it is based on arbitrary assignments of value.
What if Donald Trump decides to put all Muslim immigrants through extreme vetting? Many people think that would be wrong. How do you calculate the costs and benefits there? It’s one thing to say it would be imprudent, or poor foreign policy, or ineffective, or disrespectful of Muslims, or lots of other things.
But how do you count up the points to call it a net reduction in human well-being — especially since part of the calculus must include an important unknown: the extremely low-probability but extremely high-impact risk that any of those immigrants might be planning to bomb a few thousand Americans?
These things aren’t the least bit obvious on a well-being-based moral calculus. And they’re not trivial, either.
Objectivity Without Explanation
So now I think I’m in a position to answer your statement, “You might accuse me of making up these definitions arbitrarily, but IMO everyone accepts them implicitly.” I think we all accept them implicitly as a heuristic for thinking through some of our more obvious moral decisions — since reducing pain and increasing well-being are two concepts that map onto “the good” pretty successfully for obvious moral situations, if not for all cases. As you say,
But if it were possible to objectively and reliably decide between actions that cause pleasure and those that cause pain, why would anyone choose the latter? I think there is enough common ground among people to formulate a useful moral model.
Indeed. When it’s easy, it’s easy, and most of the time there’s a good match between those kinds of decisions and the ones we call “moral.” But they don’t define morality either exhaustively or with complete accuracy.
What Does It Actually Mean That It’s Human Nature?
You try to root the goodness of human well-being in objective human nature. This is harder than you might suppose. You say,
I can’t help but try to avoid suffering because it is a core part of what makes me human. And all of us are in the same boat.
If we can’t help it, what makes it a moral thing rather than a determined/driven thing?
“Objective” and “Intrinsic”
Besides, this is just a way of saying, “we know objectively that humans have a preference for well-being;” which is true, but it doesn’t add anything to the argument except for the word “objective.” I could have inserted “objective” in all the right places above and it wouldn’t have changed a thing.
I don’t take “intrinsic” to mean that some action could be good in a vacuum, in and of itself. In a universe devoid of sentient beings there can be no such thing as a “good” (or “evil”) action. An action can only be good to someone, and what makes it good is precisely that it increases well-being or avoids gratuitous suffering.
Christianity posits God as something not quite like a sentient being, for he is a being of mind and thought and emotion and triune relationship but not of sensation — but this could get very complicated so I won’t go any further with it. Suffice to say that Christianity understands goodness to be an aspect of the essence of God, before and beyond all time and space. This is not goodness in a vacuum, but no other sentient creature is necessary for it to be real.
But the question had to do with this word “intrinsic.” Normally, if you perform a free act of self-sacrificial love, our view of it would be that there is goodness not only in the effect but in the very action itself. The act is good not just because of its effects but because it is in itself good. That’s the normal human way to view such things. That’s what I’m referring to when I speak of acts being intrinsically good. If you think the goodness is entirely in the effect, and none of it in the act, then you are thinking counter to the way humans have thought about such things for a very, very long time.
Force-Fitting Human Morality Into An Equation
Human well-being matches a lot of what we intuitively know about what’s right and wrong — in the easy cases. It almost makes sense as a moral explanatory principle. The more you try to get it to fit all of human experience, though, the more you have to pound it into shape with a sledgehammer. You have to force-fit it. (Or else you have to pound human experience into another shape to fit the principle, which is even worse.)
There’s a more natural-fitting option for those who want to know about it.