What’s wrong with torturing sloths for fun?

Most of us are inclined to believe that there are some things that just are right, and some things that just are wrong. It seems true to say that “torturing sloths for fun is wrong”. But what is it that makes it wrong? This is a topic I will go on about at great length if given half a chance, but here I will only outline some of the main theories (and waffle on about them in future posts).

1. Cultural relativism

Cultural relativism argues that “torturing sloths for fun is wrong” because of certain cultural norms that make it morally unacceptable to torture a sentient being just for the fun of it. You might think of it in terms of manners. In some cultures people shake hands when meeting a new person, in others people place their hand on their heart and bow. There is no right or wrong here, just different norms. In the same way, those who favour cultural relativism are inclined to argue that there is no moral truth independent of culture. What makes “torturing sloths for fun is wrong” true is that it is culturally taboo to be the sort of person who tortures sloths.

2. Virtue ethics

Alternatively, you might believe that it is not culture that makes it wrong, but rather the fact that you are a human being. Good humans act in a virtuous manner and do not torture sloths for the mere fun of it. It would not be virtuous to torture a sloth; it would be cruel and brutish, and a virtuous person knows not to act in cruel and brutish ways. A virtuous person should instead foster virtues such as generosity, kindness and honesty.

3. Kantian ethics

Or you might believe that it is wrong to torture sloths because you cannot rationally will every other person to torture sloths. Before you act you must ask yourself, “Would I want everyone to act in this way?” (the so-called universalisation test). Do you think it would be alright if everyone were to torture a sentient being for fun? Presumably not, in which case it is wrong. (Although Kant is a little awkward to bring up in an example about animals.)

4. Utilitarianism

It could, instead, be that it is wrong to torture sloths because of the pain it causes the sloth. A utilitarian would argue that an action is right insofar as it maximises pleasure and wrong insofar as it causes pain. The sloth’s pain would therefore make it wrong to torture sloths for fun.

Notice, however, that a utilitarian has a problem here: a person might enjoy torturing a sloth, and he might be in a room full of sadists who, as a group, gain more pleasure from watching the sloth’s plight than the sloth suffers pain. The utilitarian has to do some tricky argumentative gymnastics to hold that “torturing sloths for fun is wrong” if many people gain pleasure from it.
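To make the worry concrete, here is a toy calculation with utility numbers I have invented purely for illustration (no utilitarian is committed to these particular figures):

```python
# A toy utilitarian calculus. Positive numbers stand for pleasure,
# negative numbers for pain. All figures are made up.

def total_utility(utilities):
    """Naive utilitarian aggregation: simply sum everyone's utility."""
    return sum(utilities)

sloth_pain = -50        # the sloth's suffering
sadists = [6] * 10      # ten spectators, each gaining a little pleasure

scenario = [sloth_pain] + sadists
print(total_utility(scenario))  # 10 * 6 - 50 = 10, i.e. a net gain
```

On this crude reckoning the torture comes out with positive total utility, so a simple sum-the-pleasures utilitarian seems committed to saying it is permissible — which is exactly the bullet the utilitarian has to find a way not to bite.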

5. Divine Command Theory

Another option is that what is right and wrong depends on what God commands. If God commands that torturing sloths for fun is wrong, then it is wrong because God has commanded it. Here it is worth noting that those who don’t believe in God are not let off the moral hook. Just as someone who does not believe in gravity still feels its effects, the fact that people don’t believe in God wouldn’t change the fact that it is wrong. There are, however, numerous other problems with God shoring up morality that will be explored in another post. (Roll on the jolly old Euthyphro Dilemma.)

This list is hardly exhaustive, but it offers some of the main options open to you if you are inclined to say that “torturing sloths for fun is wrong”. Of course, there are some who argue that it makes no sense to say that “torturing sloths for fun is wrong”. I will save this motley bunch of anti-realists for another day.


Believing, doubting and doing

Recently, I was arguing with a friend who believes he ought to be a vegetarian. He is not one of those who simply thinks it would be healthier were he a vegetarian, rather he holds the view, more susceptible to self-righteousness, that it is morally better if he is a vegetarian. He argues that it is better for the environment and animal welfare that he eat no meat.

Yet he told me, with soulful lament, that he still occasionally indulges in a steak or a bacon sandwich, and he has been wrestling with the question of whether it is possible for him to genuinely hold the moral belief that he ought not to eat meat, yet still sometimes eat meat.

I told him not to be too hard on himself; he can reasonably hold that he should be a vegetarian and still not always act in accordance with his belief. Unfortunately, he was unconsoled, grasping my arm and asking dramatically what could determine what he believes, if not his actions. In eating meat, he said, he betrays that he doesn’t believe in his moral conviction enough.

I agreed, extricating myself from his grip, that it would be odd to espouse the virtues of a meat-free diet, while still chomping away at a bacon sandwich every breakfast. But I do think it is possible to eat meat now and then and believe it is wrong to eat meat. You might believe it to be true, but be overcome by other factors, such as the social pressure of not disappointing Auntie Mildred who has slaved over a Sunday roast. It doesn’t mean you believe it any less, you just think it’s more important not to upset Auntie Mildred on this occasion. He, for his part, seemed unconvinced and said sorrowfully that if he truly held his beliefs he ought to be sticking to his guns.

Though I didn’t feel as passionately as he did, I thought the question was an interesting one. What are the criteria for holding a moral belief? Should I be motivated by it all the time? Or is it enough to be slightly motivated by it, just enough to give me a twinge of guilt as I sit down in front of the goggle box instead of providing my services at the nearest soup kitchen?

I think that it is possible to hold a moral belief generally without being motivated by it all the way through to action. Imagine if we had to act on every moral impulse: there would be no time for doing the things that make life pleasurable, such as going to the opera or scratching yourself in front of the telly.

Or maybe I am just too laissez-faire about morality, and there is a moral standard which I am failing to live up to by writing diatribes about the demandingness of morality when I could be out helping people. As for my friend, I am still of the opinion that he can hold his belief and not act on it every time, while he, possibly a more moral cove than me, guiltily partakes in a steak every now and then.


Does Frank Underwood have reason to be moral?

Anyone who has seen House of Cards knows that Frank Underwood is a thoroughly nasty piece of work: he kills people who get in his way, throwing them in front of trains and locking them in gas-filled cars; he lies, cheats and kills his way to the top.

Doubtless, Frank Underwood is a pretty horrible sort of person. But does Frank Underwood have any reason to be more moral?

One approach is to argue that Frank Underwood has a reason to be moral because being moral is rational. For any action, you should consider whether everyone else in the world could act in this way; if the answer is no, it is irrational to do it. So when he throws Zoe Barnes in front of the train, Frank is being irrational: if he were thinking correctly, he would realise that he could not condone everyone throwing people in front of trains.

Such a view is Kantian: it argues that being moral and being rational are closely related, and given that people have a reason to be rational, Frank Underwood certainly has a reason to be moral.

The problem with this view is that Underwood doesn’t seem irrational. One of the most alarming things about Frank Underwood is the calm and rationality that go into his actions. He is cool, calculating and rational in his horrible deeds, and it seems philososlothically heroic to argue that he is being irrational.

Another way of arguing that Frank Underwood has a reason to be moral is to argue that his actions somehow impair him as a human. The fact that it is wrong to kill another person gives Frank Underwood a reason not to push Zoe Barnes in front of the train. He violates his human nature by doing this because it goes against what it is to be a human.

This view seems more promising. However, there is again a problem: Frank Underwood does very well for himself. He has reached the pinnacle of power, and his evil actions have helped him make it to the very top. It doesn’t seem that damaging his humanity is providing him with any incentive to be more moral.

This brings us back to the age-old problem: why do good things happen to bad people? If Frank Underwood were cuddlier and nicer, I doubt he would have made it so far. Are we forced to conclude that his nasty actions pay off?

In the last post I presented a view which argues that reasons for action are closely tied to what can motivate a person. A person’s desires, cares, projects and commitments are arguably the only things that give him reasons.

But what can we say of Frank Underwood? His desires, cares, projects and commitments certainly do not lead him to moral reasons. But does that mean he has good reason to push Zoe Barnes in front of that train? Or indeed that he has no reason not to kill her, given what he cares about?

This is one of the most interesting areas of moral philosophy and it is the branch the philososloth enjoys sitting on and contemplating the most. What do you think?

Does Frank Underwood have a reason to be moral?

If so, is this a reason that he doesn’t understand?

If he doesn’t understand being moral as a reason, is it a reason for him?

Do let me know your thoughts on the matter: drop me a comment or connect @ThePhilososloth

A solution to the thought experiments?

Thus far in our exciting jaunt into the realms of moral dilemmas we have considered three thought experiments:

We considered one in which you have the choice to pull a lever to kill one person or not pull the lever and let five people die. Another in which you have the choice between shooting one prisoner yourself or allowing all twenty to be shot. And a third in which you are asked to choose whether you should accept a job which goes against your principles or reject it, knowing that rejecting it will lead to better chemical weapons being created.

It became clear that there are two recurring positions in response to thought experiments such as these. The two responses are broadly Utilitarian or Kantian.

A Utilitarian argues that the right thing to do in response to all these thought experiments is to maximise happiness in the world. They would argue that it is right to pull the lever and kill one person to save five; right to shoot one person to save 19; and right to take a job which goes against your principles if doing so will lead to less death and destruction.

A Kantian, on the other hand, would contend that you should not pull the lever and instead let the five people die, that you should not shoot one person even if it saves 19 others, and that you cannot accept a job that you think is wrong, even if taking it would lead to better results.

These positions are well established and there are many advocates on either side. However, there is a criticism of both these approaches, one which follows the ideas of Bernard Williams (I told you he was a big philososloth).

He argues that the Utilitarian and the Kantian are wrong to provide a single answer to the question “What should you do?” Regardless of what sort of person you are, the Utilitarian thinks you should maximise happiness, and the Kantian thinks you should act only in the way you would wish everyone else to act. But the problem with answering the question “What should you do?” with a “one-size-fits-all” response is that it ignores the most important word in the question: the “you”.

By providing an answer which is supposed to be true for all people, Utilitarianism and Kantianism ignore you and your most fundamental concerns, interests, wishes, wants, projects and commitments. If we accept their position, your deepest and most important sense of self could be lost by simply surrendering to what some system of morals tells you is the right thing to do.

Williams argues that both Utilitarianism and Kantianism ask us to defy who we are. What if your feelings against taking the job are particularly strong? Is Utilitarianism right to insist that it is morally right for you to take the job even if it defies a deep sense of who you are?

What if you could not live with yourself if you knew you could have prevented the deaths of five people, by simply pulling a lever and killing one? Is Kantianism right to insist that people who pull the lever do the morally wrong thing, if they are unable to live with not pulling it?

Williams would most certainly say “No”. It is what you decide that matters and gives you reason to act, not some moral theory which might not speak to you.

What do you think? (This, as Williams would insist, should be what you think, not what anyone ought to think under the circumstances)

A further juicy moral conundrum

So far we have considered two moral thought experiments. Here is the last one in this series of juicy conundrums.

Imagine that you have recently completed a Ph.D. in chemistry and are finding it difficult to get a job. Unfortunately, you are a rather puny, academic creature, and the heaviest thing you have handled over the last few years is a Bunsen burner. Your partner is struggling to find work too and, with two young children, money is tight.

You are, however, acquainted with an older chemist who says that he has a decently paid project in a laboratory where you can use your chemical expertise. The job is a project on chemical and biological warfare, to which the government is giving generous and enthusiastic funding.

You have qualms about chemical and biological warfare to say the least and you inform the older chemist that you cannot accept this post. He responds that he is not so keen on the project himself, but that you saying no is not going to make the job go away. In fact he happens to know that the job will go to a contemporary of yours, who has no scruples about the project, and is likely to work at developing the weapons with particular zest and energy.

Indeed, the older chemist confides, he is hoping you will take the job to prevent this zealot from working day and night to create the very best chemical and biological weapons, which will kill many people.

Your partner has no particularly strong feelings either way about chemical weapons, and you clearly need the money.

What should you do?

As you might have guessed from the previous posts, there are two general responses to this thought experiment: those who say “take the job” and those who say “don’t take the job”. Again they fall broadly along the lines of a Utilitarian response and a Kantian response.

The Utilitarian would argue that you clearly ought to take the job, whatever your feelings of reluctance might be. The fact that there is a keen chemist in line for the job means that more people are likely to be killed than if you take the job. Also, your partner and children need you to take the job for the extra money it will bring in.

A Kantian, on the other hand, would argue that you ought not to take the job. Given your personal feelings against chemical and biological weapons, it would be wrong for you to accept the position. You are responsible for your actions, and the chemist who will do a better job is responsible for his. You cannot condone acting in such a way, because your work would directly cause death and destruction. What the eager chemist does is out of your hands.

Again, there are two basic and conflicting intuitions at play here. Should you maximise good outcomes by accepting the job, since, after all, fewer people are likely to die from your efforts? Or should you stay true to your principles and not take on a job that goes so thoroughly against your scruples?

Again, these intuitions seem hard to shift, and both have a kernel of truth in them. Which position do you adopt?

(Again this thought experiment is adapted from one by Bernard Williams. I have a funny feeling we will be revisiting his views shortly)

Another moral dilemma to chew on

Having started our foray into the domain of thought experiments, let’s try another one.

Imagine you are exploring a forest and in the middle of it you come across a small town. Tied up against a wall in the town square is a row of twenty captives. They are being held at gunpoint by several armed men.

A large man in a sweat-stained uniform is clearly in charge, and after a good deal of questioning you find out that the prisoners are villagers who have been protesting against the government’s plan to build real estate on a local sloth habitat. The prisoners are about to be killed to warn people of the dangers of protesting against the government.

Since you are an honoured visitor and a famous explorer, the captain offers you a special privilege: you may kill one of the captives yourself. If you accept, the captain will mark the occasion by letting the other prisoners go.

However, if you refuse to kill the prisoner then the captain will do what he was planning to do anyway and kill all twenty. It is clear that any attempt on your part to disarm the captain or the other guards instead will be unsuccessful.

The prisoners up against the wall are, of course, begging you to accept the offer.

What should you do?

In this thought experiment, like the previous one, there are two general responses. Some say you should kill one and allow 19 to live. Others say that you should not kill the prisoner. You are not responsible for the actions of the captain, but you would be morally responsible for killing another person.

Thought experiments like this one help to illustrate the differences between two important approaches to ethics: Utilitarianism and Kantianism. (I will explain these more fully in a later post.)

Very broadly a Utilitarian argues that the right action is the action which causes the greatest utility, or greatest amount of happiness. In this case it seems that killing one person to save 19 people is not only permissible, but it is the right thing to do.

A Kantian, on the other hand, would say that it is impermissible to kill one person to save 19. It is the captain and the captain alone who is morally responsible if he kills the twenty prisoners. You are only responsible for your actions and how you lead your life. It seems that controlling your life in a good way should include not killing anyone. That would impair you as a person.

Typically a Utilitarian would respond that it is rather self-indulgent of you to take the moral high ground by refusing to act to save 19 people. Sure, it will be awful for you to kill one person, but it’s the right thing to do in order to save the lives of 19 others.

A Kantian would retort that you have no reason to affront your human dignity by killing a person. What the captain does is not your responsibility.

You now have the sort of cantankerous and uncompromising debate that could ruin any decent pub trip. Both sides are equally entrenched in their core intuition: that 19 lives saved is better than one, versus the idea that it is wrong for you to kill.

What do you think?

Feel free to drop me a line and let me know what you think is the right thing to do. The philososloth thrives on conundrums (as well as shoots and leaves).

(This thought experiment was inspired by one that Bernard Williams thought up. There will doubtlessly be more posts about Williams later, as he is a key philososloth)

A couple of moral dilemmas to get the ball rolling

To kick off this exciting foray into the philosophical domains, I thought we could start with a couple of jolly thought experiments. (Correction: As will soon become apparent there are few thought experiments in moral philosophy that can be described as “jolly”. This is not just because philososlothers are a miserable bunch, but because if everything were cushty, there wouldn’t be much reason for morality in the first place). Anyway, let’s get the ball rolling…

Imagine that you’re standing by a railway junction and there is an out-of-control train beetling down the track at breakneck speed. The track branches into two, one on which five people are tied and one on which one person is tied. You do not have time to rush over and untie any of them, but you’re standing by a lever which can divert the train.

If you do nothing, the train will smash into the hapless group of five, whereas if you pull the lever, the train will change tracks and hit the lone person.

What should you do? (I did warn you it wouldn’t be jolly)

This problem, thought up by Philippa Foot, is meant to test our ethical intuitions, and as you might expect there are two schools of thought on this question: those who say “Pull the lever!” and those who say “Don’t!”.

The first group thinks that it is obvious that you ought to pull the lever and kill the one person in order to save five people. After all, one person dying is better than five people dying.

However, the second group thinks it is equally obvious that it’s wrong to pull the lever. By taking action and pulling the lever, you are actively killing another person, while if you do nothing, you cannot be held responsible for your inaction, they say.

Philososloths have conducted surveys (never a very common or comfortable project for them) and it seems that most people think the right thing to do is to pull the lever, even though it means actively killing a person.

It might seem, then, that it’s acceptable to pull the lever, killing one but saving five. But consider an alternative thought experiment in which five people need an organ transplant in order to survive. One needs a heart, another needs a kidney, a third needs lungs, and so on. Paul, a healthy person, has all these organs and is a perfect donor match for the ailing patients.

Should we kill Paul and harvest his organs in order to save five people? If killing one to save five was permissible in the train case, why is it not in this case?

Is this example comparable to the train example? If not, why not?

You’ll find that I ask a lot of questions in these philososlothical blogs. This is intended to get my brain cell going as much as anything else. Please do get in touch if you have a question relating to any of the issues I will be raising here.