Philosophy of the Month – Utilitarianism

Pleasure vs. pain… it’s a balancing act!

It still surprises me how few people are aware of philosophical ethical theories. We all know the basic premises of the major religions, but a surprisingly small number of people seem to look for ethical guidance outside the realm of religion. Every other month I’ll be exploring a different ethical theory in the hope of providing a wide scope of secular moral theories, and this month I’ll be taking a look at one of the most debated of them all: Utilitarianism.

Whilst there are variations within the theory itself, from the classical utilitarianism Bentham proposed in the late eighteenth century to the negative and preference utilitarianisms that found favour in the twentieth, there are some features common to every formulation. Utilitarianism at its most basic is a consequentialist theory: it is not grounded in a set of objective rules that must be dogmatically followed, but instead judges individual acts by their projected consequences, measured against an overriding principle. For most utilitarian theories, that principle is to promote the greatest good for the greatest number.

Jeremy Bentham, the founder of modern-day Utilitarianism, was a remarkable character. Writing in the late eighteenth and early nineteenth centuries, he was a political radical ahead of his time who called for the abolition of both slavery and the death penalty, equality for women and rights for animals, to name just a few. Bentham wanted social and legal practices to be based on a theory that would guide followers to the most ethical course of action in any given circumstance, and out of this he devised a single principle – ‘The Greatest Happiness Principle’. Bentham saw man as governed by two sovereign masters, pain and pleasure, and so proposed that the moral course of action is the one that produces the greatest pleasure and the least pain. He devised a system, known as the Hedonic Calculus, for working out which course of action is most moral in any given situation. Under this view, the projected pleasure or pain of an act should be weighed up in terms of factors such as duration, certainty, remoteness and extent.

Let’s see if we can put the theory into practice. Say I was trying to decide whether or not to go and visit my granny in hospital. I don’t particularly want to go, but it will probably make her happy, so I need to work out whether my discomfort (pain) outweighs her happiness (pleasure), and if it doesn’t, I’ll have no choice but to visit. Firstly, the duration of her happiness will probably be a lot longer than my pain at having to visit. One-nil to granny. Secondly, I am fairly certain that this act will produce happiness for her (she is always asking me to visit) and I am less certain that the act will cause me pain – sometimes she’s actually quite funny. Two-nil to granny. Now, the remoteness of either my pain or her pleasure probably isn’t very significant. Maybe I’ll be in a bad mood if she asks me why I still haven’t got a boyfriend and I’ll then be unpleasant company at the pub later, but maybe she’ll be in an excellent mood if I go and she’ll brighten up the hospital staff’s day. Call that one a draw – still two-nil. I can’t be sure of the extent of either my pain or her pleasure, but I think I’ll have to concede that the mild pain I might have to endure will probably be less than the pleasure she’ll take in being graced with my excellent company. Three-nil to granny. So after weighing up the pleasure and pain of my proposed act, I think I can safely conclude that I’d best go and visit granny.
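For anyone who fancies seeing that tally laid out more mechanically, here is a toy sketch in Python of how the granny calculation might look. The four factors are the ones mentioned above, but the 0–10 scale and every score in it are my own invention purely for illustration – Bentham never prescribed anything this tidy.

```python
# A toy hedonic calculus: score each factor for both parties on an
# arbitrary 0-10 scale, then compare total pleasure against total pain.
# All of the numbers are invented purely to illustrate the granny example.

FACTORS = ["duration", "certainty", "remoteness", "extent"]

my_pain = {"duration": 2, "certainty": 3, "remoteness": 2, "extent": 2}
her_pleasure = {"duration": 7, "certainty": 8, "remoteness": 2, "extent": 6}

def total(scores):
    """Add up the scores across all four factors."""
    return sum(scores[f] for f in FACTORS)

pleasure, pain = total(her_pleasure), total(my_pain)
verdict = "go and visit granny" if pleasure > pain else "stay at home"
print(f"pleasure {pleasure} vs pain {pain}: {verdict}")
# -> pleasure 23 vs pain 9: go and visit granny
```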

Obviously, there are some much trickier cases to deal with, and a common criticism of Utilitarianism is that it allows the sacrifice of one for the good of many. Let’s imagine a situation in which I am being held captive with ten other prisoners. An evil guard tells me that if I shoot one of the other prisoners at random he’ll let the rest of us go, and that if I don’t, that one prisoner can leave and the rest of us will die. What should I do? Bentham wouldn’t hesitate for long – the greatest good for the greatest number is surely to shoot the unfortunate prisoner and let the rest of us go. Now maybe most of us would agree that that is the best course of action in this case, but do we really want to follow a theory that is so quick to make up its mind in life-and-death situations?

Possibly the most pressing issue for Utilitarianism is that it hinges on what we assume the future will hold. How can we possibly know which course of action will bring about the greatest good in every situation? Perhaps traditional Utilitarianism can’t really tell us what’s best to do day to day. I think I’ve decided that visiting granny or shooting the prisoner is for the best after balancing it up, but what do I know? I’ve made a lot of assumptions. Maybe the random prisoner I shoot has the cure for cancer and would have in fact saved far more lives than the nine I save in killing him. Maybe I really annoy my granny and she would have a much better Sunday afternoon watching Downton Abbey instead.

Another instinctive worry is that a focus on pleasure seems an odd basis for a moral theory. We’re used to hearing moral theories take the form of ‘don’t murder’, ‘don’t lie’, ‘don’t cheat’, but Bentham gives us the option: you can murder if it’s going to do more good than not murdering, you can lie if the person you’re lying to will gain a greater benefit than if they’d heard the truth (although you’d better be careful they don’t find out!), and you can cheat in certain circumstances too. There are no absolutes other than ‘do whatever will produce the greatest pleasure and the least pain’. However, when we really think about it, what produces the best consequences in general is likely to be following rules such as ‘don’t murder’, ‘don’t lie’ and ‘don’t cheat’. Imagine a society in which the innocent can be murdered at any time for the sake of the greater good – say we desperately need organs to save five people and I can get them all by killing innocent, healthy John. That society would probably be far worse off than one that enforces rules to protect the innocent, and so the best consequences are actually achieved by thinking in terms of the bigger picture rather than individual acts – which is essentially the move made by rule utilitarians.

I think overall, despite some initial misgivings we might have, Utilitarianism can be formulated in a way that offers a pretty compelling view of the moral life. It promotes doing the best for the greatest number and causing the least pain possible. Despite our not being able to accurately forecast the future, if we at least aim to take these principles on board we might end up pretty satisfied with the results.
