Sunday 25 September 2011

On the sub-optimality of bus seating

I realise that it’s pretty far down the long list of Things That Are Wrong With The World, but I am increasingly vexed by the potential inefficiency of seat allocation on buses. It is almost certainly the economist in me bridling at the casual violation of the Pareto Criterion – itching to realise the economist’s dream of making an arrangement better while leaving nobody worse off.

Essentially, for our purposes here, there are two types of people who take the bus. There are those with companions, who want to sit with those companions. And there are those who are alone, who would rather have the space of two seats to themselves than be pressed up against a stranger.[1] I set aside the question of why people seem to be so socially retarded and jealous of their privacy that they find the idea of approaching a stranger terrifying rather than the prelude to a new friendship. If people were more open, the problem would soon disappear. Suffice to say, this cultural foible has been discussed at length elsewhere.

If the bus is not too crowded, there is ample opportunity for both types to get their wish. People can choose to sit next to the people they want to, or to opt for isolation. The trouble begins when there are no longer any free pairs of seats – which, roughly speaking, happens once the number of free seats dips below twice the number of people who want to sit alone.

Suppose, then, that there are no free pairs of seats available, and a couple – Alice and Bob – get onto the bus. They cannot sit together, and so have to sit next to other people. Alice sits next to Craig, and Bob next to Davina. Thus not only must Alice and Bob do without the pleasure of each other’s company, but Craig and Davina have to sit next to strangers. Yet if Craig and Davina must sit next to people they don’t know anyway, wouldn’t it make sense for them to sit next to one another, and so allow Alice and Bob to sit together? Alice and Bob would be better off, Craig and Davina no worse off, and Pareto can settle down in his grave. But this doesn’t happen, and all the real-life Alices and Bobs must suffer in silence.
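(For the economically minded, here is a minimal sketch in Python – my own illustration rather than anything in the original argument, with utility numbers chosen arbitrarily – showing that the swap leaves Alice and Bob better off and Craig and Davina exactly where they were.)

# A toy model of the seat swap. The utility numbers (2, 1, 0) are assumptions
# made purely for illustration; only their ordering matters.

def utility(person, neighbour, companions):
    """Utility of sitting next to `neighbour` (None means an empty seat)."""
    if neighbour is None:
        return 1      # alone, with the spare seat to yourself
    if companions.get(person) == neighbour:
        return 2      # next to your companion
    return 0          # next to a stranger

companions = {"Alice": "Bob", "Bob": "Alice"}   # Craig and Davina travel alone

# Before the swap: Alice sits next to Craig, Bob next to Davina.
before = {"Alice": "Craig", "Craig": "Alice", "Bob": "Davina", "Davina": "Bob"}
# After the swap: Alice sits next to Bob, Craig next to Davina.
after = {"Alice": "Bob", "Bob": "Alice", "Craig": "Davina", "Davina": "Craig"}

for name in ("Alice", "Bob", "Craig", "Davina"):
    u_before = utility(name, before[name], companions)
    u_after = utility(name, after[name], companions)
    assert u_after >= u_before              # nobody is made worse off
    print(f"{name}: {u_before} -> {u_after}")
# Alice and Bob go from 0 to 2; Craig and Davina stay at 0 – a Pareto improvement.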

What can be done about it all? With my neo-Stalinist tendencies, I’m inclined towards a bureaucratic statist solution. Everybody must register for their bus trip 24 hours in advance, and inform the Central Bureau of Seat Allocation who they want to sit next to. But even I have to concede that this is a bit of an overreaction. It probably isn’t cost-effective to open a whole government department to deal with the problem. One of the great virtues of buses is their relative flexibility – how many people know 24 hours in advance exactly which bus they want to take and when? Liberty, as ever, upsets patterns.

If the state fails, the obvious response is to turn to the market for a solution. Maybe Alice and Bob can pay Craig and Davina for their seats. This idea has the merit of identifying the pair who are least bothered about relinquishing their space – if Craig and Davina put too high a premium on it, they can be undercut by others who will sell for less because they care less about being alone.

However, the seat market appears to be prone to a major market failure: its transaction costs are too high. People are unlikely to go to the effort of setting up the mechanism of the market for so little reward. In any case, by the time the deals are struck, the chances are that it will be time for one of the parties to disembark, making bargaining somewhat impractical.

The best and most obvious option is what might be called the ‘ethical libertarian’ approach.[2] Here, we just rely on people to use their initiative to escape these sub-optimal arrangements. We expect Craig or Davina to offer their seats, and hope Alice and Bob are not too bashful to ask for them.

The trouble with ethical libertarianism, both here and in general, is that it expects a great deal of people in terms of good sense and altruism. Perhaps I’m too much of an economist to share that faith.





[1] I assume here that seating is arranged in pairs, as on Aberdeen buses, but the idea is easily applied to other arrangements.

[2] I’ve just made up the name ‘ethical libertarianism’. I’m not sure if it’s any good. In any case, I take it to mean the position that we have extensive moral duties to other human beings, but that the government ought to execute relatively few of them.

Tuesday 20 September 2011

Should we abandon moralism?

One of the most interesting aspects of philosophy is its ability to provoke a variety of responses, and to serve a wide range of functions. It can be diverting and entertaining – hence the success of hit movies based more or less explicitly on thought experiments. It can be a practical tool to help us decide what to do – this is the aim of those who do practical ethics (clue’s in the name, guys). It can be pressed into little motivational or comforting soundbites (cf. anything by Alain de Botton).

But often (and I think this is less commonly recognised or discussed), philosophy can be terrifying. It can challenge the very foundations of our existence, undermining the values and commitments we have based our lives around. Discussing Descartes’ scepticism, Mark Steel advises his viewers not to allow the thought to linger for more than 20 seconds because otherwise “it will start to drive you mental”. The same is probably true of the claims of pessimists, nihilists and determinists.

What’s always most troubled me, though – the philosophical Pandora’s box I have been desperate to keep shut – are the challenges of meta-ethics. What is morality? What do the concepts of right and wrong even mean? Why should we care about it? In most of my moral thinking, I have been content to assume that there are answers to these questions, and to get on with the substantive questions of how we should act – firmer ground where I felt more comfortable. I don’t think this is an unusual policy – in my first graduate class in Ethics, we were advised not to worry too much about meta-ethics, and assured that it is entirely separable from practical ethics.

This attitude infuriates Joel Marks, who urged moral philosophers to out themselves as amoralists in a pair of New York Times articles last month. Appealing to luminaries such as Hume, Nietzsche, Ayer, Sartre, Mackie and Rorty, Marks argues that morality does not exist. Moreover:

“This, odd as it may sound to say so, is relatively uncontroversial in modern ethical philosophy; for what I mean by morality here is its metaphysical conception as a truth or command that comes to us from “on high.” Very few well-known philosophical moralists have believed in such a thing since a century and more.

“But precisely my gripe is that you wouldn’t know it from the way they speak! And even if they can communicate clearly with one another, the lay person is left to think otherwise.”

Most philosophers, according to Marks, have been doing what I have done: treating meta-ethics as though it has no bearing on practical ethics. However, Marks thinks that this common meta-ethical position has radical implications for how we think about moral problems – he calls for the abolition of moral language altogether. His point is so obvious, it seems bizarre that it should even need stating: if most philosophers do not believe that such a thing as morality exists, why should they still keep talking as though it does?

I think the meta-ethical position that Marks sets out is more or less my own. Moral ‘beliefs’ are nothing more than statements of preference. So when I say that it is morally wrong that anybody should live in poverty, all I am saying is that I would really like it if poverty were eradicated. This is fundamentally no different in kind from my desire that the sun stay out, or my desire to eat cake.

Though I agree that moral claims are nothing more than preferences, I am wary of making this explicit for a couple of reasons. The first is pragmatic – to abandon moralistic argument is to lose one of the most powerful means of persuasion. Suppose it is my desire that poverty be eradicated. I am far more likely to convince people to do their bit to reduce poverty if I can couch my arguments in objective moral terms – if I can portray it as being your duty, rather than just in line with my whims. Of course, if everybody were what Marks calls a ‘desirist’, then what we once called ‘moral’ argument could be carried out in amoral terms. But for as long as moral realists exist, it is worth giving our arguments moral dressing so that they appeal to as many people as possible.

Undeniably, there is something underhanded about reasoning in this way. Should we really be using moral argument in such a manipulative way? But this thought itself shows the dangers and difficulties of trying to purge our reasoning of moral content. The idea that we shouldn’t manipulate others so that they act in accordance with our desires is itself a moral notion – it is just another preference. If we try to cleanse our language of explicitly moral terms, there is a danger that moral ideas may still sneak in through the back door.

The second reason is that we are so used to thinking in moral terms that we are liable to forget our meta-ethical commitments – we need reminding that our values are just preferences. Keeping our moral premises explicit ensures we compare like for like, and don’t mistake a preference for something more forceful, like an objective moral command. For example, a natural response to the idea that there are no objective moral values is to conclude that all moral commitments are equally valid, and so that we ought to unquestioningly tolerate the views of others. Yet this, of course, ignores the fact that tolerance is itself a moral value – and so we should only care about it insofar as we have a preference for tolerance.

Peculiarly, given that he believes it to be the only novel aspect of his argument, Marks devotes only a solitary paragraph towards the end of his second article to making a positive case against moralistic language. Essentially his objection is that “The most horrific acts of humanity have been done not in spite of morality but because of it”. Even putting aside the moralistic assumptions that underpin this statement, this isn’t much of an argument. The question of whether morality has been a force for good is as tiresome and irresolvable as whether religion has been beneficial or not. Some people have been motivated by morality to do good things; others have been led to commit terrible acts. And just as atheists have their fair share of atrocities on their hands, it is hardly as if no evil has come from people pursuing their own desires.

When Marks says that most philosophers do not believe in morality, what he really means is that they do not believe in an objective, transcendental set of moral truths. But this is not enough reason to abandon the language of morality altogether. Ultimately, Marks fails to make a convincing case against conceiving of morality as a special type of subjective desire.

Saturday 3 September 2011

In defence of an episodic history education

Here is a list of topics I studied in history over the course of my primary and secondary school education, more or less in the order I studied them: Celts & Romans, Tudors & Stuarts, the Stone Age, the Scottish Wars of Independence, the Industrial Revolution, the female suffrage campaign, the formation of the Welfare State, the prelude to World War I, and the Weimar Republic/Nazism/World War II (on about three or four separate occasions).


I don’t think my experience was that unusual among British history students of my generation: a more or less random assortment of topics, linked by little more than an underlying eurocentrism and a peculiar obsession with the Nazis. It is this pick ‘n’ mix approach, I think, which is increasingly coming under attack. Simon Jenkins decries ‘optionalism’, insisting that history must have a ‘narrative’, “starting at the beginning and running to the end”. In doing so, he allies himself with Education Secretary Michael Gove and his history tsar, Niall Ferguson, prominent opponents of ‘tapas’ or ‘smorgasbord’ history.


But I think the loud objections to Ferguson’s view of history (most entertainingly from children’s history god Terry Deary) illustrate the problems of this approach. The story that Ferguson wants to tell is the story of how and why Europe came to dominate the world. There are two problems with this. Firstly, there is the question of the truth or otherwise of this narrative. For example, Ferguson is a revisionist historian who tends to emphasise the positive effects of European colonialism. It would be deeply irresponsible of him to present these views as accepted historical fact, and to fail to acknowledge the many historians who would profoundly object to this picture.


Even if Ferguson’s answers are valid, he may still be asking the wrong questions. Ferguson’s proposed emphasis has raised eyebrows because of its unabashed eurocentrism. But it also looks like it will be strongly focused on ‘history from above’ – the big-picture political, economic and military story, as opposed to the details of everyday life.


The trouble with teaching history as a narrative, therefore, is that it sacrifices two different types of neutrality. In the first place, it involves favouring one narrative over another. This is particularly problematic, given the strongly ideological flavour of many of these competing accounts. Favouring one narrative over another means choosing between using history to develop a national myth that fosters patriotism and using it to seek cross-cultural understanding and mould global citizens. Methodological neutrality is sacrificed, too. To insist on teaching history as a grand narrative is to presume that history should be about big-picture questions, and therefore implies that a close focus on details is a less valid approach.


Why should we care about neutrality? Why should the people designing a history curriculum try to avoid taking a stand on controversial questions? There is an ethical argument and a practical one. The moral objection is that history should not be used as an ideological tool – children should be left to make up their own minds, and it is illegitimate to try to ‘mould’ them in any way.


The practical point is more often missed. Children are rarely passive receptacles of education. Just because you design a course for them does not mean that they will absorb it in the way you want them to. Consequently, it is not just what children should know, but also what will engage and interest them, that must be considered when designing a curriculum.


The episodic approach to history escapes both these problems. Rather than having to choose one big, sweeping narrative, it allows students to focus on different topics in depth. An essential part of properly addressing a topic in history is to understand how it is viewed from different ideological and historical perspectives. If one topic lends itself to a given approach, the next can be used to illustrate a completely different way of doing history. That way, there is a hope of finding something to suit every taste. The thing that educationalists often seem to forget is that even if you spoon-feed children a certain message, you cannot be sure that they will swallow it. An equally crucial task of a history education is to try to get them to develop a taste for the subject.


The objection to an episodic history education is that it leaves the student with big gaps in their knowledge, and little sense of how events fit together. But history is so vast that nobody can hope, in the limited time a child has in the history classroom, to cover even the ‘basics’. There will always be omissions, and those omissions will always be controversial.


A more plausible aim is to give students the curiosity to ask their own questions, the tools to follow those interests, and the critical faculties to form their own narratives. I think the depth and pluralism of the episodic approach best serve this goal.