In Western ethical philosophy, there are three main schools of thought on how to determine the right action to take in any given situation.  Note that none of these general approaches necessarily determines the actual content of any given moral code.  They just describe the proper framework to use to arrive at the right answer.

During the late 18th and early 19th centuries, philosophers all over Europe got very excited about approaching the traditional problems of philosophy from what they considered to be their new, Enlightened perspective.  The idea was that reason, wisely and properly applied, could help definitively answer these age-old questions.

These debates resolved into two broad camps.  The first came to be called the deontologists.  They claimed that the proper moral framework was, essentially, legal: there exists a list of valid rules governing all action for all men, whether laid down by God directly or derivable from a small set of correct axioms.  And so, men should be judged by how well they follow these rules.

The second claimed, in contrast to the deontologists, that the way an act should be properly judged is by its real-world consequences.  Hence, they came to be known as consequentialists.  In their view, intentions and rules are morally secondary to the actual effects of any given action.  If you actually want a better world instead of just feeling good about your repeatedly failed attempts to make one, they argue, how can you believe otherwise?  Deontological frameworks necessarily blind people to the importance of the potentially disastrous consequences of their actions, instead rewarding them for almost-certainly misguided zeal.

The typical deontologist response to this is that accurately judging expected consequences ahead of time is really hard.  In practice, basically nobody can do it right.  So in the real world, they argue, consequentialism tends to cash out into long, complicated chains of motivated reasoning, eventually ending up wherever the philosopher really wanted to go in the first place.  It’s much more tractable to follow a set of objective rules.

Generations of ethical philosophers have gone back and forth along these lines.  And despite their inability to resolve the dispute to this day, they generally agree on the superiority of the modern approaches to what came before.  Which raises an interesting question.  The Western philosophical tradition stretches back thousands of years before the Enlightenment.  So if both the modern deontological and consequentialist approaches have their roots in the Enlightenment, what was the general ethical approach that they displaced?

The answer turns out to be an ethical framework known to modern philosophy as virtue ethics.  Virtue ethicists, reaching at least as far back as Aristotle, have argued that the most important factor in determining the moral rightness of an action is its effect on the character of the actor.  The interior effect is more important than any exterior consequences or demands.

This implies a sharp contrast with the two Enlightenment approaches, because virtue ethics does not claim universality.  By universality, I mean the assumption that, in theory, any person placed in a given situation would be expected to follow the same deontological imperative or perform the same consequentialist calculation.  The virtue ethicist does not consider this to be necessarily, or even likely, true.

This is why thought experiments that help to clarify moral ambiguities are so beloved of modern philosophers.  A famous example is the trolley problem, which asks whether it is better to actively kill one man to save five, or to let the five go to their deaths.  This problem generally divides deontologists (who tend to hew to a rule like “don’t kill people”) and consequentialists (who often say one death is less than five, so do the thing that gets fewer people dead).  But both camps believe that if you could figure out a consistent moral system that behaves correctly on these strange, fictional scenarios, then you could puzzle out the right answer to other, more practical problems and be assured that you are correct, even when the results do not match intuition.

A virtue ethicist would say that all of this work is a waste of time, because the very idea of a situation with a right answer independent of the actor has already cut away the most important consideration.  Since talking about these sorts of problems is the core activity of academic ethicists, this goes a long way toward explaining why the virtue ethics approach is unfashionable nowadays.  Which makes sense.  It’s hard to get tenure – or even respect – when you loudly proclaim that you’re not going to engage with any of the main open problems in your field.

So, with all that background, we can now delve a little deeper.  What do all these ethical frameworks have in common?  They all purport to tell real people how to act in order to make the real world a better place.  Ethics is, therefore, necessarily bound up in physical limitations.  To take a silly example, an ethical rule that requires a man to fly around the room simply by waving his arms about is obviously inoperative.  It doesn’t matter why that would be a good idea; people just can’t fly like that.

Well, one of the interesting properties of the physical world we live in is that the strength of every fundamental interaction falls off with the distance between the bodies involved.  For the two forces that dominate at human scales, gravitation and electromagnetism, the force is inversely proportional to the square of that distance; the nuclear forces fall off even faster, which is why they only matter at subatomic scales.
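For concreteness, here is that falloff sketched in a few lines of Python.  The masses are arbitrary stand-ins; the only point is the shape of the relationship.

```python
G = 6.674e-11  # Newton's gravitational constant, in N*m^2/kg^2

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Newton's law of gravitation: F = G * m1 * m2 / r^2."""
    return G * m1 * m2 / r ** 2

# Doubling the separation cuts the pull to a quarter.
near = gravitational_force(70.0, 70.0, 1.0)
far = gravitational_force(70.0, 70.0, 2.0)
print(far / near)  # 0.25
```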

Among many, many other things, this explains why effects are commonly expected to be local.  When chaos theory comes up in popular culture, it is often said that a butterfly flapping its wings in China can eventually cause a storm in the United States.  This butterfly effect is cited as an example of how the weather is a chaotic system.  But it is notably weird precisely because virtually everything else works in a pattern analogous to the underlying inverse-square laws: distant effects in space and time are attenuated compared to local ones.

If that’s the case, then it’s not unreasonable to assume that ethics should work the same way.  The amount of influence you have over any given region of the future is probably inversely proportional to the distance between you and it.  If it’s far away in space or time, you probably shouldn’t be worrying about it too much, because it’s not like worrying about it is going to help you change anything.

I call this hypothesis the Inverse Square Law of Ethical Concern.  Formalized, it would look something like this: C ~ k * V / d^2.  Here, ‘C’ represents the correct concern (or priority weighting) that a person should put on figuring out the answer to any ethical dilemma.  The ‘V’ is the naïve valuation you’d use in calculations of this sort.  And the ‘k’ term is just some unknown constant, depending on the units and the relative weighting thereof.  Commonly, it’s omitted when talking about relationships like this.
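As a toy illustration only – the function name and the choice to normalize k to 1 are mine, not part of anything formal – the weighting looks like this in code:

```python
def concern(naive_value: float, distance: float, k: float = 1.0) -> float:
    """Inverse-square concern weighting: C = k * V / d^2."""
    return k * naive_value / distance ** 2

# The same naive stake matters far less when it sits far away.
print(concern(naive_value=100.0, distance=1.0))   # 100.0
print(concern(naive_value=100.0, distance=10.0))  # 1.0
```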

And ‘d’, in this context, isn’t just physical distance.  It’s what I call ethical distance: a distance in a multi-dimensional space where four of the dimensions are the traditional space/time terms and the others are based upon what you might call social or ideological distance.  In a Euclidean-style measure like this, adding more dimensions only adds non-negative terms under the square root, so ethical distance is necessarily lower-bounded by ordinary space-time distance.
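To make that lower-bound claim concrete, here is one hypothetical way to compute such a distance as a plain Euclidean metric; the particular dimensions and numbers are illustrative, not a serious proposal.

```python
import math

def euclidean(separations: list[float]) -> float:
    """Plain Euclidean distance built from per-dimension separations."""
    return math.sqrt(sum(s * s for s in separations))

# Hypothetical separations between you and some distant event.
spacetime = [3.0, 4.0, 0.0, 0.0]  # x, y, z, t in some shared unit
social = [5.0, 2.0]               # social and ideological separation

spacetime_distance = euclidean(spacetime)         # 5.0
ethical_distance = euclidean(spacetime + social)  # ~7.35

# Extra dimensions only add non-negative terms under the square root,
# so the ethical distance can never drop below the space-time distance.
assert ethical_distance >= spacetime_distance
```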

This concept has a bunch of neat properties.  In particular, I find that it neatly resolves several dilemmas that cause trouble for other ethical systems.

For example, there is a thought experiment that’s commonly referred to as Pascal’s Mugging (riffing off of Blaise Pascal’s Wager arguing for belief in God).  In it, a dude comes up to you on the street and claims to be God.  He tells you that he’ll create a brand-new pocket dimension identical to this one and destroy it in a horrible fashion, unless you give him $5.  Most people would just laugh out loud and go on their way.

But to your typical consequentialist utilitarian, this is a real problem.  From this perspective, you’re being presented with a deal that looks like a choice between unimaginably vast global negative utility multiplied by a very tiny probability and the certainty of -$5.  There’s no non-zero probability you can put on this claim that doesn’t lead you to conclude that the best course of action is to just cough up the $5.  And good rationalists don’t believe in zero probability.  So, this is obviously, hilariously bad: your “maximize global EV” rule has a trivial security hole.  This is way worse than being open to being Dutch booked.

Well, if our mugging victim were using the Inverse Square Law of Ethical Concern, he could discount the threatened pocket dimension by its necessarily enormous ethical distance from him, on top of the discount for the credibility of the threat itself.  Essentially, he’d have grounds to shrug, say “Not my problem,” and walk away with the $5.
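To see the difference in toy numbers – every value below is invented purely for illustration – compare the naive expected-value calculation with the distance-discounted one:

```python
cost_of_paying = 5.0     # the $5 demanded
threatened_harm = 1e15   # naive valuation of destroying the pocket dimension
credibility = 1e-12      # probability you assign to the "god" being real
ethical_distance = 1e9   # a brand-new pocket dimension is about as
                         # ethically distant as anything can be

# Naive expected value: the tiny probability times the vast harm still
# dwarfs the $5, so the utilitarian pays up.
naive_expected_loss = credibility * threatened_harm
print(naive_expected_loss > cost_of_paying)  # True

# Inverse-square concern: the same threat, discounted by ethical distance,
# no longer justifies handing over anything.
discounted_concern = credibility * threatened_harm / ethical_distance ** 2
print(discounted_concern > cost_of_paying)   # False
```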

This also implies, in accordance with most people’s intuitions, that they should focus most of their altruistic or charitable efforts close to home.  Many deontologists and consequentialists believe that a life is a life, so it is obviously more ethical to give to a charity saving lives in Africa than to the local PTA.  But with the correct distance discount, a life ten thousand miles away probably is of less concern than the marginal improvement to your child’s classroom.
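With equally made-up numbers, the same toy weighting captures that intuition: the naive value of the distant life is far larger, but the distance discount is larger still.

```python
def concern(naive_value: float, distance: float) -> float:
    """Same toy inverse-square weighting as before, with k normalized to 1."""
    return naive_value / distance ** 2

# Invented values: a saved life dwarfs a nicer classroom in naive terms,
# but it sits at a vastly greater ethical distance.
distant_life = concern(naive_value=1_000_000.0, distance=10_000.0)  # 0.01
local_classroom = concern(naive_value=1_000.0, distance=1.0)        # 1000.0
print(local_classroom > distant_life)  # True
```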

But in addition to providing the proper weighting for distant events, I believe this also resolves the age-old ethical framework trilemma.  After all, your ethical distance to yourself is always zero: you occupy the same point in space and time as yourself, and there is no social or ideological separation either.  So the dominant term in any ethical consideration is often how a given action will modify how the actor behaves in the future.  And this is just a restatement of the core argument in favor of virtue ethics.  But, intriguingly, we’ve rederived it as a consequence of an analogy to the physical world in which actors are necessarily embedded.
