Pavlos (pavlos) wrote,

A small theory of morals

Our moral instinct, the urge that we feel to do things a certain ethical way or the indignation that we feel when unethical things happen, is a feeling. It's a phenomenon of the mind, like smell or attraction. It's not some thing that exists outside of our minds, and so an abstract axiomatic discussion of morality, such as Plato might have offered, is an expression of morality rather than any useful analysis of the phenomenon.

The object of the moral feeling, in other words the thing that we feel some moral obligation towards, is a mixture of two factors. One factor is empathy, and it yields a moral obligation to whatever has the capacity to suffer. This certainly includes conscious people, and we may allow it to include other creatures. The other factor is reciprocity, and it yields a moral obligation to whatever we think harbours a similar moral instinct. We may extend this to more or fewer people, and possibly to other anthropomorphic entities such as organisations.

Many moral problems arise from differences over what is a valid object of empathy. For example, some believe that mammals suffer in a similar way to us and thus that cruelty to mammals is morally wrong. If you are planning a controversial action, like a war, you are going to harm some people, presumably to aid some other people. Which people have the strongest claim to empathy? The more numerous? Those who are more acutely harmed? Those with whom you are emotionally closer (an obviously immoral but common stance)? The same person might change their mind radically over time. For example, someone may want to "give up" now, committing suicide, falling into addiction, or whatever. The same person may wish to live an active and integrated life in the future. Is the present or the future mind the better object of empathy?

More commonly, moral problems arise from differences in our assessment of what moral feelings are present in other minds. We justify killing enemies in a conflict by claiming that they are similarly void of moral responsibility towards us. When arguing how to punish criminals, we're tempted to ask for a stronger punishment if we feel they have no moral boundaries. If we are racist, we assume that people of other cultures don't apply high moral standards in their interactions with us, and we set about treating them "the same" way. It is very self-serving, seductive, and open to manipulation to believe that another person's morality has broken down.

Still other problems arise from the balance between empathy and reciprocity as moral drives. If we love someone, but they have done something wrong and the state, acting fairly, wants to punish them, what is the right thing to do? If someone has killed others with no apparent remorse, but they are still human and probably in need of compassion, which motive should have greater weight in how we deal with the case? Suppose that two factions who despise each other have fought over some territory for decades, resulting in terrible bloodshed on both sides. As third parties, should we take a dim view of their respective morals, or extend compassion to both of them?

Curiously, the subject of morality, the mind that is expected to do the judging, is not our own self, nor is it the other to whom we extend moral feelings and actions. It is a third party. We act morally to be positively judged by some form of peer group or authority structure in which we feel we belong. One can imagine that our morality evolved this way because it provides a stable mechanism for altruism, whereas a morality rooted purely within the self would easily degenerate into sadism. Unfortunately, there seems to be no evolved trait that drives us to maximise the size of this group.

We differ materially in moral behaviour by how broadly we choose to define our approval group. Relatively few people seek approval from the entire body of humanity, and we think of this as an unrealistic or admirable stance. More commonly, we seek approval from our own state or culture, and once we get it we can go to war with other states or cultures. Some people get their approval from smaller groupings such as sects or political alignments, which gives rise to hostility or fighting. Gangsters, terrorists, Nazis, and CEOs all get their moral approval from extremely narrow groups, perpetrating great harm on the majority. Families are an appalling source of approval unchecked by moral obligation to the world outside.

As well as choosing the scope of our moral peer group, we can also choose the weight that we give to its messages compared to messages that might reach us from outside with a different moral teaching. For example if we are strongly religious, we're likely to hear only what our church says, ignoring criticism or argument from the outside. We can choose to take in the moral messages of our national media only, or we can pay attention to what people in other places, perhaps where we fight wars, are saying about us. If we are in a highly disciplined organisation such as a corporation or an army, we're more likely to limit ourselves to a narrow moral code than if we're an unaffiliated member of the public.

So, while the dual motives of empathy and reciprocity give rise to some interesting, slightly theoretical moral problems, the burning moral issues that we have to deal with arise from the peer groups that people use as suppliers of moral approval. To make the world a better place, we must broaden these groups and make their walls more permeable. We must therefore engage rather than antagonise those whose morals we seek to change, we must try to demolish or break holes in approval groups such as countries or families, and if we find ourselves inside the group of someone who does wrong we must use the opportunity to pass a moral message.