Would you ever take moral advice from a computer program, like Apple's Siri? To my mind, this question boils down to whether it will ever be possible to program a computer to make realistic, life-like, recognisably human moral decisions. And the answer to this question depends on one’s underlying view of morality. If you view morality as something magical or mystical, then the answer may be no. But if you view morality as part of the material world, a product of human biology and culture, then the answer must surely be yes. If meat is capable of morality, then there is no principled reason why silicon could not be.
The next question, then, is what the content of Siri’s morality would be. What, after all, is the difference between right and wrong?
The best theory we have is that the function of morality is to promote cooperation: morality can be seen as a set of heuristics, decision rules or strategies for solving the problems of cooperation and conflict recurrent in human social life (see here).
It would be a fairly simple matter to program these decision rules into a computer – in fact, we already do.
Reciprocity is an important component of human morality — and we have already programmed artificial agents to reciprocate, as in the case of tit-for-tat in Axelrod’s tournaments. These agents exhibit the functional equivalents of trust, honesty, revenge, remorse and forgiveness… And there’s no reason to suppose we couldn’t also program artificial intelligence with all the other components of our moral psychology — caring for kin, adopting local conventions, respecting hierarchies and so on.
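For the curious, here is roughly what such an agent looks like in code: a minimal Python sketch of tit-for-tat playing an iterated prisoner’s dilemma against an unconditional defector. The payoff values are the standard textbook ones, and the strategy and function names are mine rather than Axelrod’s.

```python
# A minimal sketch of tit-for-tat in an iterated prisoner's dilemma.
# Payoffs are the standard textbook values; names are illustrative.

COOPERATE, DEFECT = "C", "D"

# (my score, their score) for each pair of moves
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT):    (0, 5),
    (DEFECT, COOPERATE):    (5, 0),
    (DEFECT, DEFECT):       (1, 1),
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return COOPERATE if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return DEFECT

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return the two total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        gain_a, gain_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + gain_a, score_b + gain_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # exploited once, then retaliates every round
print(play(tit_for_tat, tit_for_tat))    # mutual cooperation throughout
```

The ‘forgiveness’ mentioned above is simply that the agent goes back to cooperating as soon as its opponent does.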
Of course, these agent-based simulations operate in very circumscribed artificial environments. So the real challenge would be, not to program morality, but to have such a program interact with the messy reality of real-world moral problems.
But here, Siri would have an advantage, because it would not have to get to grips with the real world – it would instead interrogate the user for the relevant information.
We might envisage a program that ran through a checklist or decision tree, posing questions such as: Who does this problem concern (you, a family member, a friend, a member of your community, a stranger)? How important is this person to you? How frequently do you see them? What have they done for you in the past? How powerful are they?… and so on.
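Purely by way of illustration, a toy version of that checklist might look something like this in Python. The questions, weights and thresholds are invented for the example; they stand in for whatever a worked-out moral theory would actually supply.

```python
# A toy illustration of the checklist idea. The questions, weights and
# thresholds below are invented for the example, not a real moral calculus.

QUESTIONS = [
    ("Who does this concern? (kin/friend/community/stranger): ",
     {"kin": 1.0, "friend": 0.8, "community": 0.5, "stranger": 0.3}),
    ("How important is this person to you? (high/medium/low): ",
     {"high": 1.0, "medium": 0.6, "low": 0.2}),
    ("Have they helped you in the past? (yes/no): ",
     {"yes": 1.0, "no": 0.4}),
]

def interrogate():
    """Walk the user through the checklist and return a crude 'obligation' score."""
    score = 0.0
    for prompt, weights in QUESTIONS:
        answer = input(prompt).strip().lower()
        score += weights.get(answer, 0.0)  # unrecognised answers count for nothing
    return score / len(QUESTIONS)

if __name__ == "__main__":
    obligation = interrogate()
    if obligation > 0.7:
        print("This looks like a strong obligation.")
    elif obligation > 0.4:
        print("A moderate obligation; weigh it against your other commitments.")
    else:
        print("A weak obligation; helping would be generous rather than required.")
```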
Having parsed the problem, and weighed up the various options, Siri might come back with a recommendation. It might tell you the right thing to do, or the better thing to do. It might tell you what the average person would do, or what a high-status person would do. Or if the program were networked, with access to a database of other users, it might tell you what other people have actually done in similar situations.
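Again only as a sketch, the networked version could start out as a simple nearest-neighbour lookup: encode each past situation as a handful of features, find the stored cases most similar to the user’s, and report what those users chose. The features and records below are made up for the example.

```python
# A sketch of looking up what other users did in similar situations.
# The features, records and similarity measure are invented for the example.

from collections import Counter

def similarity(a, b):
    """Count how many features two situations share."""
    return sum(1 for key in a if a[key] == b.get(key))

def what_others_did(situation, database, k=3):
    """Return the most common choice among the k most similar stored cases."""
    ranked = sorted(database, key=lambda rec: similarity(situation, rec["features"]), reverse=True)
    choices = [rec["choice"] for rec in ranked[:k]]
    return Counter(choices).most_common(1)[0][0]

database = [
    {"features": {"who": "friend", "cost": "low"},  "choice": "helped"},
    {"features": {"who": "friend", "cost": "high"}, "choice": "declined"},
    {"features": {"who": "kin",    "cost": "high"}, "choice": "helped"},
]

print(what_others_did({"who": "friend", "cost": "low"}, database))  # -> 'helped'
```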
You could also calibrate the system, to tailor it to your own proclivities, by completing various personality tests and measures of social and moral attitudes. It could then tell you what you would do if you thought about it carefully, what you would be most comfortable doing, what was consistent with what you did in the past — or perhaps ‘what you would do on a good day’.
You might think that the real test would come with moral dilemmas — adjudicating between apparently incommensurable moral goals. Actually, I think such situations would be the easiest to resolve. If it were a choice between doing the right thing and the wrong thing (for example, helping yourself at the expense of others), then you don’t need a computer to tell you the difference. What’s lacking is not advice, but motivation. And if it’s a case of two options — for example, helping your family versus helping your friends — which are both equally moral, and between which you (and Siri) are indifferent, then the program could help by ‘flipping a coin’, or acting like a kind of moral 8-ball. For if each option is as good as the other, then the sooner you resolve the deadlock the better.
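The moral 8-ball is the easiest piece of all to write. Assuming the options really have been judged equally good, a random choice is enough to break the deadlock:

```python
import random

def moral_eight_ball(options):
    """If the options are judged equally good, any will do; pick one at random."""
    return random.choice(options)

print(moral_eight_ball(["help your family", "help your friends"]))
```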
I am not a programmer, and so I do not know if or when such a machine might be built. But I am confident that the attempt to build one would be instructive. It would be a kind of ‘moral Turing test’ that would allow us to try out our theories of moral psychology. The closer Siri’s responses corresponded to those of ‘the reasonable person’, the ‘man on the Clapham omnibus’, a trusted friend, a respected colleague… the closer we would be to fully understanding the nature and content of human morality.
___
Picture credit, The Economist