Monday, 30 November 2015

What would Siri do?

Would you ever take moral advice from a computer program, like Apple's Siri? To my mind, this question boils down to whether it will ever be possible to program a computer to make realistic, life-like, recognisably human moral decisions. And the answer to this question depends on one’s underlying view of morality. If you view morality as something magical or mystical, then the answer may be no. But if you view morality as part of the material world, a product of human biology and culture, then the answer must surely be yes. If meat is capable of morality, then there is no principled reason why silicon could not be.

The next question is then, what would the content of Siri’s morality be? What, after all, is the difference between right and wrong?

The best theory we have is that the function of morality is to promote cooperation: morality can be seen as a set of heuristics, decision rules, or strategies for solving the problems of cooperation and conflict recurrent in human social life (see here).

It would be a fairly simple matter to program these decision rules into a computer – in fact, we already do.

Reciprocity is an important component of human morality — and we have already programmed artificial agents to reciprocate, as in the case of tit-for-tat in Axelrod’s tournaments. These agents exhibit the functional equivalents of trust, honesty, revenge, remorse and forgiveness… And there’s no reason to suppose we couldn’t also program artificial intelligence with all the other components of our moral psychology — caring for kin, adopting local conventions, respecting hierarchies and so on.
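To give a flavour of how simple such an agent can be, here is a minimal sketch of a tit-for-tat player and a pairing loop, written in Python purely for illustration; the function names and game loop are my own, not Axelrod’s original tournament code.

```python
# A minimal sketch of a tit-for-tat agent of the kind entered in Axelrod's
# tournaments. Illustrative only: names and structure are my own.

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move; thereafter copy the partner's last move."""
    if not their_history:
        return "C"                # start by 'trusting'
    return their_history[-1]      # reciprocate: reward cooperation, punish defection

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other and return their move histories."""
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        history_a.append(move_a)
        history_b.append(move_b)
    return history_a, history_b

# Tit-for-tat quickly 'punishes' an unconditional defector:
print(play(tit_for_tat, always_defect, rounds=5))
# (['C', 'D', 'D', 'D', 'D'], ['D', 'D', 'D', 'D', 'D'])
```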

Of course, these agent-based simulations operate in very circumscribed artificial environments. So the real challenge would be, not to program morality, but to have such a program interact with the messy reality of real-world moral problems.

But here, Siri would have an advantage, because it would not have to get to grips with the real world – it would instead interrogate the user for the relevant information.

We might envisage a program that ran through a checklist or decision tree, posing questions such as: Who does this problem concern (you, a family member, a friend, a member of your community, a stranger)? How important is this person to you? How frequently do you see them? What have they done for you in the past? How powerful are they?… and so on.

Having parsed the problem, and weighed up the various options, Siri might come back with a recommendation. It might tell you the right thing to do, or the better thing to do. It might tell you what the average person would do, or what a high status person would do. Or if the program were networked, with access to a database of other users, it might tell you what other people actually have done in similar situations.
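Here is a toy sketch, in Python, of what that interrogate-then-recommend loop might look like. Every question, category, weight and threshold below is invented for illustration; a real system would need far richer inputs.

```python
# A toy sketch of the interrogate-then-recommend idea.
# All questions, weights and thresholds are invented for illustration.

QUESTIONS = [
    ("relationship", "Who does this problem concern? (kin/friend/community/stranger): "),
    ("importance",   "How important is this person to you? (0-10): "),
    ("past_help",    "How much have they done for you in the past? (0-10): "),
]

RELATIONSHIP_WEIGHT = {"kin": 3.0, "friend": 2.0, "community": 1.5, "stranger": 1.0}

def interrogate_user():
    """Run through the checklist, asking the user for the relevant information."""
    return {key: input(prompt) for key, prompt in QUESTIONS}

def recommend(answers, weights=RELATIONSHIP_WEIGHT):
    """Weigh the obligation to help by relationship, importance and past reciprocity."""
    score = weights.get(answers["relationship"], 1.0)
    score += float(answers["importance"]) / 10
    score += float(answers["past_help"]) / 10
    return "Help them." if score >= 2.5 else "You are under no strong obligation here."

# e.g. recommend(interrogate_user())
```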

You could also calibrate the system, to tailor it to your own proclivities, by completing various personality tests and measures of social and moral attitudes. It could then tell you what you would do if you thought about it carefully, what you would be most comfortable doing, what was consistent with what you did in the past — or perhaps ‘what you would do on a good day’.
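In code, that calibration step might amount to little more than shifting the weights in the previous sketch according to the user’s questionnaire scores; again, the ‘family orientation’ scale and the adjustment rule here are purely invented.

```python
# A sketch of calibrating the relationship weights to the individual user.
# The 0-1 'family orientation' scale and the adjustment rule are invented.

def calibrate(weights, questionnaire):
    """Return a copy of the weights, nudged towards the user's measured proclivities."""
    adjusted = dict(weights)
    family_orientation = questionnaire.get("family_orientation", 0.5)
    adjusted["kin"] *= 1 + family_orientation           # family-minded users weigh kin more heavily
    adjusted["stranger"] *= 1 - family_orientation / 2  # and strangers somewhat less
    return adjusted

# e.g. calibrate(RELATIONSHIP_WEIGHT, {"family_orientation": 0.9})
```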

You might think that the real test would come with moral dilemmas — adjudicating between apparently incommensurate moral goals. Actually, I think such situations would be the easiest to resolve. If it were a choice between doing the right thing and the wrong thing (for example, helping yourself at the expense of others), then you don’t need a computer to tell you the difference. What’s lacking is not advice, but motivation. And if it’s a case of two options — for example, helping your family versus helping your friends — which are both equally moral, and between which you (and Siri) are indifferent, then the program could help by ‘flipping a coin’, or acting like a kind of moral 8-ball. For if each option is as good as the other, then the sooner you resolve the deadlock the better.
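The moral 8-ball is the easiest piece of all to sketch (again, purely illustrative):

```python
import random

def moral_eight_ball(options):
    """When the options are judged equally good, break the deadlock at random."""
    return random.choice(options)

# e.g. moral_eight_ball(["help your family", "help your friends"])
```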

I am not a programmer, and so I do not know if or when such a machine might be built. But I am confident that the attempt to build one would be instructive. It would be a kind of ‘moral Turing test’ that would allow us to try out our theories of moral psychology. The closer Siri’s responses corresponded to those of ‘the reasonable person’, the ‘man on the Clapham omnibus’, a trusted friend, a respected colleague… the closer we would be to fully understanding the nature and content of human morality.
___

Picture credit: The Economist

Wednesday, 15 January 2014

Expecting to fly

In response to this year’s Edge Annual Question, I argue that there is no such thing as association / associative learning, and that we need to abandon the idea in order to make progress with the real science of learning. By way of an alternative to associationism, I recombine existing ideas to form the novel conjecture that humans (and other animals) learn by recombining existing ideas to form novel conjectures (and then putting them to the test). I’ve pasted references to some of these earlier ideas below; and, as the essay is itself an exercise in conjectural learning, I would be very grateful for any ‘tests’ in the form of comments, questions or suggestions for further reading.

The problems with associationism
*Popper, K. R. (1972). Objective Knowledge: An evolutionary approach (Revised ed.). Oxford: Oxford University Press.
Popper, K. R. (1990). A World of Propensities. Bristol: Thoemmes.
Popper, K. R. (1999). All Life is Problem Solving. London: Routledge.

Popper developed his critique of induction, and his alternative hypothetico-deductive method, in the context of philosophical debates about epistemology and scientific method. In his later work, Popper applied this same line of thinking to the related psychological problem of how organisms -- including humans -- acquire knowledge about the world. Popper argued that because induction was logically invalid, it must also be psychologically invalid; in other words, given that there is "no such thing" as induction in logic, induction (or ‘association’) is not available as a means by which organisms could learn about the world. Popper maintained that "there is no such thing as association"; it is "a kind of optical illusion"; "we can, and must, do without [it]" (Popper, 1972). Instead, Popper proposed that organisms come into the world equipped with innate knowledge about what to expect (the product of previous iterations of trial and error by natural selection); and that, during their lifetimes, individual organisms increase their knowledge by testing their expectations against the world and learning from their mistakes. In later lectures he put it like this: "everything we know is genetically a priori. All that is a posteriori is the selection from what we ourselves have invented a priori" (p. 46); and "all knowledge is a priori, genetically a priori, in its content. For all knowledge is hypothetical or conjectural: it is our hypothesis. Only the elimination of hypotheses is a posteriori, the clash between hypotheses and reality. ["our senses can serve us . . . only with yes-and-no answers to our own questions", pp. 46-7 (Popper, 1990)] In this alone consists the empirical content of our knowledge. And it is enough to enable us to learn from experience; enough for us to be empiricists" (p. 47) (Popper, 1999).

Information theory
*Dawkins, R. (1998). The Information Challenge. The Skeptic, 18.
Dawkins, R., & Dawkins, M. (1973). Decisions and the uncertainty of behaviour. Behaviour, 45, 83-103.

Animal learning
Breland, K., & Breland, M. (1961). The misbehavior of organisms. American Psychologist, 16, 681-684.
Garcia, J., & Koelling, R. A. (1966). Relation of cue to consequence in avoidance learning. Psychonomic Science, 4, 123-124. ["The hypothesis of the sick rat, as for many of us under similar circumstances, would be, 'It must have been something I ate.'"]
Gallistel, C. R. (1990). The organization of learning. Cambridge, MA: MIT Press.
Gallistel, C. R. (1999). The replacement of general-purpose learning models with adaptively specialized learning modules. In Gazzaniga (Ed.), The Cognitive Neurosciences (2nd ed., pp. 1179-1191). Cambridge, MA: MIT Press.
Gallistel, C. R., Brown, A. L., Carey, S., Gelman, R., & Keil, F. C. (1991). Lessons from Animal Learning for the Study of Cognitive Development. In S. Carey & R. Gelman (Eds.), The Epigenesis of mind: essays on biology and cognition (pp. 3-36). Hillsdale, NJ: Lawrence Erlbaum Associates.
Gould, J. L. (1986). The Biology of Learning. Annual Review of Psychology, 37, 163-192.
*Gould, J. L., & Marler, P. (1987). Learning by Instinct. Scientific American, 256, 74-85.

Human learning
Boyd, R., Richerson, P. J., & Henrich, J. (2011). The cultural niche: Why social learning is essential for human adaptation. PNAS, 108, 10918-10925.
Gopnik, A. (1996). The Scientist as Child. Philosophy of Science, 63, 485-514.
Marcus, G. (2004). The Birth of the Mind: How a tiny number of genes creates the complexities of human thought. New York: Basic Books.
*Pinker, S. (2010). The cognitive niche: Coevolution of intelligence, sociality, and language. PNAS, 107, 8993-8999.
Spelke, E. (1994). Initial knowledge: Six suggestions. Cognition, 50, 431-445.

* = especially recommended
___

Picture credit: The Daily Telegraph