Monday, August 4, 2008

Thoughts on Artificial Intelligence

I was having a conversation with my dad the other day, who tends towards the optimistic end of the spectrum as far as scientific progress goes. He was arguing that computers are bound to achieve self-awareness in the near future, given that they are advancing at such a great rate. I've included my response below, for anyone who's interested (be warned that many of the thoughts therein are pilfered from Raymond Tallis, Kenan Malik, Nicholas Humphrey and other techno-skeptics). Also, be warned - it's pretty long....

The conversation that we had about computers got me thinking: it was difficult to think on the run about the computer intelligence issue, so I’ve refined my thoughts on it a little.
I think it helps to think back to the first calculators. Let’s say, for the sake of argument, that in 1950 there was an autistic maths genius who could calculate pi to 100 decimal places in only 3 seconds. This was a lot faster than anyone else in the world.
Then the first decent calculator was invented. It could calculate pi to one hundred decimal places in 2 seconds. When everyone heard about this, they were terrified. A computer had knocked man off his perch, threatening all that made him superior to the beasts!
Soon, though, people realised that they were mistaken. The calculator was better at calculating pi than the maths genius, but this did not make the computer ‘human’. The computer and the maths genius were performing an identical operation: feeding numbers into a simple algorithm. Using an algorithm is not a human trait; a computer that can do algorithms quickly cannot be said to be ‘human’. (It also shows how easily human-type words infect how we think about computers, making them seem more human than they are: the calculator wasn’t ‘feeding’ anything into anything, because it had no agency. Its ability to get the answer was simply the result of electrons being forced through a bunch of switches in a chunk of silicon by the laws of physics.)
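To make the ‘simple algorithm’ point concrete, here is a minimal sketch of how a machine (or a very patient human) can grind out pi to 100 decimal places using nothing but integer arithmetic. It uses Machin’s arctangent formula from 1706 – one algorithm among many, picked purely for illustration, and certainly not what any particular 1950s calculator ran:

```python
def arctan_inv(x, scale):
    """arctan(1/x) multiplied by `scale`, via the Taylor series
    arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ... (integers only)."""
    term = total = scale // x
    n, sign = 3, -1
    while term:
        term //= x * x              # next power of 1/x
        total += sign * (term // n)
        n, sign = n + 2, -sign
    return total

def pi_digits(n):
    """Return pi as an integer: the digit 3 followed by n decimal places."""
    scale = 10 ** (n + 10)          # 10 guard digits absorb truncation error
    # Machin (1706): pi/4 = 4*arctan(1/5) - arctan(1/239)
    pi = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
    return pi // 10 ** 10           # drop the guard digits

print(pi_digits(100))               # 3141592653589793238462643383279...
```

There is nothing ‘human’ happening in there: just repeated division and addition, which is exactly what the calculator’s switches were doing.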
The chess case is more complicated than the pi case, but not for the reason that I first thought. The reason why a Grand-Master-beating computer freaked everyone out so much is that the computer and Garry Kasparov – unlike the calculator and the maths genius – were not playing the same game.
Kasparov was playing chess. The computer, however, was only playing ‘chess’. To all observers, ‘chess’ was indistinguishable from chess. There was a real chess board in the room, with real pieces. Every time the computer printed a move on the screen, the attendants would physically move the chess pieces on the board.
‘Chess’ also looked like chess because Kasparov was in the room, sweating, becoming increasingly agitated, and generally acting human. We are used to seeing human-like things interacting with human-like things. So, by inference, the spectators assumed that Kasparov’s opponent was also a human-like thing.
But in fact, for a computer, playing ‘chess’ is just like calculating pi: numbers are fed into an algorithm. The computer, in a sense, did not beat Kasparov at chess, because it was not playing chess at all. It was only playing ‘chess’.
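What ‘feeding numbers into an algorithm’ means for a game-playing machine can itself be sketched in a few lines. The core of a chess engine is essentially minimax search: assign numeric scores to positions, then mechanically pick the move that maximises your worst case. The sketch below uses a toy game (take 1–3 stones, last stone wins) instead of chess, purely to keep it short – it is an illustration of the principle, not Deep Blue’s actual code:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def score(pile, maximising):
    """Minimax value of a position in the 'take 1-3 stones, last
    stone wins' game: +1 if the maximising player wins with best
    play from here, -1 otherwise."""
    if pile == 0:
        # the previous player took the last stone and won
        return -1 if maximising else +1
    options = [score(pile - take, not maximising)
               for take in (1, 2, 3) if take <= pile]
    return max(options) if maximising else min(options)

def best_move(pile):
    """Pick the take that leaves the opponent worst off."""
    return max((t for t in (1, 2, 3) if t <= pile),
               key=lambda t: score(pile - t, False))

print(best_move(21))   # 1 -- leaving a multiple of 4 is a forced win
```

Note that there is no ‘psychology’ anywhere in this: the move falls out of exhaustive arithmetic over numbered positions.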
Kasparov, however, was playing chess. If he was playing chess against another Grand Master, his success would rest largely on his ability to second-guess his opponent several moves in advance. To win at chess, chess masters must construct a psychological profile of their opponent. This requires an understanding that the opponent is a thinking being, with his own self-contained intentions and desires. Only conscious entities can do this.
When two chess masters play each other, there is a circulation of thought between them:
‘If I do X, he will do Y; but if I do Z instead, he will still think that I am going to do X, because I know that he doesn’t know that I know that he will do Y if I do X’. This is not just number-crunching: it requires multiple levels of intentionality. Not only do you have to understand that your opponent is a thinking being, but you also have to understand that your opponent knows that you are a thinking being. And so on. This requires empathy with your opponent.
If the players know each other beforehand, this circulation of empathy becomes more obviously ‘human’. So, the famous 1972 Bobby Fischer vs. Boris Spassky match was highly dramatic, because each player knew the other, and could use that previous knowledge to second-guess the other man’s technique.
But even if the players don’t know each other, they have to quickly construct a complex, evolving sketch of the other player’s mind. This means building a set of assumptions about the opponent’s future behaviour, and revising them when the opponent violates expectations. For example, if I know you as a reckless player, it may take me some time to realise that you are playing cautiously.
It’s true that chess-playing computers can use strings of human-like moves that seem to demonstrate an understanding of human psychology. But such moves have been put there by human programmers. In these cases, the computer functions as a simple container that has been filled with the products of human creativity (i.e. millions of moves). The computer uses an algorithm to select the most suitable string of moves, and uses that string to do the job: just like calculating pi.
The difference between Kasparov and Deep Blue was also invisible because humans are hardwired to ascribe intentionality to objects. We rely for our survival on interacting with other people, and we have an innate tendency to ascribe consciousness to non-conscious things (e.g. the wind blows a pot plant over, and I think there is an intruder). So, we assume that Deep Blue was doing what Kasparov was doing – i.e. constructing a psychological profile of its opponent – but it wasn’t. They were doing totally different things.
Kasparov, meanwhile, couldn’t obtain a psychological profile of his opponent, for the simple reason that the computer has no psychology. A computer doesn’t ‘change tactics’ in order to psychologically intimidate its opponent – it just crunches more and more numbers. Another reason for the confusion is that the language of chess is hopelessly biased in favour of human agency: the computer is not really performing a ‘move’; it does not really put Kasparov in ‘check’; it is not really playing ‘aggressively’. All of these human terms are illusions based on our past experiences with humans. The computer is not ‘playing’ chess: it is performing algorithms.
Another reason that Deep Blue’s victory was seen as overly important is because of the strange nature of the game of chess. Chess comes with an incredible amount of cultural baggage. The powerful imagery of medieval battles gives chess an emotional dimension that it does not really have.
Imagine if chess pieces were identically shaped, and distinguished only by number. For instance: all the pieces are numbered squares. Call the knight ‘piece no. 3’. Rule: piece no. 3 can only move in an ‘L’ shape. Call the bishop ‘piece no. 4’. Rule: piece no. 4 can only move diagonally. The game would be exactly the same, yet much of the effect would be lost. The rich connotations of battle are not part of the rules – we bring these feelings to the game because we are human, and are affected by the emotional connotations of battle.
This change to the game of chess would make the nature of the computer’s ‘victory’ a lot clearer. Computer chess is distorted by the same illusion that makes a Windows operating system seem more ‘human’ than a DOS system, even though they are performing identical operations.
This makes me think that the predictions about increased computer ‘intelligence’ have come to exactly nothing. Computers are faster at calculating pi than they were in 1950, because humans can now cram more logic gates on a silicon chip than they could before. But that is all computers are better at. They are no closer to achieving consciousness now than they were when the Chinese invented the abacus a few thousand years ago. Same principle, same result – just more beads on the abacus being flipped faster. The illusion of ‘intelligence’ is all in the interface: we supply the ‘human’ dimension and falsely ascribe it to the computer.
I can’t see how a supercomputer could be seen as anywhere near as ‘intelligent’, in an emotional sense, as, say, a mouse. The false promise of AI is even clearer when you think of how much better computers are than mice at lots of things (e.g. mice can’t calculate pi), yet how much worse they are at others. A mouse can experience a primitive form of affection for its carer – not because it can do more algorithms than a computer, but because it has a dim awareness of the existence of another being with separate intentions, even if this awareness is very limited.

Post-life

Some months ago, my friend – who is turning out to be a frequent source of unconventional wisdom these days – gave me some advice about managing anxiety. True to form, the advice was pretty much out of left field; when I first heard it, I was inclined to take it with a rather large chunk of salt. But as I think more about it, his theory sounds more plausible. I’ll paraphrase him here (although, due to my partial memory of the conversation, he’ll just have to grin and bear it if I heinously misquote him).

“A few years ago, I noticed that I was having quite a few panic attacks. When I spoke to my friends – who were also in their mid-20s at this time – it turned out that they were having similar experiences. For a while there, it seemed like everyone I knew was suffering from panic attacks. This made me realise that our mid-20s are a pretty intense time for such feelings.
This got me thinking: why is it, at this age, that anxiety hits people so hard?
I began to realise that this type of anxiety could be caused by our growing awareness of our mortality.
In our teenage years, we think that we’re pretty much invincible – and by and large, we are. But when you notice yourself aging a little bit, you begin to understand that this kind of attitude can’t be sustained. We all get older, of course; when we start understanding that we’re on a continuum with our elders, rather than being totally separate from them, we suddenly stop thinking of ourselves as a ‘special case’.
I stopped feeling anxious about the future once I began to think of mortality differently. When I was younger, I thought about the fact of death a lot differently than I do now.
In our teens, we find it hard to think of the fact of death in a genuine way. It’s simply that we don’t really understand it. We all have moments when we – even fleetingly – imagine what it would be like if we were no longer around.
The difference between then and now is that these imaginings of the end of our lives are based on a failure of imagination. When we are feeling very sorry for ourselves (for instance, when we feel unappreciated by our peers) we ask ourselves the question: ‘what would they do if I wasn’t here?’ This idea is based on the selfish idea that all children have – i.e. the world won’t be able to function without me.
Although it may not seem like it, this is a comforting thought. But it’s also a destructive one.
When we are young, we find it impossible to think of a situation when we are ‘not here’. This is because we can’t help seeing it from our own perspective.
Think back to when you were a child, when you were angry with a parent or friend for not appreciating you. You probably thought, ‘what would they do without me?’
If we travel forward in time until after our deaths, we sometimes tend to imagine people grieving for us at our funeral. It is a type of revenge fantasy. The people in attendance will be saying, ‘I wish I’d appreciated him when he was around.’ In this common childhood daydream, we are watching our friends as they mourn for us. We feel validated by this fantasy, because we are able to maintain the same perspective that we have in our everyday lives. We stick around as observers, just so that we can say, ‘I told you so.’
But if we can only think of death by including our own subjectivity, we have not properly faced up to it. The challenge of mortality, for an adult, is to think of life going on without you. This doesn’t mean thinking of life going on while you are watching it from a nice spot in heaven. The conventional Christian idea of an afterlife is flawed, because it can’t take the ‘self’ out of the equation. When we imagine ourselves in ‘heaven’, we are still ‘alive’ in the sense that we retain our own identity. It is a comforting denial of the fact of death, rather than an acceptance of it.
Thinking of death in a purer sense is more difficult, but it helps to dispel anxiety about our future. We have to be able to think of the time after we are gone as lacking our selves, not just our bodies. Getting rid of the ‘observer’ also rids you of the thought that your friends and family only have meaning in relation to you. Forcing yourself to imagine a state of the world where you no longer have a perspective to view things from may sound scary; however, I found that it was a very effective way to come to terms with the limitations of my own existence.
After I learned to jettison the notion of ‘afterlife’ as a time in which we can only helplessly observe the world, I found that my anxiety was no longer such a problem. You can only accept your finite lifespan once you realise that the end of life also entails the end of subjectivity.”

Not bad for a half hour at the pub, eh?