I was having a conversation with my dad the other day, who tends towards the optimistic end of the spectrum as far as scientific progress goes. He was arguing that computers are bound to achieve self-awareness in the near future, given that they are advancing at such a great rate. I've included my response below, for anyone who's interested (be warned that many of the thoughts therein are pilfered from Raymond Tallis, Kenan Malik, Nicholas Humphrey and other techno-skeptics). Also, be warned: it's pretty long.
The conversation that we had about computers got me thinking: it was difficult to think on the run about the computer intelligence issue, so I’ve refined my thoughts on it a little.
I think it helps to think back to the first calculators. Let’s say, for the sake of argument, that in 1950 there was an autistic maths genius who could calculate pi to 100 decimal places in only 3 seconds. This was a lot faster than anyone else in the world.
Then the first decent calculator was invented. It could calculate pi to one hundred decimal places in 2 seconds. When everyone heard about this, they were terrified. A computer had knocked man off his perch, threatening all that made him superior to the beasts!
Soon, though, people realised that they were mistaken. The calculator was better at calculating pi than the maths genius, but this did not make the computer ‘human’. The computer and the maths genius were performing an identical operation: feeding numbers into a simple algorithm. Using an algorithm is not a human trait; a computer that can execute algorithms quickly cannot be said to be ‘human’. (It also shows how easily human-type words infect how we think about computers, making them seem more human than they are: the calculator wasn’t ‘feeding’ anything into anything, because it has no agency. Its answers were simply the result of electrons being forced through a bunch of switches in a chunk of silicon by the laws of physics.)
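To make the ‘simple algorithm’ point concrete, here is a sketch in Python (obviously not what any 1950s calculator ran, and purely illustrative) of computing pi to 100 decimal places using Machin's formula with plain integer arithmetic. Whether a savant or a chip carries it out, it is the same mechanical recipe:

```python
# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239),
# computed with integer arithmetic so every digit is exact.

def arctan_inv(x: int, digits: int) -> int:
    """arctan(1/x), scaled by 10**(digits + 10), via the Taylor series."""
    scale = 10 ** (digits + 10)      # ten extra guard digits
    term = scale // x                # first term: 1/x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x               # next odd power of 1/x
        n += 2
        sign = -sign
        total += sign * term // n    # alternating series
    return total

def pi_digits(digits: int = 100) -> str:
    """Pi as a string, to `digits` decimal places."""
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    s = str(pi // 10 ** 10)          # drop the guard digits
    return s[0] + "." + s[1:digits + 1]
```

Calling `pi_digits(100)` spits out the hundred places in a few microseconds; the point is that nothing in this loop is any more ‘mental’ than the savant's rote procedure.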
The chess case is more complicated than the pi case, but not for the reason that I first thought. The reason why a Grandmaster-beating computer freaked everyone out so much is that the computer and Garry Kasparov – unlike the calculator and the maths genius – were not playing the same game.
Kasparov was playing chess. The computer, however, was only playing ‘chess’. To all observers, ‘chess’ was indistinguishable from chess. There was a real chess board in the room, with real pieces. Every time the computer printed a move on the screen, the attendants would physically move the chess pieces on the board.
‘Chess’ also looked like chess because Kasparov was in the room, sweating, becoming increasingly agitated, and generally acting human. We are used to seeing human-like things interacting with human-like things. So, by inference, the spectators assumed that Kasparov’s opponent was also a human-like thing.
But in fact, for a computer, playing ‘chess’ is just like calculating pi: numbers are fed into an algorithm. The computer, in a sense, did not beat Kasparov at chess, because it was not playing chess at all. It was only playing ‘chess’.
Kasparov, however, was playing chess. If he were playing chess against another Grandmaster, his success would rest largely on his ability to second-guess his opponent several moves in advance. To win at chess, chess masters must construct a psychological profile of their opponent. This requires an understanding that the opponent is a thinking being, with his own self-contained intentions and desires. Only conscious entities can do this.
When two chess masters play each other, there is a circulation of thought between them:
‘If I do X, he will do Y; but if I do Z instead, he will still think that I am going to do X, because I know that he doesn’t know that I know that he will do Y if I do X’. This is not just number-crunching: it requires multiple levels of intentionality. Not only do you have to understand that your opponent is a thinking being, but you also have to understand that your opponent knows that you are a thinking being. And so on. This requires empathy with your opponent.
If the players know each other beforehand, this circulation of empathy becomes more obviously ‘human’. So, the famous Bobby Fischer vs. Boris Spassky match in 1972 was highly dramatic, because each player knew the other, and could use their previous knowledge to second-guess the other man’s technique.
But even if the players don’t know each other, they have to quickly construct a complex, evolving sketch of the other player’s mind. This means building a set of assumptions about the opponent’s future behaviour, and revising them when the opponent violates expectations. For example, if I know you as a reckless player, it may take me some time to realise that you are playing cautiously.
It’s true that chess-playing computers can use strings of human-like moves that seem to demonstrate an understanding of human psychology. But such moves have been put there by human programmers. In these cases, the computer functions as a simple container that has been filled with the products of human creativity (i.e. millions of moves). The computer uses an algorithm to select the most suitable string of moves, and uses that string to do the job: just like calculating pi.
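Deep Blue's actual search and evaluation code has never been published in full, but the family of techniques it drew on – minimax search over a tree of positions, plus a scoring function – can be sketched. Here is a toy version for a trivial game (take 1 or 2 sticks from a pile; whoever takes the last stick wins), just to show that ‘choosing a move’ reduces to exhaustively scoring positions, with no model of the opponent's mind anywhere in the loop:

```python
def minimax(pile: int, maximizing: bool) -> int:
    """Score a position by brute-force lookahead.
    +1 means the maximizing player wins with best play, -1 means they lose."""
    if pile == 0:                            # player to move cannot move: they lost
        return -1 if maximizing else 1
    results = [minimax(pile - take, not maximizing)
               for take in (1, 2) if take <= pile]
    # 'Psychology' never enters: just pick the best/worst number.
    return max(results) if maximizing else min(results)

def best_move(pile: int) -> int:
    """Choose the take whose resulting position scores best for us."""
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: minimax(pile - take, False))
```

A real engine swaps the pile of sticks for a chess position, prunes the tree, and uses a hand-tuned evaluation function and an opening book supplied by humans, but the skeleton is the same: numbers in, numbers out.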
The difference between Kasparov and Deep Blue was also invisible because humans are hardwired to ascribe intentionality to objects. We rely for our survival on interacting with other people, and we have an innate tendency to ascribe consciousness to non-conscious things (e.g. the wind blows a potplant over, and I think there is an intruder). So, we assume that Deep Blue was doing what Kasparov was doing – i.e. constructing a psychological profile of its opponent – but it wasn’t. They were doing totally different things.
Kasparov, meanwhile, couldn’t obtain a psychological profile of his opponent, for the simple reason that the computer has no psychology. A computer doesn’t ‘change tactics’ in order to psychologically intimidate its opponent – it just crunches more and more numbers. Another reason for the confusion is that the language of chess is hopelessly biased in favour of human agency: the computer is not really performing a ‘move’; it does not really put Kasparov in ‘check’; it is not really playing ‘aggressively’. All of these human terms are illusions based on our past experiences with humans. The computer is not ‘playing’ chess: it is performing algorithms.
Another reason that Deep Blue’s victory was seen as overly important is because of the strange nature of the game of chess. Chess comes with an incredible amount of cultural baggage. The powerful imagery of medieval battles gives chess an emotional dimension that it does not really have.
Imagine if chess pieces were identically shaped, and distinguished only by number. For instance: all the pieces are numbered squares. Call the Knight ‘piece no. 3’. Rule: Piece 3 can only move in an ‘L’ shape. Call the bishop ‘piece no. 4’. Rule: Piece no. 4 can only move diagonally. The game would be exactly the same, yet much of the effect would be lost. The rich connotations of battle are not part of the rules – we bring these feelings to the game because we are human, and are affected by the emotional connotations of battle.
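The renumbering thought-experiment is easy to make literal. In this hypothetical sketch, the two pieces above become bare integers and their rules become geometry; nothing of the game's logic is lost, only the medieval imagery:

```python
# Pieces as bare numbers: the rules are unchanged, only the imagery is gone.
# (A hypothetical encoding of just the two rules above, not a full chess engine.)

def legal_step(piece: int, dr: int, dc: int) -> bool:
    """Is a displacement of (dr rows, dc columns) legal for this piece
    on an otherwise empty board?"""
    if piece == 3:   # 'piece no. 3' (the knight): L-shaped jumps only
        return sorted((abs(dr), abs(dc))) == [1, 2]
    if piece == 4:   # 'piece no. 4' (the bishop): any diagonal distance
        return abs(dr) == abs(dc) != 0
    return False
```

Played with these rules, the game is move-for-move identical; whatever drama remains is supplied entirely by the humans at the board.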
This change to the game of chess would make the nature of the computer’s ‘victory’ a lot clearer. Computer chess is distorted by the same illusion that makes a Windows operating system seem more ‘human’ than a DOS system, even though, underneath, both are performing the same kind of operations.
This makes me think that the predictions about increased computer ‘intelligence’ have come to exactly nothing. Computers are faster at calculating pi than they were in 1950, because humans can now cram more logic gates onto a silicon chip than they could before. But that is all computers are better at. They are no closer to achieving consciousness now than they were when the abacus was invented a few thousand years ago. Same principle, same result – just more beads being flipped faster. The illusion of ‘intelligence’ is all in the interface: we supply the ‘human’ dimension and falsely ascribe it to the computer.
I can’t see how a supercomputer could be seen as anywhere near as ‘intelligent’, in an emotional sense, as, say, a mouse. The false promise of AI is even clearer when you think of how much better computers are than mice at lots of things (e.g. mice can’t calculate pi), yet how much worse they are at others. A mouse can experience a primitive form of affection for its carer – not because it can do more algorithms than a computer, but because it has a dim awareness of the existence of another being with separate intentions, even if this awareness is very limited.
Monday, August 4, 2008
3 comments:
"Good morning Dave..."
I love reading your work, Tim. I just wish I could learn to play 'chess'.
Actually Kasparov DID change his game play based on how he thought Deep Blue thought. In one game, he played a strange move early on because he knew that Deep Blue wouldn't have the move in its opening database and would thus have to start spending more processing power early in the game, losing a lot of time on its clock in the process. It's the same thing he would do if he were playing against a human opponent who he knew didn't know that opening.
Anyway, where do you draw the line between intelligence and machine? One could argue we humans are simply massively parallel processors made of neurons (aka machines). How do we know that that special something we call "consciousness" actually has an effect on our actions, and that our actions aren't just the effect of our brains alone?
Similarly, how do we know that computers don't already have consciousness? This line of argument boils down to how you define consciousness, but it's something to consider anyway.
Good post.
Hi Streetcat,
That's really interesting! I had no idea about that. Did the computer get penalised for being slower? (I don't know a great deal about chess, so not sure how these things work).
Re: how do we know that the computer doesn't have intelligence. Well, I guess we don't. It does boil down to definition in a way, but I think that any reasonable definition of consciousness includes a sense of self-awareness.