Many of you will know about the Turing Test. It was developed by Alan Turing, probably the most important person in the history of computing: the brilliant, gay mathematician and philosopher who was tragically driven to suicide by 1950s homophobia. (He did almost all the early groundwork in developing the concept of a general-purpose computer. Who knows what other wonderful things he would have gone on to develop if his extraordinary mind hadn't been cut short by other tiny, intolerant minds.)
The Turing Test is a way to see if a computer has developed intelligence. Like most striking insights, it is remarkably simple... in hindsight. It goes like this:
On the other end of the device you use for communication is either another person or a computer. If you can't tell whether you are talking to a computer or a human, then the computer is, to all intents and purposes, intelligent. We don't need to understand its inner workings at all.
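To make the structure concrete, here's a toy sketch in Python (the responders and the judge are invented stand-ins, not a real implementation): the judge converses blind with either a human or a machine, and if the judge's guesses are no better than chance, the machine passes.

import random

def human(prompt):
    return "Hard to say. Let me think about it."

def machine(prompt):
    # Mimics the human perfectly, by construction.
    return "Hard to say. Let me think about it."

def judge(reply):
    # Faced with indistinguishable replies, the judge can only guess.
    return random.choice([human, machine])

def trial():
    respondent = random.choice([human, machine])
    reply = respondent("What do dreams feel like?")
    return judge(reply) is respondent  # did the judge identify it?

hits = sum(trial() for _ in range(10_000))
print(f"judge accuracy: {hits / 10_000:.2f}")  # ~0.50: the machine passes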
This was countered some time back by a flawed argument that I've always been surprised has sucked so many people in: the Chinese Room Argument.
You are in a room with a whole lot of Chinese symbols written on cards and a big book of rules for using them. Through a slot you are periodically passed more cards with Chinese symbols on them. Your task is to look them up in the book of rules and use it to select other cards to pass back out again, in the right order. You don't understand Chinese; you rely entirely upon the book of rules to tell you which cards to send. Unknown to you, the cards you receive are actually questions, and the cards you send back are intelligent-looking answers. If such a system passed the Turing Test, so the argument goes, it would still not be intelligent, because you have no idea what you are doing.
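If it helps to see the mechanics, here's a minimal Python sketch (the symbols and the rule book are made up for illustration). The operator is nothing more than a lookup loop: match the incoming cards against the rule book, pass out whatever it dictates, and understand nothing.

# Hypothetical rule book: incoming card sequence -> outgoing card sequence.
RULE_BOOK = {
    ("你", "好", "吗"): ("我", "很", "好"),  # "How are you?" -> "I am well"
    ("你", "是", "谁"): ("我", "是", "人"),  # "Who are you?" -> "I am a person"
}

def operator(cards_in):
    # The person in the room: pure lookup, zero understanding.
    return RULE_BOOK.get(tuple(cards_in), ("请", "再", "说"))  # fallback: "say it again"

print(operator(["你", "好", "吗"]))  # ('我', '很', '好')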
The argument sounds reasonable until you realise that the nerve cells in your brain are not conscious and don't understand what you are doing either. They are just little animals that pass messages back and forth according to simple rules. It is the system that is intelligent. It is the system that is conscious. It is unlikely that the rule book in the Chinese Room Argument could be complex enough to produce intelligent responses (intelligence is incredibly complicated and can often give different responses to the same question), but if the rule book were some kind of self-modifying system whose rules produced intelligent replies, then yes, I have no doubt that the system would be intelligent.
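The difference is easy to sketch (again, purely illustrative): a fixed table always returns the same answer, while a rule book that rewrites itself can give different responses to the same question over time.

class StaticRoom:
    # A fixed rule book: the same question always gets the same answer.
    def __init__(self, rules):
        self.rules = dict(rules)

    def answer(self, question):
        return self.rules.get(question, "?")

class AdaptiveRoom(StaticRoom):
    # A self-modifying rule book: answering rewrites the rules themselves.
    def answer(self, question):
        reply = self.rules.get(question, "?")
        self.rules[question] = reply + " (still?)"  # next time, a different rule applies
        return reply

room = AdaptiveRoom({"how are you": "fine"})
print(room.answer("how are you"))  # fine
print(room.answer("how are you"))  # fine (still?)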
Date: 2006-08-04 06:00 am (UTC)
It fooled quite a few.
As a behavioural model of intelligence, the "black-box" approach is good enough for many. It's only after people take the lid off the box for a peek inside that they go: "Hey, what?" and redefine intelligence.
[This is probably what sets me (a Tech Writer) apart from Searle, who's a Systems Analyst when you get down to it. ;) ]
A similar thing happens with linguists and questions of Language and Animal Intelligence.
I think it's important to remember that what is being discussed is Artificial Intelligence, not Human Intelligence, and that the two will probably never be identical. This isn't a criticism of one or the other, merely an observation, and something useful to keep in mind when people scream: "But it's not the same so it's bad/wrong/false!"