Monday, 10 August 2015

Advocates of Artificial Intelligence as Behaviourists


 

In extremely general terms, it can be said that behaviourism was a response to the Cartesian (or, even more widely, Western) philosophical tradition in which behaviour (actions, or what is done by persons) was seen as the outward expression of what goes on in the mind. Thus, in that sense, many of those initially involved in artificial intelligence (AI) were following in behaviourism's footsteps: they believed that if a computer (or robot) behaved as if it had intelligence or had a mind, then, almost by definition, it must be intelligent or have a mind.

Many other currents in post-World War Two philosophy played down the innards of the mind and, consequently, played up behaviour. There was the work of the later Wittgenstein, in which private states were seen as nothing more than beetles in boxes. There was Gilbert Ryle's The Concept of Mind, and Quine saying that all there is to meaning is “overt behaviour”. And then functionalism followed all that.

Specifically in terms of AI, it can fairly safely be said that many of the defenders of AI denied (or simply played down) the distinction between actions (or behaviour) and what's supposed to be “behind” action (or behaviour). Thus, if that “binary opposition” is rejected, then all we have to go on are the actions (or behaviour) of computers. And if computers pass the Turing test, then they're intelligent. Full stop. Indeed it's only a few behavioural steps forward from this to argue that computers have minds.

Of course, if we follow this line to the letter, then it can be said that zombies (in the philosophers' sense) also have minds, as well as consciousness. And a thermostat has a little bit of a mind too.

If you think my inclusion of a thermostat is ridiculous, then here's John Searle on John McCarthy, the man who coined the term 'artificial intelligence'. Searle writes:

“McCarthy says, 'even a machine as simple as a thermostat can be said to have beliefs'. I admire McCarthy's courage. I once asked him: 'What beliefs does your thermostat have?' And he said, 'My thermostat has three beliefs: it believes it's too hot in here, it's too cold in here, and it's just right in here.'” (1984)
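If we take McCarthy at his word, the thermostat's three "beliefs" reduce to three behavioural states. Here's a toy sketch of that idea (all the names here are my own inventions for illustration, not anything from McCarthy or Searle):

```python
class Thermostat:
    """A toy thermostat whose "beliefs" are exhausted by its behaviour:
    which of three outputs it produces for a given temperature."""

    def __init__(self, setpoint: float, tolerance: float = 1.0):
        self.setpoint = setpoint
        self.tolerance = tolerance

    def belief(self, temperature: float) -> str:
        # On the behaviourist reading, there is nothing "behind" this
        # three-way branch: the branch just is the belief.
        if temperature > self.setpoint + self.tolerance:
            return "it's too hot in here"
        if temperature < self.setpoint - self.tolerance:
            return "it's too cold in here"
        return "it's just right in here"


t = Thermostat(setpoint=20.0)
print(t.belief(25.0))  # "it's too hot in here"
print(t.belief(20.5))  # "it's just right in here"
```

The behaviourist point, of course, is that once you've listed the input-output pairs, there's (supposedly) nothing left over to call the belief.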

Weak and Strong AI

This is where the distinction between strong and weak AI comes into play.

Weak AI proponents argue that it's unquestionably the case that some computers (or all computers?) act as if they're intelligent or have minds. The operative words here are “as if”. Thus, they continue, it may take a little more time to develop computers which have genuine intelligence (whatever that is) or genuinely have minds. In other words, there has to be more to intelligence/mind than behaviour or actions.

Alan Turing himself put the weak AI position when he argued that it doesn't matter whether a machine has a mind in the human sense: what matters is whether or not it can act in the way that human beings act, i.e. intelligently. (In those days that basically meant answering questions and solving mathematical problems.) That was the crux of the Turing test, and the same spirit animated the later Dartmouth proposal. Namely:

"Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." (1955)
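The structure of the Turing test itself can be sketched in a few lines. This is only my own toy illustration of its logic, not Turing's actual protocol: the judge sees nothing but text behaviour, so if the two responders' outputs are indistinguishable, the judge can do no better than chance.

```python
import random

def human_responder(question: str) -> str:
    # Stand-in for a human's typed answer.
    return "Well, it depends what you mean by " + question.rstrip("?").lower() + "."

def machine_responder(question: str) -> str:
    # A machine that happens to produce the very same behaviour.
    return "Well, it depends what you mean by " + question.rstrip("?").lower() + "."

def judge(question: str) -> str:
    """The judge has only behaviour (strings) to go on."""
    a = human_responder(question)
    b = machine_responder(question)
    if a == b:
        # Indistinguishable behaviour: the guess is a coin toss.
        return random.choice(["human", "machine"])
    # Any difference in behaviour is all the judge could ever use.
    return "human" if a != b else "machine"

print(judge("Can machines think?"))  # "human" or "machine", at random
```

The behaviourist moral: once the outputs match, the question "but which one really has a mind?" has (on this view) nothing left to latch onto.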

John Searle states the strong AI hypothesis (with all its behaviourist trappings) in the following way:

“The other minds reply (Yale). 'How do you know that other people understand Chinese or anything else? Only by their behaviour. Now the computer can pass the behavioural tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers.'” (1980)

Strong AI bites the bullet and denies the distinction between behaviour and mind/intelligence. If a computer acts or behaves as if it's intelligent or has a mind, then it is intelligent or has a mind. In other words, even though I've just written the words “as if”, there's no actual as if about it.

So why worry our pretty little heads about what must lie behind these expressions of mind or intelligence? In true behaviourist fashion, all we really need (or have!) is behaviour.

Sentience and Sapience

When it's said that there's no way we can know (or tell) whether a computer is sentient, it seems incredible. This sort of thing is usually said about animals, or even about other human beings. However, logically the same thing can indeed be said about computers; though, admittedly, not with the same force or implications.

Of course, other humans can tell us that they're sentient (even if they don't use the words “I'm sentient”). Animals, on the other hand, can hint (as it were) at their sentience. Then again, it's possible that a future computer could do the same.

So let's get a little bit more concrete about all this. I've just said that the display of intelligence or mind is deemed to be intelligence or mind. And computers certainly display intelligence. For example, computers can solve problems, play games (e.g. chess), prove mathematical theorems, diagnose medical problems, use language and so on. What more do we want?

All these things are undoubtedly displays of intelligence; but are they also displays of mind? Just as there's a mind-behaviour binary opposition, so there's an intelligence-mind opposition too. That means we can construct an argument which takes us from behaviour to intelligence, and then from intelligence to mind. Thus:

         i) If a computer behaves intelligently,
        ii) then it is intelligent.
       iii) If a computer is intelligent,
        iv) then it must have a mind.
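Put formally (with B for "behaves intelligently", I for "is intelligent" and M for "has a mind"), the argument is just a chain of conditionals, a hypothetical syllogism:

```latex
\forall x\,\bigl(B(x) \rightarrow I(x)\bigr),\quad
\forall x\,\bigl(I(x) \rightarrow M(x)\bigr)
\;\vdash\;
\forall x\,\bigl(B(x) \rightarrow M(x)\bigr)
```

The logic is impeccable; everything hangs on whether the two conditional premises are true.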

Prima facie, it does seem that when other people do intelligent things, we (as good behaviourists) say that they're intelligent; whereas when the very same things are done by a computer, it rarely evokes the same response (at least not of the same kind). After all, doesn't winning a game of chess match most people's varied criteria for a genuine display of intelligence?

References

Searle, John (1984) Minds, Brains and Science. London: BBC Publications.
-- (1980) 'Minds, Brains, and Programs', Behavioral and Brain Sciences 3.
McCarthy, J., Minsky, M. L., Rochester, N. and Shannon, C. E. (1955) 'A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence'.
