Monday 27 July 2015

Paul Smolensky: Is Perception or Logical Inference Primary?



What is at the heart of intelligence and cognition?

According to Paul Smolensky, there are two main alternatives or rivals: perception and logical inference.

One can immediately ask whether there can be such a simple categorisation, explanation and/or description of something as broad as intelligence or cognition. Of course, at a prima facie (empiricist) level, one can say that perception doesn’t seem to be cognitive at all. Perception may be a basis for cognition; but is it actually cognition itself?

Logical inference, on the other hand, is clearly cognitive in nature. Yet here again (at least on an empiricist conception) we use logical inferences when responding to our perceptions. Of course, perception itself can also be seen as being (or as actually being) cognitively enriched.

Perception

Paul Smolensky (a Professor of Cognitive Science at the Johns Hopkins University and a Partner Researcher at Microsoft Research) deems perception to be “subsymbolic”. Thus if it’s subsymbolic, mustn’t it also be sub-logical or even sub-cognitive? That, of course, raises this question:

Why assume that all cognition (or at least all intelligence) must somehow be symbolic in nature?

That said, x’s being subsymbolic isn’t the same as its being non-symbolic… Or perhaps it is.

In any case, that which is deemed (by Smolensky) to be subsymbolic is still about the “categorisation of other perceptual processes”. So here it can be seen that perception isn’t viewed as being (as it were) basic or fundamental. It isn’t being said here (as stated earlier) that perception is, in and of itself, a question of logical inference or cognition. What we have, instead, is the “categorisation [of] perceptual processes”. That means that, on this picture, there’s categorisation which seems to be above and beyond the perceptions themselves.

Perception and Evolution

There are various considerations which work to the advantage of seeing things primarily in terms of perception rather than in terms of logical inference.

One is that logical inference (or, more widely, reasoning) must have come after the “categorisation of perceptual processes” in our evolutionary history. Or as Smolensky puts it:

“An evolutionary argument says that the hard side of the cognitive paradox evolved later, on top of the soft side.”

That must mean that there were cognitive processes which predated the higher processes of logical inference (or reasoning) and indeed of language use. Surely that must have been the case: Homo sapiens (or the species which evolved into Homo sapiens) couldn’t have been logical reasoners from the very beginning. Nor could they (or their forebears) have started off as language users. Indeed, all the evidence says that this wasn’t, and couldn’t have been, the case.

Basically, both language and logical inference must have been built upon such things as the (to use Smolensky’s words) categorisation of perceptual processes (as well as upon much else). Thus language use and logical inference — obviously — didn’t occur ex nihilo.

Connectionism, Connectoplasm and Symbols

If this evolutionary account is correct, then it is no surprise that

“it is much easier to see how the kind of soft systems that connectionist models represent could be implemented in the nervous system”.

After all, isn’t it the case that our nervous system today is basically as it was before we acquired language and the skills of logical reasoning? So even though cognition and mentality have changed, our biological hardware hasn’t. Thus if our biological hardware predates symbolic processing, then perhaps our models of cognition should start from that hardware rather than from symbols. Symbolisation and computation may be parts of cognition; yet the biological nervous system which subserves all this was designed (in the evolutionary sense!) for other things.

Some people believe that connectionists reject mental symbols and everything that goes with them. And, by virtue of that, they also believe that connectionists reject computation — at least as the primary basis of cognition.

In Smolensky’s case, that isn’t entirely true.

Indeed Smolensky talks about “building symbols” out of “connectoplasm”. In his view, symbols arising from connectoplasm is a better idea than symbols arising from… well, he doesn’t really say: from the Language of Thought or something similarly symbol-based? (Symbols don’t arise from the Language of Thought — they constitute it.)

In any event, Smolensky writes:

“With any luck we will even have an explanation how the brain builds symbolic computation. But even if we do not get that directly, it will be the first theory of how to get symbols out of anything that remotely resembles the brain.”

It’s clear here that Smolensky believes that he’s creating a theory (or model) that’s biologically feasible, unlike many of the alternatives. Of course, it will need to be said exactly how and why it’s biologically feasible.

As it is, we can first cite J.L. McClelland’s account of this issue:

“One reason for the appeal of PDP [parallel distributed processing] models is their obvious ‘physiological’ flavor: They seem so much more closely tied to the physiology of the brain than are other kinds of information-processing models. The brain consists of a large number of highly interconnected elements which apparently send very simple excitatory and inhibitory messages to each other and update their excitations on the basis of these simple messages. The properties of the units in many of the PDP models we will be exploring were inspired by basic properties of the neural hardware.”
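
The unit-level picture McClelland describes can be put into a few lines of code. What follows is only a minimal sketch, not any published PDP model: the logistic squashing function, the synchronous updates and all the numbers are my own toy assumptions. What it does show is units exchanging simple excitatory and inhibitory messages (positive and negative weights) and updating their excitations on that basis.

    import numpy as np

    # A toy network of interconnected units, in the spirit of the passage above.
    # The squashing function, synchronous updates and all numbers are illustrative
    # assumptions, not McClelland's specification.
    rng = np.random.default_rng(1)
    n_units = 5

    W = rng.normal(scale=0.5, size=(n_units, n_units))   # positive = excitatory, negative = inhibitory
    np.fill_diagonal(W, 0.0)                              # no self-connections in this toy example
    external_input = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # a stimulus applied to unit 0
    activation = rng.uniform(size=n_units)                # initial excitations

    def step(activation):
        # Each unit sums the simple excitatory/inhibitory messages it receives
        # and updates its own excitation through a squashing function.
        net = W @ activation + external_input
        return 1.0 / (1.0 + np.exp(-net))

    for _ in range(20):
        activation = step(activation)

    print(np.round(activation, 3))   # the units' excitations after 20 update cycles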

We still need to know (in Smolensky’s words) “how the brain builds symbolic computation”. In fact, we also need to know exactly what Smolensky means by those words.

The point to stress here is that Smolensky does believe that the brain builds symbols. So, at the very least, symbols are part of Smolensky’s connectionism.
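
As for what “building symbols” out of connectionist material might look like in practice, one concrete illustration is the kind of role-filler binding Smolensky has developed elsewhere under the name of tensor product representations: a “role” vector is bound to a “filler” vector with an outer product, and the bindings are summed into a single numerical state. The sketch below is my own toy example (the vectors and names are arbitrary), not the construction in the paper quoted above.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 64

    # Toy distributed codes for roles and fillers (arbitrary random vectors).
    roles = {"subject": rng.normal(size=dim), "object": rng.normal(size=dim)}
    fillers = {"John": rng.normal(size=dim), "Mary": rng.normal(size=dim)}

    def bind(role, filler):
        # Bind a filler to a role via the outer (tensor) product.
        return np.outer(role, filler)

    # A symbol-like structure such as loves(John, Mary) becomes one numerical state:
    state = bind(roles["subject"], fillers["John"]) + bind(roles["object"], fillers["Mary"])

    def unbind(state, role):
        # Approximately recover the filler bound to a role
        # (exact only if the role vectors are orthogonal).
        return state.T @ role / (role @ role)

    recovered = unbind(state, roles["subject"])
    closest = max(fillers, key=lambda name: float(np.dot(recovered, fillers[name])))
    print(closest)   # should (approximately) recover "John"

The point of the sketch is only that something symbol-like, a structure with recoverable constituents, can live in nothing but vectors and numerical operations.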

Despite all that, elsewhere in the same paper Smolensky does indeed play down the importance of symbols. Basically, Smolensky wants “formal accounts [of] continuous mathematics” rather than the “discrete mathematics [of much] traditional symbolic formalism”. In more detail, Smolensky writes:

“[M]y characterisation of the goal of connectionist modelling is to develop formal models of cognitive processes that are based on the mathematics of dynamical systems continuously evolving in time: complex systems of numerical variables governed by differential equations.”

There’s no mention above of symbols or even of quasi-symbols. In fact this account sounds (strange as it may seem) both mechanical and biological in nature. So why shouldn’t the biological also be seen as mechanical and dynamical? And if we’re talking of the mechanical and the dynamical, then it stands to reason that “numerical variables” and “differential equations”, rather than symbols, will be primary. Indeed, in even simpler terms, all this seems to be more a case of measuring dynamical systems than of manipulating symbols within a symbol system (e.g., the mind/brain).
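
To make the contrast concrete: a connectionist model in this sense is just a set of numerical variables evolving under a differential equation. Here is a minimal sketch under my own illustrative assumptions (a standard leaky-integrator equation, dx/dt = -x + W·tanh(x) + u, integrated with small Euler steps); it is not Smolensky’s own model.

    import numpy as np

    # A toy continuous-time system: numerical state variables governed by a
    # differential equation,  dx/dt = -x + W·tanh(x) + u,  integrated with
    # small Euler steps. The equation and all numbers are illustrative choices.
    rng = np.random.default_rng(2)
    n = 4

    W = rng.normal(scale=0.8, size=(n, n))   # interaction strengths between variables
    u = np.array([0.5, -0.2, 0.0, 0.1])      # environmental input
    x = np.zeros(n)                           # internal numerical state
    dt = 0.01                                 # Euler integration step

    for _ in range(2000):                     # evolve the state through (simulated) time
        x = x + dt * (-x + W @ np.tanh(x) + u)

    print(np.round(x, 3))                     # the state after two simulated seconds

Nothing in that loop is a symbol; there are only quantities changing continuously (or rather, in small discrete steps that approximate continuity) over time.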

It may appear to be the case that causation is also of prime importance in Smolensky’s scheme. That is, we have numerical measurements of the interplay between the environment (as input) and a dynamical system, which results in certain internal states and then in certain kinds of output.

Main Reference:

Smolensky, Paul, ‘The Constituent Structure of Connectionist Mental States: A Reply to Fodor and Pylyshyn’ (1988)

