Sunday, 28 June 2015

Late-20th Century Philosophers on Animal Concepts


 
Michael Tye says that

“Having the concept F requires, on some accounts, having the ability to use the linguistic term ‘F’ correctly.” (1990)

That all depends on what precisely the concept F is taken to be. If it’s [infinity], then, yes, indeed we would probably need the ability “to use the linguistic term”; though not if the concept is [cat] and [cat] isn’t tied to the word “cat”. Of course I’m using the word “cat” within the square brackets only as shorthand for whatever constitutes, say, a dog’s concept [cat]. So perhaps I should symbolise it as [C]. (The problem with this is that readers wouldn’t then know what object the dog’s concept is about. Therefore I use a linguistic and English term within the brackets.)

If Tye had said

Having concepts requires the ability to use linguistic terms correctly.

he would have been, I think, incorrect.

Following on from that sentence, Tye says:

“On other accounts, concept possession requires the ability to represent in thought and belief that something falls under the concept…”

There’s no problem with the above, as long as “thought”, “belief” and “falls under a concept” aren't taken sententially or linguistically. There’s no obvious reason why they should be. A dog must think that the rattling dog-chain 'means' that it will be going for a walk. It will believe that a walk will be forthcoming. The dog-chain must fall under the concept [dog chain]. (Or, again, instead of using the English words “dog chain” - which will be unknown to the dog - it will fall under the concept [C], where C simply symbolises whatever constitutes the dog’s concept.)

On the word “thought” itself: some philosophers are sceptical about animal thought. Take this strange and unelaborated description of a monkey:

“…while we may be prepared to say that it knows [that it’s safe up a tree], we may be less happy to say that the monkey thinks that it is safe.”

Prima facie, how can the monkey know without thinking? This writer includes reasoning, believing, reflecting, calculating and deliberating as examples of thought. I think monkeys do all these things. (At first I hesitated over the word “calculating”; though only because I over-sophisticated the term by thinking in terms of abstract mathematical calculations.) However, since the main topic is concepts, I can’t go into these broader areas of thought. An interesting question remains, however. Is it animals’ lack of concepts that excludes them from all these cognitive states? Or is it that animals have no concepts because they can’t think?

Perhaps what motivates the idea of “non-conceptual content” is that animals have “experiences” even though they don’t necessarily deploy concepts. Martin Davies writes:

“…the experiences of …certain creatures, who, arguably are not deployers of concepts at all.” (1996)

And later:

“…a creature that does not attain the full glory of conceptualised mentation, yet which enjoys conscious experience with non-conceptual content…”

All the above depends on which animal we’re talking about and on what Davies means by the word “concept”. Indeed it’s hard to fuse “experience” and “non-conceptual content” together in the first place; and not just from a quasi-Kantian position on experience and concepts. If Davies is talking about floor lice, what he says may well be correct. If he’s talking about dogs, monkeys, etc., then I’m not so certain (the intermediate animal cases are, as ever, vague).

Is it the fact that animals don’t use a language (or a human language) that causes the bias against animal-concept use? Davies doesn’t say. If one takes a Fodorean “language-infested” view of “mentation”, then one would probably agree with Davies. If one were a non-computationalist (or non-Fodorean), one might question the linguistic bias of Davies’s position (as Paul Churchland does, for example).

An even more explicit example of this linguistic bias can be seen in Christopher Peacocke. He writes:

“The representational content of a perceptual experience has to be given by a proposition, or set of propositions, which specifies the way the experience represents the world to be.” (1983)

Peacocke (in the above) is distinguishing “representational content” (which is propositionally specifiable) from pure unconceptualised sensations. And again, later in the paper, Peacocke displays his linguistic (or sentential) bias when he writes:

“The content of an experience is to be distinguished from the content of a judgement caused by the experience.”

Not only do we have a dualism of “sensations” (the “contents of experience”) and “judgement” (which is “caused by the experience”); we also have the specifically intellectualist - and probably linguistic - term “judgement”. By “judgement”, Peacocke (judging by what he wrote earlier in the paper) means applying “a proposition or set of propositions” to the experience.

Perhaps if everything weren’t so language-infested, Peacocke would have more of a case. If “representational content” is “given by a proposition”, then perhaps the same is true of concepts.

Indeed Peacocke states his own dualism explicitly:

“…we need a threefold distinction [of experience] between sensation, perception, and judgement…”

Peacocke, in note 3, quotes another philosopher to back up his case:

“… 'sensation, taken by itself, implies neither the conception nor the belief of any external object…Perception implies an immediate conviction and belief of something external'…”

The reasoning behind Peacocke’s and Davies’s position may be this: if animals (or certain animals) are always non-conceptual creatures, then we too, from a Darwinian perspective, might have started off - or did start off - as non-conceptual creatures. That is, pure phenomenal consciousness is common to both humans and animals; however, in adult humans, phenomenal consciousness and sensations are later conceptualised.

There is a dualism (another one) here between phenomenal consciousness and conceptual consciousness. Though if Davies is being Darwinian in accepting that we share phenomenal consciousness with animals, why can’t he be equally Darwinian in accepting that some animals share concepts with humans? Why should concepts be sentence-shaped?

Is this a disguised foundationalist dream once again? Davies himself makes a distinction between

a) “…perceptual content is the same kind of content as the content of judgement and belief…” (1996)

and, alternatively,

b) “…perceptual content is a distinct kind of content, different from belief content.”

This is dualism in a pure form.

Passage a) is very Davidsonian: judgements/beliefs and perception are as one. Passage b), on the other hand, gives us “uninterpreted” content, separate (we may say) from “all schemes and science”. Of course I wouldn’t necessarily use the word “science” or even “schemes”; I would simply say: separate from all concepts.

Ned Block also makes his own distinction between “representation” and “intentional representation” (note 4, 1995). He says that an animal can have an experience that is “representational” without its being an “intentional representation”. This is how Block makes the distinction:

i) Intentional representation = “representation under concepts”

ii) Representation = “representation without any concepts”

The mistake Block makes appears to be so obvious that I read his passage over again to see what was going on. The mistake is simply this. The animal in question “doesn’t possess the concept of a donut or a torus”. We can accept that. However, the animal may “represent space as being filled in a donut-like way”. Again, that's acceptable. So, yes, this animal won’t have our concept [donut] or our concept [torus]. However, it may have its own concept [C] of the donut and likewise its own concept [C] of the torus. This is why Block allows the animal representations. That is, it “represents space as being filled in a donut-like way without any concepts”. This experience has “representational content without intentional content”. Apparently, it has representations because its experience is of something “donut-like”. But it isn’t intentional or conceptual because it doesn’t have our concept [donut]. Though it may, as I said, have its own.

This appears to be obviously wrong. It displays a linguistic bias and basis for all concepts, and it therefore excludes, by definition, all animals from having conceptual content for the said experience. Logically, though, it would also mean that a fellow human being without the concept [donut] would only have a representation of the donut, not an intentional representation of it.

Even someone who has the concept [donut] must have experienced the donut under other concepts before he applied the concept [donut]. Not just the basic Kantian concept [object] or [thing]. These are true atomic concepts, which are the building blocks of later concepts. However, before the object we call a “donut” fell under the concept [donut], and after it fell under the concept [object] or [thing], other concepts would have been applied to - or belonged to - the donut. For example, [white thing], [round thing], [small round thing], etc. Even the animal, without the concept [white thing] etc., would possibly have its own non-linguistic (or non-human) alternatives.

I also have a problem with Block’s use of the term “representation”. One can only represent something as something; or it’s a representation of something. Therefore one needs concepts (not necessarily linguistic ones) of something, and concepts of a thing as something.

The problem here may be accounted for by what Block himself says (again in note 4). He says that “phenomenal-consciousness isn't an intentional property”. I agree. He also says that “P-conscious content cannot be reduced to or identified with intentional content”. Again, I agree. He qualifies these distinctions by saying that “intentional differences can make a P-conscious difference” and that “P-consciousness is often representational”. However, he’s still hinting at something that I don’t accept: that phenomenal consciousness (PC) can exist without intentional or representational (that is, conceptual) content. The distinctions he makes are possibly real and worthwhile. However, PC is like a finger, which can’t exist without a hand; and the hand, in this case, is conceptual content (concepts). Of course a finger is distinct from a hand; though, as yet, I haven’t seen a functioning finger without a hand.

References

Block, Ned, 'On a Confusion about a Function of Consciousness' (1995)
Davies, Martin, 'Externalism and Experience' (1996)
Peacocke, Christopher, 'Sensation and the Content of Experience: A Distinction' (1983)
Tye, Michael, 'A Representational Theory of Pains and Their Phenomenal Character' (1990)

 
