Monday, 10 October 2016

Material Logic vs. Formal Logic?

Edwin D. Mares displays the problem (if it is a problem) with a purely formal logic by offering us the following example of a valid argument:

The sky is blue.
Therefore there is no integer n greater than or equal to 3 such that, for any non-zero integers x, y, z, x^n = y^n + z^n.

Mares says that the above “is valid, in fact sound, on the classical logician's definition” (609). Strictly speaking, it's the argument that is valid and sound; the premise and conclusion themselves are simply true. In more detail, the “premise cannot be true in any possible circumstance in which the conclusion is false”.

Clearly the content of the premise isn't connected, semantically, to the semantic content of the conclusion. However, the argument is valid and sound.

So what's the point of the above?

Perhaps no logician would state it for real. He would only do so, as Mares himself does, to prove a point about logical validity. But can't we now ask why it's 'valid' when all we have is a true premise and a true conclusion?

Perhaps showing the bare bones of this argument will help. Thus:

P
Therefore Q
Does that look any better? I suppose so. Even though we aren't given the semantic content, both P and Q must be seen to have a truth-value. (In this case, both P and Q are true.) The argument is saying: P is true; therefore Q is true. It isn't saying that Q is a consequence of P, or that P entails Q. Basically, we're being told, by the logic, that two true statements can exist together if they don't contradict each other.
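The classical definition at work here - that the premise cannot be true in any circumstance in which the conclusion is false - can be checked mechanically for such bare-bones forms. The following Python sketch is my own illustration (not anything from Mares): it shows that the bare form 'P, therefore Q' is formally invalid, yet becomes valid the moment the conclusion is a necessary truth (as the Fermat statement is), which is exactly why the sky-is-blue argument counts as valid.

```python
from itertools import product

def formally_valid(premises, conclusion, atoms):
    # An argument-form is valid iff no assignment of truth-values
    # makes every premise true and the conclusion false.
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # counterexample found
    return True

# The bare-bones form "P, therefore Q" has a counterexample (P true, Q false):
print(formally_valid([lambda v: v['P']], lambda v: v['Q'], ['P', 'Q']))  # False

# But if the conclusion is a necessary truth (true under every valuation),
# the argument is valid whatever the premise says:
print(formally_valid([lambda v: v['P']], lambda v: True, ['P', 'Q']))  # True
```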


There can be cases in which the premises of an argument are all true, and the conclusion is also true, yet, as Stephen Read puts it, “there is an obvious sense in which the truth of the premises does not guarantee that of the conclusion” (237). Ordinarily, of course, the truth of the premises is meant to guarantee the truth of the conclusion. So let's look at Read's example, thus:

i) All cats are animals
ii) Some animals have tails
iii) So some cats have tails.

Clearly, premises i) and ii) are true. Indeed iii) is also true. (Not all cats have tails, of course. And, indeed, according to some logicians, saying 'some' also implicates 'not all'.)

So why is the argument invalid?

It's invalid not for the truth-values of the premises and conclusion; but for another reason.

The reason is that the sets in the argument are, as it were, mixed up. Thus we have [animals], [cats] and, indeed, [animals which have tails]. In other words, it doesn't logically follow from “some animals have tails” that “some cats have tails”. If some animals have tails it might have been the case that cats were animals which didn't have tails. Thus iii) doesn't necessarily follow from ii). (And iii) doesn't follow from i) either.) ii) can be taken as an existential quantification over [animals]. iii), on the other hand, is an existential quantification over [cats]. Thus:

ii) (∃x)(Ax)
iii) (∃x)(Cx)

Clearly Ax and Cx are quantifications over different classes. It doesn't follow, then, that what's true about [animals] generally is true also of [cats]; even though cats are members of the set [animals]. Thus iii) doesn't follow from ii).

To repeat: even though both premises and the conclusion are all true, the above still isn't a valid argument.
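Read's point - true premises, true conclusion, yet an invalid argument - can be demonstrated by hunting for a countermodel: an interpretation of [cats], [animals] and [things with tails] under which both premises hold but the conclusion fails. A small Python sketch, my own illustration rather than anything in Read:

```python
from itertools import product

def subsets(domain):
    # every subset of the domain, as frozensets
    return [frozenset(x for x, b in zip(domain, bits) if b)
            for bits in product([0, 1], repeat=len(domain))]

def find_countermodel():
    domain = [0, 1]
    for cats, animals, tailed in product(subsets(domain), repeat=3):
        p1 = cats <= animals                                    # All cats are animals
        p2 = any(x in animals and x in tailed for x in domain)  # Some animals have tails
        c  = any(x in cats and x in tailed for x in domain)     # Some cats have tails
        if p1 and p2 and not c:
            return cats, animals, tailed
    return None

print(find_countermodel())
```

One countermodel it finds is a world containing a tailed animal but no cats at all: both premises come out true there while 'some cats have tails' is false, which is all invalidity requires.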

Read himself helps to show this by displaying an argument-form with mutually-exclusive sets. Namely, [cats] and [dogs]. Thus:

i) All cats are animals
ii) Some animals are dogs
iii) So some cats are dogs.

This time, however, the conclusion is false; whereas i) and ii) are true. It's the case that the subset [dogs] belongs to the set [animals]. Some animals are indeed dogs. However, because some animals are dogs, it doesn't follow that “some cats are dogs”. In other words, because dogs are members of the set [animals], that doesn't mean that they're also members of the subclass [cats] simply because cats themselves are also members of the set [animals]. Cats and dogs share animalhood; though they are different subsets of the set [animal]. In other words, what's true of dogs isn't automatically true of cats. (Wouldn't iii) above work better if it were 'some dogs are cats', not 'some cats are dogs'?)

The importance of sets, and their relation to subsets, may be expressed in terms of bracketed predicates. Thus:

[animals [cats [cats with tails]]] 
not-[animals [cats [dogs]]]

Material and Formal Validity

Stephen Read makes a distinction between formal validity and material validity. He does so using this example:

i) Iain is a bachelor
ii) So Iain is unmarried.

(One doesn't ordinarily find an argument with only a single premise.)

The above is materially valid because there's enough semantic material, as it were, in i) to make the conclusion acceptable. After all, if x is a bachelor, he must also be unmarried. Despite that, it's still formally invalid because there isn't enough formal content in the premise to deliver the conclusion. That is, one can only move from i) to ii) if one already knows that all bachelors are unmarried. We either recognise the shared semantic content or we know that the phrase 'unmarried man' is synonymous with 'bachelor'. Thus we have to add semantic content to i) in order to get ii). And it's because of this that the overall argument is said to be formally invalid. Nonetheless, because of what I've already said, it is indeed materially valid.

The material validity of the above can also be shown by its inversion, thus:

i) Iain is unmarried
ii) So Iain is a bachelor.

Read makes a distinction by saying that its “validity depends not on any form it exhibits, but on the content of certain expressions in it” (239). Thus, in terms of logical form, it is invalid. In terms of content (or the expressions used), it is valid. This means, obviously, that the following wouldn't work as either a materially or a formally valid argument. Thus:

i) Iain is a bachelor.
ii) So Iain is a footballer.

There's no semantic content in the word 'bachelor' that can be directly tied to the content of the word 'footballer'. Iain may well be a footballer; though his being a footballer doesn't follow, as a necessary consequence, from his being a bachelor. As it is, the conclusion is false even though the premise is true.
Another way of explaining the material, not formal, validity of the argument above is in terms of what logicians call a “suppressed premise”. This is more explicit than talk of synonyms or shared contents. What the suppressed premise does, in this case, is show the semantic connections between i) and ii). The actual suppressed premise would be the following:

All bachelors are unmarried.

Thus we would actually have the following argument:

i) Iain is a bachelor.
ii) All bachelors are unmarried.
iii) Therefore Iain is unmarried.

It may now be seen more clearly that

i) Iain is unmarried.
ii) So Iain is a bachelor.

doesn't work formally; though it does work materially.
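The contrast can be made concrete. In the sketch below (my own gloss, not Read's), the suppressed premise 'all bachelors are unmarried' is imposed as a constraint on admissible truth-value assignments. Under that constraint, bachelor-to-unmarried comes out valid, while unmarried-to-bachelor still has a counterexample (an unmarried non-bachelor):

```python
from itertools import product

# Valuations for "Iain is a bachelor" (B) and "Iain is unmarried" (U),
# restricted by the suppressed premise "all bachelors are unmarried" (B -> U):
valuations = [(B, U) for B, U in product([True, False], repeat=2)
              if (not B) or U]

# With the suppressed premise, B |- U is valid:
print(all(U for B, U in valuations if B))  # True

# But U |- B remains invalid even with it - an unmarried
# non-bachelor is still an admissible case:
print(all(B for B, U in valuations if U))  # False
```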

What about this? -

i) All bachelors are unmarried.
ii) So Iain is unmarried.

To state the obvious, this is clearly a bad argument as it stands. Indeed it can't really be said to be a complete argument at all. Nonetheless, this too can be seen to have a suppressed premise (which is what makes it an enthymeme). Thus:

i) All bachelors are unmarried.
[Iain is a bachelor.]
ii) So Iain is unmarried.

Now let's take the classic case of modus ponens:

A, if A then B / so B

That means:

A, if A is the case (or true), then B is the case (or true). A is the case, so B must also be the case.
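Reading 'if A then B' as the material conditional, the validity of modus ponens can be confirmed by brute force over truth-values. A minimal sketch, purely my own illustration:

```python
from itertools import product

# Check modus ponens: A, A -> B |- B, by exhausting truth-values.
valid = True
for A, B in product([True, False], repeat=2):
    premises = A and ((not A) or B)  # A, together with "if A then B"
    if premises and not B:
        valid = False  # a counterexample would show up here
print(valid)  # True
```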

The obvious question here is: What connects A to B (or B to A)? In terms of this debate, is the connection material or formal? Clearly if the content of both A and B isn't given, then it's impossible to answer this question.

We can treat the above as having the aforesaid suppressed premise. Thus:

[Britain's leading politician is the Prime Minister.]
i) Theresa May is Britain's leading politician.
ii) So she is Prime Minister.

In this instance, the premise and conclusion are both true. Yet they're only contingently, not necessarily, connected.


*) Stephen Read makes the formalist position on logic very clear when he states the following:

“Logic is now seen – now redefined – as the study of formal consequence, those validities resulting not from the matter and content of the constituent expressions, but from the formal structure.” (240)

We can now ask: What is the point of a logic without material or semantic content? Would all premise, predicate, etc. symbols - not the purely logical symbols - simply be autonyms or self-referential in nature? (Thus all the p's, q's, x's, F's, G's etc. would be self-referential/autonyms.) And what would be left of logic if this were the case? Clearly we could no longer really say that it's about argumentation – or could we? That is, we can still learn about argumentation from schemas/argument-forms which are purely formal in nature. The dots don't always - or necessarily - need to be filled in.


Mares, Edwin D. (2002) 'Relevance Logic'.
Read, Stephen. (1994) 'Formal and Material Consequence'.

Thursday, 6 October 2016

'and' and 'tonk'

'and' and Analytic Validity

In order to understand A. N. Prior's use of the neologism 'tonk', we firstly need to understand the way in which he takes the connective of conjunction – namely, 'and'.

Prior makes the counterintuitive claim that “any statement whatever may be inferred, in an analytically valid way, from any other” (130). Prima facie, that raises a question: Does that mean that any statement with any content can be inferred from any other with any content?

The word 'and' (in this logical sense at least) is understood by seeing its use in statements or propositions.

We start off with two propositions: P and Q; which begin as separate entities in this context. Prior argues that we can “infer” P-and-Q from statements P and Q. The former symbolism, “P-and-Q” (i.e., with hyphens) signifies the conjunction; whereas “P and Q” (i.e., without hyphens) signifies two statements taken separately. However, we can infer P-and-Q from any P and Q. That is, from P on its own, and Q on its own, we can infer P-and-Q. In other words, statements P and Q can be joined together to form a compound statement.

Two questions can now be raised. One: do the truth-values of both P and Q matter at this juncture? Two: do the contents of both P and Q matter at this point?

In basic logic, the answer to both questions is 'no'. It's primarily because of this that some of the counterintuitive elements of this account become apparent.

For example, Prior says that “for any pair of statements P and Q, there is always a statement R such that given P and given Q we can infer R” (129). The important word to note here is “any” (as in “for any pair of statements P and Q”). This leads to the conclusion (just mentioned) that the truth-values and/or contents of both P and Q don't matter within the logical context of defining the connective 'and'. It's partly because of this that Prior tells us that “we can infer R” from P and Q. Thus:

(P) The sun is in the solar system.
(P & Q) Therefore the sun is in the solar system and peas are green.
(R/Q) Therefore peas are green.

All those statements are true; yet they have unconnected contents and a conclusion which doesn't follow from (the content) of the premises. Similarly with two false premises. Thus:

(P) The sun is in the bathroom.
(P & Q) Therefore the sun is in the bathroom and peas are blue.
(R/Q) Therefore peas are blue.

It's because of this irrelevance of contents and truth-values that R will follow from any P and any Q.
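Prior's two rules for 'and' - infer P-and-Q from P together with Q, and infer either conjunct back out of P-and-Q - are purely formal: they manipulate statements without ever consulting their contents or truth-values. A toy Python rendering (illustrative only), with statements as plain strings:

```python
# Prior's two rules for 'and', operating on any statements whatsoever:
def and_intro(p, q):
    # from P and Q (taken separately), infer P-and-Q
    return ('and', p, q)

def and_elim(conj, side):
    # from P-and-Q, infer P (side=0) or Q (side=1)
    assert conj[0] == 'and'
    return conj[1 + side]

p = 'The sun is in the solar system'
q = 'Peas are green'
r = and_elim(and_intro(p, q), 1)
print(r)  # 'Peas are green'
```

Nothing in the rules checks whether p and q are true, or whether their contents are connected - which is just Prior's point about 'any pair of statements'.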

Thus it's no surprise that Prior also says that “given R we can infer P and can also infer Q”. As in this example:

(R/P) Peas are green
(P and Q) Therefore peas are green and the sun is in the solar system.

The difference here is that a compound statement (i.e., P-and-Q) is derived from an atomic statement (i.e., R/P). (Except, in this instance, R should be P, and P and Q should be R.) Nonetheless, contents and truth-values still don't matter. Another way of putting this is (as in the argument-form above) that premises and conclusions can change places without making a difference.


If we still have problems with Prior's 'tonk', that situation arises because we fail to see that the “meaning” of any connective “is completely given by the rules” (130).

Prior gives the following example of this logical phenomenon:

(P) 2 and 2 are 4.
(Q) Therefore, 2 and 2 are 4 tonk 2 and 2 are 5.
(R/Q) Therefore 2 and 2 are 5.

Clearly the connective 'tonk' is doing something to the preceding '2 and 2 are 4' - but what? Could it be that 'tonk' means 'add 1' - at least in this instance? That would mean, however, that 'tonk' is the operation of adding 1, which isn't (really?) a connective of any kind.

The new connective 'tonk' works like the connective 'and'. Or as Prior puts it:

“Its meaning is completely given by the rules that (i) from any statement P we can infer any statements formed by joining P to any statement Q by 'tonk'... and that (ii) from any 'tonktive' statement P-tonk-Q we can infer the contained statement Q.” (130)

Thus, at a symbolic level, 'tonk' works like 'and'. And just as Prior symbolised P and Q taken together as P-and-Q; so he takes P and Q taken together with tonk as P-tonk-Q.

In this case, '2 and 2 are 4' (P) is being conjoined with '2 and 2 are 5' (Q). Thus the conclusion, 'therefore, 2 and 2 are 5' (R) follows from '2 and 2 are 5' (Q), though not from '2 and 2 are 4'. In other words, R only needs to follow from either P or Q, not from both. Thus when P and Q are, as it were, tonked, we get: '2 and 2 are 4 tonk 2 and 2 are 5'. And the conclusion is: 'Therefore 2 and 2 are 5'.

To express all this in argument-form, take this example:

(P) Cats have four legs.
(P & Q) Therefore cats have four legs tonk cats have three legs.
(R/Q) Therefore cats have three legs.

What is 'tonk' doing in the above? It seems to be cancelling out the statement before it (i.e., 'Cats have four legs'). Thus if 'tonk' comes after any P in any compound statement, then Q will cancel out P. If that appears odd (especially with the seeming contradiction), that's simply because, as Prior puts it, “there is simply nothing more to knowing the meaning of ['tonk'] than being able to perform these inferences” (129). In this case, we firstly state P, and then P-tonk-Q (in which Q cancels out P), from which we conclude R.

Nuel D. Belnap helps us understand what's happening here by offering different symbols and a different scheme. Instead of the argument-form above (which includes P, Q and R), we have the following:

i) A ⊢ A-tonk-B
ii) A-tonk-B ⊢ B
iii) A ⊢ B

Quite simply, one can deduce A-tonk-B from A. Then one can deduce B from A-tonk-B. Finally, this means that one can derive B from A.

In our example, by a simple rule of inference, one can derive 'Cats have four legs tonk cats have three legs' (A-tonk-B) from 'Cats have four legs' (A). And then one can derive 'Cats have three legs' (B) from 'Cats have four legs tonk cats have three legs' (A-tonk-B). Finally, one can derive 'Cats have three legs' (B) from 'Cats have four legs' (A).
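Belnap's three-step scheme can be mimicked directly: give 'tonk' its two rules, and the derivation of any B from any A falls out mechanically. A sketch of my own (not Belnap's notation):

```python
# Prior's rules for 'tonk':
def tonk_intro(a, b):
    # rule (i): from A, infer A-tonk-B, for ANY statement B
    return ('tonk', a, b)

def tonk_elim(t):
    # rule (ii): from A-tonk-B, infer B
    assert t[0] == 'tonk'
    return t[2]

# Chaining the two rules derives any B from any A:
a = 'Cats have four legs'
b = 'Cats have three legs'
print(tonk_elim(tonk_intro(a, b)))  # 'Cats have three legs'
```

Since b was chosen arbitrarily, the chain licenses the inference from any statement to any other - which is precisely the triviality Belnap objects to.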

Belnap claims that an arbitrary creation of a connective (through implicit definition) could or can result in a contradiction. Thus, the '?' in the following definition over fractions

(a/b) ? (c/d) =df (a + c)/(b + d)

could result in the contradiction 2/3 = 3/5. (For 1/2 = 2/4; yet (1/2) ? (1/1) gives 2/3, while (2/4) ? (1/1) gives 3/5.)
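Belnap's fraction example can be run directly. Assuming the '?' operation is defined by adding numerators and denominators (my reconstruction of the definition), equal fractions yield unequal results, so '?' cannot be a well-defined operation on fractions at all:

```python
from fractions import Fraction

def question_op(a, b, c, d):
    # the '?' operation: (a/b) ? (c/d) =df (a + c)/(b + d)
    return Fraction(a + c, b + d)

# 1/2 and 2/4 are the very same fraction...
assert Fraction(1, 2) == Fraction(2, 4)

# ...yet '?' gives them different results against 1/1:
print(question_op(1, 2, 1, 1))  # 2/3
print(question_op(2, 4, 1, 1))  # 3/5
```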

However, doesn't Prior's 'Therefore 2 and 2 are 4 tonk 2 and 2 are 5' also contain a contradiction? Prior seems to be stressing the point that in the definitions of connectives, even counterintuitive ones, such contradictions are to be expected. Isn't that the point? 


Belnap, Nuel. (1962) 'Tonk, Plonk and Plink'.
Prior, A.N. (1960) 'The Runabout Inference Ticket'.

Friday, 23 September 2016

E. Brian Davies's Empiricist Account of Real Numbers

*) This commentary is on the relevant parts of E. Brian Davies's book, Science in the Looking Glass.  


At first glance it's difficult to see how mathematics generally, and numbers specifically, have anything to do with what philosophers call “the empirical”. This is also the case for mathematicians and philosophers who class themselves as “realists” or “Platonists”. Nonetheless, everyone is aware of the fact that maths is applied to the world. Or, at the least, that maths is a useful tool for describing empirical reality.

Nonetheless, empiricists go one step further than this by arguing that mathematics (or, in Davies's case, a real number) is empirical in nature. Or, at the least, that certain types of real number have an empirical status.

At first I had to decide whether to class E. Brian Davies's position as “empiricist mathematics” or “mathematical empiricism”. The former is a philosophical position regarding maths. The latter, on the other hand, is a position on empiricism itself. In other words, in order to make one's empiricism more scientific, it would make sense to make it mathematical. Empiricist maths, on the other hand, is a philosophical position one could take on mathematics itself. Although these are different positions, I can only say that both apply to Davies's account.

Small Real Numbers

E. Brian Davies puts his position at its most simple when he says that for a “'counting' number its truth is simply a matter of observation” (81). Here there seems to be a reference to the simple act of counting; which is a psychological phenomenon. By inference it must also refer to what we count. And what we count are empirical objects or other empirical phenomena. That means that empirical objects need to be observed in the psychological act of counting.

Prima facie, it's hard to know what Davies means when he writes that “[s]mall numbers have strong empirical support but huge numbers do not” (116). Even if it means that we can count empirical objects easily enough with numbers, does that, in and of itself, give small numbers “strong empirical support”? Perhaps we're still talking about two completely different/separate things: small numbers and empirical objects. Simply because numbers can be utilised to count objects, does that - on its own - confer some kind of empirical reality on them? We are justified in using numbers for counting; though that may just be a matter of practicality. Again, do small numbers themselves have the empirical nature of objects passed onto them simply by being used in acts of counting?

Did these small numbers exist before “assenting to Peano's axioms”? Davies makes it seem as if accepting such axioms is a means to create or construct small numbers. That is, we take the axioms; from which we derive all the small numbers. However, before the creation of these axioms, and the subsequent generation of small numbers as theorems, did the small numbers already exist? A realist would say 'yes'. A constructivist, of some kind, would say 'no'.

Davies appears to put the set-theoretic or Fregean/Cantorean position on numbers when he writes that “'counting' numbers exist in some sense” (82). What sense? In the sense that “we can point to many different collections of (say) ten objects, and see that they have something in common” (82). I say Fregean/Cantorean in the sense that the nature of each number is determined by the one-to-one correspondence between the members of one set and the members of other sets.

Prima facie, I can't see how numbers suddenly spring into existence simply because we 'count' the members of one set and then put the members of equal-membered sets in a relation of one-to-one correspondence. How numbers are used can't give them an empirical status. Something is used, sure; though that use doesn't entirely determine its metaphysical nature. (We use pens; though that use of a pen and the pen itself are two different things.)

The other problem is how we can 'count' without using numbers. Even if there are "equivalence classes", aren't numbers still surreptitiously used in the very definition of numbers?

In any case, what these “collections” have in common, according to Davies, is the number of members, which we “see” (rather than count?).
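The Fregean/Cantorean idea - that what equal-membered collections “have in common” is fixed by one-to-one correspondence rather than by counting with numerals - can be sketched without numerals at all. In this illustrative Python (my gloss, not Davies's), members are simply paired off until one or both collections run out:

```python
def equinumerous(xs, ys):
    # Pair members off one-to-one; the two collections "have something
    # in common" (the same number) iff the pairing exhausts both at
    # once. No numeral is ever used.
    xs, ys = list(xs), list(ys)
    while xs and ys:
        xs.pop()
        ys.pop()
    return not xs and not ys

print(equinumerous(['a', 'b', 'c'], [10, 20, 30]))  # True
print(equinumerous(['a', 'b'], [10, 20, 30]))       # False
```

Whether this procedure shows that numbers exist, or merely that we can compare collections without them, is of course the very question at issue.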

Davies goes on to argue a case for the empirical reality of small real numbers. There is a logical problem here, which he faces.

Davies offers a numerical version of the sorites paradox for vague objects or vague concepts. Let me put his position in argument-form. Thus:

i) “If one is prepared to admit that 3 exists independently of human society,
ii) then by adding 1 to it one must believe that 4 exists independently...
iii) [Therefore] the number 10^10^100 must exist independently.” (82)

This would work better if Davies hadn't used the clause “exists independently of human society”. I say that because it's empirically, or psychologically, plausible that there's a finite limit to human counting-processes. Thus counting to 4 is no problem. But counting to 10^10^100 may not be something “human society” can do.

I mentioned the simpler and more effective argument earlier. Thus:

i) If 3 exists.
ii) Then by adding 1 to 3, 4 must exist.
iii) Therefore, by the repeated addition of 1 to the previously given number, the number 10^10^100 must also exist.

It may exist; though Davies thinks that mathematics tells us “it is not physically possible to continue repeatedly the argument in the manner stated until one reaches the number 10^10^100” (82).
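Davies's claim about physical impossibility can be given rough numbers. Taking the age of the universe (about 4.3 × 10^17 seconds) and an absurdly generous counting rate of one step per Planck time (about 10^43 per second) - both figures my own ballpark assumptions, not Davies's - the number of available steps has a base-10 logarithm of roughly 61, whereas 10^(10^100) has a base-10 logarithm of 10^100:

```python
import math

# Ballpark figures (my assumptions, not Davies's):
age_of_universe_s = 4.3e17  # ~13.8 billion years, in seconds
steps_per_second  = 1e43    # ~one step per Planck time - absurdly generous

# log10 of the maximum number of counting steps physically available:
max_steps_log10 = math.log10(age_of_universe_s * steps_per_second)

# log10 of 10^(10^100) is just 10^100 - far too large even to compare
# directly, so we compare logarithms:
target_log10 = 1e100

print(round(max_steps_log10))          # roughly 61
print(target_log10 > max_steps_log10)  # True
```

The gap isn't one of degree: no remotely plausible adjustment to either assumption closes a difference between 10^61 steps and a number whose exponent is itself 10^100.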

Extremely Large and Extremely Small Real Numbers

Davies begins his case for what he calls the “metaphysical” or “questionable” nature of extremely large numbers by saying that they “never refer to counting procedures” (67). Instead, “they arise when one makes measurements and then infers approximate values for the numbers”.

The basic idea is that there must be some kind of one-to-one correlation between real numbers and empirical objects. If this isn't forthcoming, then certain real numbers have a “questionable” or “metaphysical” status. (Again, this is like the idea of a one-to-one correspondence between members of one set and the members of another set. This is – or was - a process used to determine the set-theoretic status of numbers.)

From his position on small numbers, Davies also concludes that “huge numbers have only metaphysical status” (116). I don't really understand this. Which position in metaphysics is Davies talking about? His use of the word “metaphysics” makes it sound like some kind of synonym for “lesser” (as in a “lesser status”). However, everything has some kind of metaphysical status, from coffee cups to atoms. Numbers do as well. So it makes no sense to say that “huge numbers have only metaphysical status” until you define what status that is within metaphysics. The phrase should be: “huge numbers only have a … metaphysical status”; with the three dots filled in with some kind of position within metaphysics.

Davies goes on to say similar things about “extremely small real numbers” which “have the same questionable status as extremely big ones”. I said earlier that the word “metaphysical” (within this context) sounded as if it were some kind of synonym for 'lesser'. That conclusion is backed up by Davies using the phrase “questionable status”. Thus a metaphysical status is also a “questionable status”. Nonetheless, I still can't see how the word “metaphysical” can be used in this way. Despite that, I'm happier with the latter locution (“extremely small real numbers have the same questionable status as extremely big ones”), than I am with the former (i.e., “huge numbers have only metaphysical status”).

Since there must be some kind of relation or correspondence between real numbers and empirical things, Davies also sees a problem with extremely small real numbers. It seems that physicists or philosophers may attempt to set up a relation between extremely small numbers and “lengths far smaller than the Planck length” (117). Thus the idea would be that Planck lengths divide up single empirical objects. Small numbers, therefore, correlate with individual empirical objects; whereas extremely small numbers correlate with the various Planck lengths of an object (rather than with objects in the plural).

Davies doesn't appear to think that this approach works. That is because Planck lengths “have no physical meaning anyway” (117). This means that extremely small numbers don't have any empirical support. They have a “questionable” or “metaphysical status”.

Models, Real Numbers and the External World

Davies's general position is that “real numbers were devised by us to help us to construct models of the external world” (131). As I said earlier, does this mean that numbers gain an empirical status simply because they're “used to help us construct models of the external world”? Perhaps, again, even though real numbers are used in this way, that still doesn't give them an empirical status. Can't numbers be abstract platonic objects and still have a role to play in constructing models of the external world? Why do such models and numbers have to be alike in any way? (Though there is the problem, amongst others, of our causal interaction with abstract numbers.)

In terms of a vague analogy: we use cutlery to eat our breakfasts. Yet breakfasts and cutlery are completely different things. Nevertheless, they're both, as empirical objects, in the same ball park. What about using a pen to write about an event in history? A pen is an empirical object; though what about an historical event? Can we say that the pen exists; though the historical event no longer exists? Nonetheless there is a relation between what the pen does and a historical event even though they have two very different metaphysical natures.

As non-physicists, we may also want to know how real numbers “help us to construct models of the external world”. Are the models literally made up of real numbers? If the answer is 'yes', then what does that mean? Do real numbers help us measure the external world via the use of models? That is, do the numbered relations of a model match the unnumbered relations of an object (or bit of the external world)? Would that mean that numbers belong to the external world as much as they belong to the models we have of the external world? Is the world, in other words, numerical? Thus, have we the philosophical right, as it were, to say of the studied objects (or bits of the external world) what we also say about the models of studied objects (or bits of the external world)? Platonists (realists) would say 'yes'. (Perhaps James Ladyman and Donald Ross, or ontic structural realists, would say 'yes' too.)


E. Brian Davies puts the empiricist position on mathematics at its broadest by referring to von Neumann, Quine, Church and Weyl. These mathematicians and philosophers “accepted that mathematics should be regarded as semi-empirical science” (115). Of course saying that maths is “semi-” anything is open to many interpretations. Nonetheless, what Davies says about real numbers, at least in part, clarifies this position.

Davies then brings the debate up to date when he tells us that contemporary mathematicians are “[c]ombining empirical methods with traditional proofs” (114). What's more, “the empirical aspect [is often] leading the way”. Indeed, Davies says, this position is “increasingly common even among pure mathematicians”.

Thursday, 15 September 2016

Kenan Malik's Extended Mind

This is a commentary on Kenan Malik's 'Extended Mind' chapter of his book, Man, Beast and Zombie (2000).

*) Malik offers a syllogistic argument thus:

i) The “human mind is structured by language”.
ii) “Language is public.”
iii) Therefore “the mind itself is public”.

Kenan Malik characterises “computational theory” as one that “suggests that everything that is necessary for the use of language is stored in each individual mind” (327).

Here we must make a distinction between necessary and sufficient conditions “for the use of language”. It may indeed be the case that “everything that is necessary for the use of language is stored in each individual mind”; yet it may also be the case that such things aren't sufficient for the use of language. In other words, the mechanics for language-use are individualistic; though what follows from that may not be. And what follows from the mechanics of language is, of course, language itself.

Thus Malik's quote from Putnam, that “'no actual language works like that [because] language is a form of cooperative activity, not an essentially individualistic activity'” (328), may not be to the point here. Indeed I find it hard to see what a non-cooperative and individualistic language would be like – even in principle. That must surely imply that Malik, if not Putnam, has mischaracterised Fodor's position. Another way to put this is to say that Fodor is as much an anti-Cartesian and Wittgensteinian as anyone else. The Language of Thought and “computational theory” generally are not entirely individualistic when we take them beyond their physical and non-conscious reality. How could they be?

There's an analogy here between this and the relation between DNA and its phenotypes. Clearly DNA is required for phenotypes. However, DNA and phenotypes aren't the same thing. In addition, environments, not only DNA, also determine the nature of the phenotype.

As I hinted at earlier, Malik's position hints at a debate which has involved Fodor, Putnam and Chomsky.

Malik rejects Fodor's internalism or individualism, as has been said. It was said that Fodor believes that something must predate language-use. So let Fodor explain his own position. Thus: “My view is that you can't learn a language unless you already know one.”

Fodor means something very specific by the clause “unless you already know one”. As he puts it:

“It isn't that you can't learn a language unless you've already learned one. The latter claim leads to infinite regress, but the former doesn't.” (385)

In other words, the language of thought isn't learned. It is genetically passed on from previous generations. It is built into the brains of new-born Homo sapiens babies.

Putnam gives a more technical exposition of Fodor's position. He writes:

“[Fodor] contends that such a computer, if it 'learns' at all, must have an innate 'programme' for making generalisations in its built-in computer language.”

Secondly, Putnam tackles Fodor's rationalist - or even platonic - position which argues for innate concepts. Putnam continues:

“[Fodor] concludes that every predicate that a brain could learn to use must have a translation into the computer language of that brain. So no 'new' concepts can be acquired: all concepts are innate.” (407)

Meanings Ain't in the Head

Because Malik argues that reference to natural phenomena is an externalist affair, and sometimes also scientific, it may follow that non-scientific individuals may not know the full meanings of the words, meanings or concepts within their heads. As Putnam famously put it: “Meaning just ain't in the head.”

Malik gives the example of the words (or mental representations?) 'ash' and 'elm'. Ash and elm trees are natural phenomena. In addition, their nature is determined and perhaps defined by their scientific nature. In other words, the reference-relation is not determined by the appearances of elm and ash trees. This results in a seemingly counterintuitive conclusion. Malik writes:

“Many Westerners have a distinct representation of 'ash' and 'elm' in their heads, but they have no idea how to distinguish ash and elm in the real world.”

I said earlier that references to ash and elm trees can't be fully determined by appearances. However, they can be fully distinguished solely by appearances. But that distinction wouldn't be enough to determine a reference-relation. The scientific nature of ash and elm trees must also be taken into account. Thus when it comes to the reference-relation to what philosophers call 'natural kinds' and other natural phenomena, the

“knowledge of gardeners, botanists, of molecular biologists, and so on, all play a crucial role in helping me refer to [in this instance] a rose, even though I do not possess their knowledge” (333).

Malik backs up his anti-individualistic theory of language and mind by offering an account of reference which owes much to Kripke and Putnam – certainly to Putnam.

Prima facie, it may seem that reference is individualistic or internalist. That is, what determines the reference of our words is some kind of relation between a word (as it exists in the mind) and that which it refers to or represents. On the externalist view, by contrast, reference isn't only a matter of the individual mind and the object-of-reference.

Malik, instead, offers what can be seen as a scientific account of reference.

Take his example of the “mental representation” of, as he puts it, 'DNA'. (Does Malik mean word here?) The reference-relation between 'DNA' and DNA is not only a question of what goes on in a mind (or in minds). Indeed “your mental representation of DNA (or mine) is insufficient to 'hook on to' DNA as an object in the world” (328). There's not enough meat, as it were, to make a sufficient reference-relation between 'DNA' and DNA in individual minds alone. Instead the scientific nature of DNA determines reference for all of us – even if we don't know the science.

Malik quotes Putnam again here. Reference for 'DNA' is “socially fixed and not determined simply by conditions of the brain of an individual” (329). Of course something that is scientifically fixed is also “socially fixed”. DNA may be a natural phenomenon; though the fixing of reference for 'DNA' to DNA is a social and scientific matter.


Fodor, Jerry. (1975) 'How There Could Be a Private Language and What It Must Be Like', in (1992) The Philosophy of Mind: Classical Problems, Contemporary Issues.
Putnam, Hilary. (1980) 'What Is Innate and Why: Comments on the Debate', in (1992) The Philosophy of Mind: Classical Problems, Contemporary Issues.

Wednesday, 20 April 2016

Scraps of Kant (1)

The Unexperienced Soul

In a sense, Kant is quite at one with Hume in that he believes that we never actually experience the self; or, in Kant’s terms, the “soul” (or the “substance of our thinking being”). This is because the soul is the mode through which we experience and is not, therefore, an object of experience. Perhaps it would be like a dog trying to catch its own tail. We can, of course, experience the “cognitions” of the soul; though we can’t experience the soul which has the cognitions. Like all other substances, including the substances of objects, the “substantial itself remains unknown” (978). We can, however, prove “the actuality of the soul” through the “appearances of the internal sense”. This is a proof of the soul, however, not an experience of it.

The Antinomies and Experience

What are the “antinomies”? They are subjects of philosophical dispute that have “equally clear, evident, and irresistible proof” (982) on both sides of the argument. That is, a proposition and its negation are both equally believable and acceptable in terms of rational inquiry.

Kant gives two examples of such arguments with equally weighty sides. One is whether the world had a beginning or has existed for eternity. The other is whether “matter is infinitely divisible or consists of simple parts” (982). What unites these arguments is that neither can be solved with the help of experience. In a sense, this is an empiricist's point. Indeed, according to the empiricism of the logical positivists, such arguments would have been considered non-arguments precisely because they can't be settled or solved by experience. As Kant puts it, such “concepts cannot be given in any experience” (982). It follows that such issues are transcendent to us.

Kant goes into further detail about experience-transcendent (or even evidence-transcendent) facts or possibilities. We can't know, through experience, whether the “world” (i.e., the universe) is infinite or finite in magnitude. Similarly, infinite time can't “be contained in experience” (983). Kant also questions the intelligibility of talk of space beyond universal space, or of a time before universal time. If there were a time before time, it would not actually be a time “before” time, because time is continuous. And if there were a space beyond universal space, it wouldn't be “beyond” universal space, because there can be no space beyond space itself.

Kant also questions the validity of the notion of “empty time”. That is, time without space and objects within space. This is because he thinks that time, space and objects in space are interconnected. Perhaps Kant believed that time wouldn't pass without objects to, as it were, measure the elapsing of time (through disintegration and growth). Similarly, space without time would be nonsensical, on Kant’s cosmology.

The Unperceived Tree in Space and Time

This is very much like Berkeley’s argument.

Thus, when we imagine a tree unperceived, we are in fact imagining it as it is perceived, though perceived by some kind of disembodied mind. Or, as Kant puts it, we represent “to ourselves that experience actually exists apart from experience or prior to it” (983). Thus, when we imagine the objects of the senses existing in a “self-subsisting” manner, we are in fact imagining them as they would be when experienced. That isn't surprising, because there's no other way to imagine the objects of the senses.

Space and time, on the other hand, are “modes” through which we represent external objects of the senses. As Bertrand Russell put it, we wear spatial and temporal glasses through which we perceive the world. If we took the glasses off, space and time would simply disappear: they have no actuality apart from our minds. Appearances must be given to us in the containers we call “space and time”. Space and time are the vehicles of our experiences of the objects of the senses. In a sense, it seems a pretty banal truism to say that “objects of the senses therefore exist only in experience”, because quite evidently there are no experiences without the senses, and our senses themselves determine those experiences.

Freedom and Causal Necessity

“…if natural necessity is referred merely to appearances and freedom merely to things in themselves…” [984]

This position unites Kant with Hume, who also thought that necessity is something we impose on the world. That is, necessity belongs only to appearances, not to things-in-themselves. This could also be deemed a forerunner of the logical positivist idea that necessity is a result (or product) of our conventions, not of the world itself. Of course, just as conventions belong to minds, so too do appearances. Freedom, that is, independence from causal necessity, is found only in things-in-themselves. The substance of the mind is also a thing-in-itself; therefore the mind too is free from causal necessitation. The only things subject to causal necessitation are the objects of experience. Things-in-themselves (noumena) are free.

Thus Kant manages to solve a very difficult problem: the problem of determinism. That is, “nature and freedom” can exist together. Nature is not free; things-in-themselves (including the mind’s substance) are. Both nature and freedom can “be attributed to the very same thing”. That is, human beings are beings of experience and also beings-in-themselves. The experiential side of human nature is therefore subject to causal laws, whereas the mind transcends causal necessitation. We are, therefore, partly free and partly unfree.

Kant has a particular way of expressing what he calls “the causality of reason”. Because reason is free, its cognitions and acts of will can be seen as examples of “first beginnings” (986). A single cognition or act of will is a “first cause”: it is not one of the links in a causal chain. If cognitions and acts of will were links in such a possibly infinite causal chain, then there would be no true freedom. First beginnings guarantee us freedom of the will and self-generated (or self-caused) cognitions. In contemporary literature, such “first beginnings” are called “originations” – and what a strange notion that is! What does it mean to say that something just happens ex nihilo? Would such originations be arbitrary or even chaotic – sudden jolts in the dark of our minds? They would be like quantum fluctuations, in which particles suddenly appear out of the void. Why would such things guarantee us freedom rather than make us the victims of chance?

Knowledge of Things-in-Themselves

Kant says that we can't know anything about things-in-themselves, yet he also says that “we are not at liberty to abstain entirely from inquiring into them” (989). So which is it? Can we have knowledge of things-in-themselves or not? Perhaps Kant means that although we can indeed inquire into things-in-themselves, it will be a fruitless endeavour. Or perhaps it's a psychological need to inquire, because “experience never satisfies reason fully” (989). Alternatively, though our inquiries into things-in-themselves won't give us knowledge, we can still offer conjectures or suppositions about such things. That is, we can speculate about the true nature of things-in-themselves, though we'll never have knowledge (in the strict sense) of them.

There are questions that will press upon us despite the fact that answers to them may never be forthcoming. Kant, again, gives his earlier examples of evidence- or experience-transcendent issues such as “the duration and magnitude of the world, of freedom or of natural necessity” (989). However, experience lets us down on these issues. Reason shows us, according to Kant, “the insufficiency of all physical modes of explanation” (989). Can reason truly offer us more?
Again, Kant tells us that we can't be satisfied by the appearances. The

“chain of appearances…has…no subsistence by itself…and consequently must point to that which contains the basis of these appearances” [990].

Of course it's reason itself that will “hope to satisfy its desire for completeness” (990). However, it's not clear whether reason can satisfy our yearnings by giving us knowledge of things-in-themselves. Yet “we can never cognise these beings of understanding” but “must assume them”. It is reason that “connects them” (991) with the sensible world (and vice versa). It must follow, therefore, that although “we can never cognise these beings of understanding”, there must be some alternative way of understanding them. Which way is that?

*) All the notes above are readings of Kant's Prolegomena to Any Future Metaphysics.