Sunday, 1 June 2014

19th Century Inductive & Deductive Logics








General Introduction


Logic became less relevant to other areas of discourse in the early 20th century. More specifically, it became less relevant to philosophy and to the experimental sciences. Many 19th-century philosophers, by contrast, actually studied the kind of reasoning that is employed in the experimental sciences. Logic was not seen as autonomous, but as something developed by studying actual examples of reasoning. 20th-century logic, on the other hand, developed mainly in isolation, although it was still believed that its findings could or would be applicable outside pure logic itself.

The simple reason why deductive logic impressed 20th-century mathematical logicians is that in a deductive system one moves from truths to further truths. In inductive logic, by contrast, we move from probabilities to further probabilities. Inductive logic is not air-tight in the manner in which deductive logic is. In deductive logic one essentially accumulates more truths from a given set of truths (premises, axioms, and so on).

Why, exactly, were these non-deductive forms of reasoning largely ignored in the Fregean and post-Fregean age of mathematical logic? The answer is that they could provide no help to foundational research in mathematics; and, of course, mathematical systems are deductive systems. In addition, their neglect was part and parcel of the rejection of any form of ‘psychologism’ in logic and philosophy. ‘Pure logic’ (Husserl’s term) does not deal with thought processes and reasonings at all. It deals with timeless logical laws, truths and principles that would be true even if no one had ever thought or expressed them. Inductive reasoning, however, fundamentally relies on observations, and the notion of observation is clearly a psychological one. The same goes for Peircian abduction: the abductive act is clearly a psychological phenomenon, even a creative one.


In everyday life we do not rely that much on deductive logic. This may make it seem strange that mathematical logic has been so important to 20th-century philosophers. In a sense, neither mathematical logic nor deductive logic seems closely connected to how people actually reason. That may simply be because most people do not reason correctly and that, consequently, we ought to mimic the inferences of deductive and mathematical logic. Clearly inductive logic is primarily concerned with observations, whereas deductive logic is not at all concerned with such things. What is the classic take on inductive inference? Firstly, we observe a finite number of objects or events. Then we generalise about such objects. The actual inference itself is one to as-yet-unobserved phenomena, which, we infer, will display the features we have generalised from the observed phenomena. For example, we infer that the next swan we see will be white. Now there is an important difference between inductive and deductive inference. Inductive inferences are not sound, but deductive inferences are. What does “sound” mean? It means that the conclusion of a sound inference cannot be false if its premises are true. Deductive inferences must be sound; inductive inferences need not be. Instead, inductive conclusions are probable rather than guaranteed. In a sense, in inductive logic we never have enough knowledge to fully warrant our conclusions, because no finite amount of observation would render the conclusion certain. As Flach puts it, in inductive reasonings there will always be “missing knowledge” (682). We must make “educated guesses” as to the nature of this missing knowledge.
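Purely as an illustration (nothing here comes from the texts discussed; all names in the sketch are invented), the contrast can be put in a few lines of code using the swan example. The inductive generalisation is defeasible, whereas the deductive step cannot fail so long as its premises hold.

# Illustrative sketch only (all names invented): enumerative induction vs. deduction
# on the white-swan example from the paragraph above.

observed_swans = ["white", "white", "white", "white"]   # a finite set of observations

# Inductive step: generalise from the observed cases. The generalisation is only
# probable; nothing rules out a future counterexample.
all_observed_white = all(colour == "white" for colour in observed_swans)
inductive_generalisation = "All swans are white" if all_observed_white else None
prediction_for_next_swan = "white" if all_observed_white else "unknown"

# Deductive step: given the premises, the conclusion cannot be false.
# Premise 1: all swans are white. Premise 2: this bird is a swan.
def deduce_colour(is_swan, all_swans_white):
    if all_swans_white and is_swan:
        return "white"      # follows necessarily from the premises
    return None             # the premises do not settle the matter

print(inductive_generalisation)                  # 'All swans are white' (defeasible)
print(prediction_for_next_swan)                  # 'white' (probable, not guaranteed)
print(deduce_colour(True, all_observed_white))   # 'white' (guaranteed only if the premises are true)

# A single black swan defeats the induction, though not the validity of the deduction:
observed_swans.append("black")
print(all(colour == "white" for colour in observed_swans))   # False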

A lack of soundness was not seen as a big problem by many 19th-century philosophers, especially since such soundness is not available in principle when it comes to inductive logic. In fact, non-sound logic was seen as very useful by many 19th-century philosophers and logicians (and later ones). And, conversely, sound logic may well be pretty useless at specific times or in particular situations. There is a drawback, however: unsound conclusions may turn out to be false. But even this is not so bad if seen in the light of, say, C.S. Peirce’s ‘fallibilism’ [see Hookway].



Scientific Hypothesis and Induction



In the 19th century the logician De Morgan believed that hypothesis formation is a creative act. It relies on the scientist’s imagination just as much as it relies on logic, facts, or data. The traditional or common view, however, is that a hypothesis is the end result of some kind of inferential process; that is, a process at the end of which we arrive at a hypothesis that can work as a basis for further inferences and reasonings. De Morgan believed that the hypothesis comes at the beginning of all scientific reasonings, not at the end. If the hypothesis were the result of logical reasonings, then, according to logic itself, that hypothesis would somehow be contained in the starting point of those reasonings, even if the logician didn’t know or recognise this to be the case. Whatever is derived from a set of logical premises must somehow be there from the very beginning. In this respect logic is no different from mathematics. Mathematicians will say that if there is information contained in the derived theorems that was not implicitly or explicitly contained in the axioms, then the mathematician has gone wrong somewhere. A hypothesis cannot possibly be the end result of a purely inferential process. A hypothesis is a leap in the dark, not a logical result. If it were just a logical result, then in a certain sense science would never have moved forward to new and interesting discoveries. Logic, on the other hand, is more or less the process of unpacking what is contained in the premises, logical truths, principles, axioms, or laws with which one begins one’s logical reasonings. Or, put Platonistically, the whole of mathematics and logic is already there waiting to be discovered. A hypothesis, on the other hand, does not articulate what is already there. It tells us that if such-and-such is the case, then the hypothesis may explain it.

If the inferential process that resulted in a hypothesis were an inductive one, then the hypothesis would still not be strictly logical in nature. It would be a probable hypothesis. That is, if induction deals with probabilities, then inductive logic is not true logic. True logic deals with certainties and necessities, not with probabilities. Inductive logic may use necessary and certain truths, and even the inferences found in deductive logic, but it does not thereby become a deductive logic, because its main task is still to generalise from given phenomena and assert certain probabilities about such phenomena. Induction is more a case of ‘if… then…’ than of ‘this is derivable from that’. If a hypothesis were certain or necessary, or even highly probable, it would not thereby be a hypothesis at all.
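For readers who want this talk of probabilities made concrete, one classical and purely illustrative way of doing so (not something the discussion above itself invokes) is Laplace’s rule of succession: assuming a uniform prior over the unknown proportion of white swans, after n white swans have been observed with no exceptions, the probability assigned to the next swan being white is

% Illustrative only: Laplace's rule of succession, assuming a uniform prior over the
% unknown proportion of white swans, with n white swans observed and no exceptions.
\[
  P(\text{next swan is white} \mid n \text{ white swans, no exceptions}) \;=\; \frac{n+1}{n+2}
\]
% For example, after 4 white swans the value is 5/6: high, but never equal to 1.

The value approaches but never reaches 1, which is one way of cashing out the claim that induction yields probability rather than certainty.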

Inductive Logic as the Logic of Truth



Many 19th-century logicians and philosophers asked themselves the following question: what truly distinguishes inductive from deductive logic? The answer is that the former is a logic of truth. Other logics primarily deal with consistency, validity, consequence, and so on. For example, certain logics can begin with axioms or premises that are in fact false, yet what is derived from them may still be valid and consistent. Why can’t we also say that whereas the axioms are false, the deductions are still true because they are correctly derived from the axioms or premises? What we can say here is this: since the notion of truth is so problematic and disparate, why shouldn’t we call statements that belong to a valid and consistent system ‘true’? It will all depend on what, precisely, we take truth to be. Similarly, the deductions from an inductively derived premise may only be correct. The distinction between ‘correct’ and ‘true’ may be helpful, but we may nevertheless lack good philosophical reasons for making such a distinction. We can say that correct statements are a particular type of true statement, or that true statements are a particular kind of correct statement. Some truths are determined by empirical realities; others are determined by the fact that they are correctly derived or deduced from given premises or statements. Does this distinction have any metaphysical or semantic weight? The same may be the case with those Wittgensteinian distinctions between statements that are true and statements that are merely correct [see Wittgenstein’s Remarks on the Foundations of Mathematics]. That is, certain things are correct simply because they conform to certain conventional rules, norms or principles. Other things are true in spite of what the community thinks or of its norms and rules. Why not simply invert these terms, so that ‘truth’ becomes ‘correctness’ and ‘correctness’ becomes ‘truth’? The point is that although these distinctions are effective in distinguishing certain types of things or statements from one another, they are rarely drawn from the perspective of a metaphysical analysis of the nature of truth or correctness. Is the difference just one of use and convenience, or are there strong metaphysical differences that must come into the picture?

Logic as a ‘Theory of Inquiry’



If logic is a ‘theory of inquiry’ (as both F.C.S. Schiller and Bernard Bosanquet held), then it will be an inquiry into human thought rather than into the merely formal relations between symbols and statements. The prime pursuit of such a logic will be to understand how people actually fall into error. Its concern, therefore, will not be logical validity or consistency, since these things do not depend on truth; and it is truth, according to Schiller, that is important.

Schiller also argues that science simply isn’t as logical as many people assume. Logical techniques and methodologies, in Schiller’s eyes, are not as foolproof as many imagine. He also argued that logic alone cannot solve all the problems that bedevil scientific experiment and research. The scientist simply cannot know, a priori, all the possible “unforeseen objections” that may appear on the scene in the future. There is simply no way that these future possibilities can be known or pre-empted beforehand. It is not only objections that are unforeseen, but also

new conditions and unknown possibilities of error.

As a philosopher of time or a metaphysician would tell the scientist or layperson, no matter what happens in the future, the scientist cannot know what will or may happen. Indeed the scientist should apply logical principles to this view of the future. That is, we cannot logically predict any future happening, simply because logic deals with present actualities, not future possibilities. Think, for instance, of the impossibility of scientists predicting quantum phenomena in the early 19th century. Think of the way the law of excluded middle later came under suspicion. Think of the rise of multi-valued logics. And think of all the revolutions in 20th-century science as a whole. Hardly any of these could have been prophesied in the 19th century. A traditional empiricist would say that many of these things couldn’t have been predicted because such predictions, by their very nature, depend on past and present experiences of certain kinds. And because such things were not experienced in the 19th century, it follows that they could not be predicted either. This means that no matter how strong or speculative the prediction is, it would still depend on certain empirical facts or experiences. If these were not available to point in the direction of, say, quantum phenomena, then such things could not even be speculated about.

Of course it might have been the case that certain speculators predicted these things by, as it were, accident. They might have made a wild leap in the dark. However, if this were the case, then no other scientist would have taken any notice of such speculations, because they would have been completely devoid of any observational or experimental foundation. The sceptic would not, of course, demand a total justification of the speculator’s prediction or hypothesis, but he would at least expect such a person to have one foot on the ground, even if every other part of his body were in the air. The sceptic realises that unsubstantiated or unjustified hypotheses are vital in science. However, such hypotheses should not belong to another universe, as it were.


Bradley’s Critique of Mill’s ‘Inductive Method’




Most syllogisms work from universal statements to statements about particulars. J.S. Mill turned this on its head by working, so he thought, from particulars to general statements. Bradley, however, said that Mill did not in fact move from particulars to the general, but from the general to the general. Bradley gives the Millian example of inferring from ‘this burnt’ and ‘that burnt’ to ‘this other thing will burn’. The general statement in this case would be that

Things of sort X will burn in such and such conditions.

Bradley, however, says that there are general ideas or universals hidden there from the start. For example, how does Mill know that his ‘this’ and ‘that’ are examples of the same type of thing? According to Bradley, he can only do so by utilising the universal of resemblance. Not only does he use the universal of resemblance to connect his ‘this’ and ‘that’; he must also realise that they are similar precisely because they share certain features other than the property of burning under certain conditions. Not only does Mill use the universal of resemblance, he also uses the universals that individuate and connect different objects as being in fact objects of the same type. Universals are actually used to talk about what Mill takes to be ‘pure particulars’. They could be taken as particulars in the sense that they set the ball rolling in this particular case of inductive inference. But they are only particulars because they are used as a starting point in an inquiry, not because they are genuine particulars. They are not in fact genuine particulars. They are generalities that just happen to be used, for whatever reason, as the starting point of a particular inquiry. This means that Mill could quite easily have used other particulars, even other particulars in a similar circumstance or ones that involved the same objects or events. In that sense, Mill’s particulars are only particulars to him at a certain point in time. They are not genuine particulars simpliciter. The context makes them particular, not the nature of the particulars themselves.

What Bradley says about Mill is that he actually picks out, quite arbitrarily, what he takes to be a particular. The situations that Mill refers to actually contain many ‘generalities’. It is just that Mill selects certain things to be taken as particulars. But if such a selection process actually took place, then it could only be carried out if Mill simply ignored the other general properties that were there in that particular situation. If Mill effectively ignores many general properties, then these general properties must have been there at the beginning of his inquiry. And in order to select one property as a particular, Mill must already have used general terms and concepts to do so. Moreover, in order to distinguish the chosen particular property from all the other general properties, he must use general terms and concepts not only to determine the nature of the general properties in the situation, but also to determine the property that he takes to be a particular. In order to count as a particular, the particular too needs to be taken as general in some ways; otherwise there would be no way of distinguishing it from all the other surrounding general properties. A particular property can only be taken as a particular if it is also taken as an exemplification of various general features. The particular can only be distinguished from those general properties by applying general concepts and terms to it. So Mill’s inductive ‘method’, according to Bradley, only gets off the ground by the inductivist “excluding one or the other of these properties” from the inductive inquiry that is under way. And, as we’ve said, this operation of exclusion can only happen when the inductivist uses general concepts and terms to individuate the surrounding properties and also to individuate what it is that he is taking to be a particular. In short, the inductivist can only conduct his inquiry by using many general terms and concepts; it would not even get going without them.


A Short Technical Digression on Peircian Abduction



Peircian abduction, prima facie, may seem indistinguishable from induction. An abduction somehow explains certain observations. In other words, it is a hypothesis. Or, the other way round, from such an abductive hypothesis we can know what kind of observations to expect given pre-existing data. Unlike induction, an abductive argument will begin with some kind of generalisation:

All the beans from this bag are white.
These beans are white.
Therefore, these beans are from this bag.

The second premise moves to the particular. The conclusion, in this case, in a sense fuses the first and second premises. That is, because all the beans in the bag are white, these particular white beans may be from that bag. In the above example, it is not yet known where the white beans have come from. The conclusion, given the first premise, hypothesises that since all the beans in the bag are white, these particular white beans may well also be from the bag. The first premise can itself be seen as the conclusion of a previous inductive argument: from observations of many particular white beans taken from the bag, it might have been concluded that all the beans in the bag are white. Or, to use Flach’s terms, the first premise of the abductive argument gives us the inductive “general rule”. The abductive part of the argument, as it were, is the inference that the particular white beans in front of the observer are probably from the bag of white beans. In this instance, abduction takes over where induction left off.
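The contrast between the three Peircian patterns can also be set out schematically. What follows is only an illustrative sketch (the three bean statements are Peirce’s; the data structure and labels are invented for the example): deduction infers the result from the rule and the case, induction infers the rule from the case and the result, and abduction infers the case from the rule and the result, which is why its conclusion remains a hypothesis.

# Illustrative sketch of Peirce's three bean schemas (rule / case / result).
# The three statements are Peirce's; the dictionary and labels are invented.

rule = "All the beans from this bag are white."
case = "These beans are from this bag."
result = "These beans are white."

schemas = {
    # name: (premises, conclusion, epistemic status of the conclusion)
    "deduction": ((rule, case), result, "necessary, given the premises"),
    "induction": ((case, result), rule, "probable generalisation; a further sample could defeat it"),
    "abduction": ((rule, result), case, "hypothesis; the white beans could have come from elsewhere"),
}

for name, (premises, conclusion, status) in schemas.items():
    print(name.upper())
    for premise in premises:
        print("  premise:   ", premise)
    print("  conclusion:", conclusion, "[" + status + "]")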

References and Further Reading

Bosanquet, B. – (1888) Logic
Bradley, F.H. – (1883) The Principles of Logic
Hookway, C. – (1985) Peirce, London
Husserl, E. – (1900/1) Logical Investigations
Flach, P.A. – (2002) ‘Modern Logic and its Role in the Study of Logic’, in A Companion to Philosophical Logic, ed. D. Jacquette, Blackwell Publishers
Frege, G. – (1884/1959) The Foundations of Arithmetic, trans. J.L. Austin, Oxford: Blackwell
George, R. and Van Evra, J. – (2002) ‘The Rise of Modern Logic’, in A Companion to Philosophical Logic, ed. D. Jacquette, Blackwell Publishers
Mill, J.S. – (1865) A System of Logic, various editions
Peirce, C.S. – (1931/1974) Collected Papers, ed. C. Hartshorne and P. Weiss, Harvard University Press
Wittgenstein, L. – (1967) Remarks on the Foundations of Mathematics, ed. G.H. von Wright, R. Rhees and G.E.M. Anscombe, Blackwell
