Friday 21 April 2017

Integrated Information Theory: Information (4)



The word 'information' has massively different uses, some of which differ strongly from the ones we use in everyday life. Indeed we can use the words of Claude E. Shannon to back this up:

"It is hardly to be expected that a single concept of information would satisfactorily account for the numerous possible applications of this general field." [1949]
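Shannon's own sense of 'information' is purely quantitative: the information carried by an outcome depends only on its probability, not on what it means to anyone. A minimal sketch of that idea (the function name is mine, not Shannon's):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries one bit per toss; a biased coin carries less;
# a certain outcome carries no information at all.
print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([0.9, 0.1]))  # ~0.469
print(shannon_entropy([1.0]))       # 0.0
```

Note that nothing in this calculation requires a mind or observer: it is a property of a probability distribution, which is part of why the technical use diverges from the everyday one.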

The most important point to realise is that minds (or observers) are usually thought to be required in order for information to count as information. However, information is also said to exist without minds/observers. It existed before minds and it will exist after minds. This, of course, raises lots of philosophical and semantic questions.

It may help to compare information with knowledge. The latter requires a person, mind or observer. The former (as just stated), may not.

Integrated information theory's use of the word 'information' receives much support in contemporary physics. This support includes how such things as particles and fields are seen in informational terms. As for dynamics: if there's an event which affects a dynamic system, then that event can be read as information.

Indeed in the field called pancomputationalism (just about) anything can be deemed to be information. In these cases, that information could be represented and modelled as a computational system.

Consciousness as Integrated Information

It's undoubtedly the case that Giulio Tononi believes that consciousness simply is information. Thus, if that's an identity statement, then we can invert it and say that information is consciousness. In other words, 

consciousness (or experience) = information

Consciousness doesn't equal just any kind of information; though any kind of information (embodied in a system) may be conscious to some extent.

Tononi believes that an informational system can be divided into its parts. Its parts contain information individually. The whole of the system also has information. The information of the whole system is over and above the combined information of its parts. That means that such extra information (of that informational system) must emerge from the information contained in its parts. This, then, seems to be a commitment to some kind of emergentism.

The mathematical measure of that information (in an informational system) is φ (phi). Not only is the system more than its parts: that system also has degrees of informational integration. The higher the informational integration, the more likely that informational system will be conscious. Or, alternatively, the higher the degree of integration, the higher the degree of consciousness.
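The bare idea of a whole carrying information over and above its parts can be given a toy rendering. The sketch below computes total correlation (the sum of the parts' entropies minus the whole's entropy). This is emphatically not Tononi's φ - the real measure involves searching over partitions of a system's cause-effect structure - but it does illustrate, under that simplification, how coupled parts carry information beyond what the parts carry separately:

```python
import math
from collections import Counter

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def total_correlation(joint):
    """Sum of the two parts' entropies minus the whole's entropy.

    joint: dict mapping (a, b) states to probabilities.
    Zero when the parts are independent; positive when they're coupled."""
    pa, pb = Counter(), Counter()
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return entropy(pa.values()) + entropy(pb.values()) - entropy(joint.values())

# Two independent coins: the whole is exactly the sum of its parts.
independent = {(0, 0): .25, (0, 1): .25, (1, 0): .25, (1, 1): .25}
# Two perfectly coupled coins: the whole carries 1 extra bit.
coupled = {(0, 0): .5, (1, 1): .5}
print(total_correlation(independent))  # 0.0
print(total_correlation(coupled))      # 1.0
```

Again, this is only a crude stand-in: φ proper is defined over a system's intrinsic cause-effect structure, not over a static joint distribution.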

Emergence from Brain Parts?

Again, we can argue that the IIT position on what it calls “phi” is a commitment to some form of emergence in that an informational system is - according to Christof Koch - “more than the sum of its parts”. This is what he calls “synergy”. Nonetheless, a system can be more than the sum of its parts without any commitment to strong emergence. After all, if four matches are shaped into a square, then that's more than an arbitrary collection of matches; though it's not more than the sum of its parts. (Four matches scattered on the floor wouldn't constitute a square.) However, emergentists have traditionally believed that consciousness is more than the sum of the brain's parts. Indeed, in a strong sense, it can even be said that consciousness itself has no parts. Unlike water and its parts (individual H2O molecules), consciousness is over and above what gives rise to it (whatever that is). It's been seen as a truly emergent phenomenon. Water isn't, strictly speaking, strongly emergent from H2O molecules. It's a large collection of H2O molecules. (Water = H2O molecules.) Having said that, in a sense water does weakly emerge from a large collection of H2O molecules.

The idea of the whole being more than the sum of its parts has been given concrete form in the example of the brain and its parts. IIT tells us that the individual neurons, ganglia, amygdala, visual cortex, etc. each have “non-zero phi”. This means that if they're taken individually, they're all (tiny) spaces of consciousness unto themselves. However, if you lump all these parts together (which is obviously the case with the human brain), then the entire brain has more phi than each of its parts taken individually; as well as more phi than each of its parts taken collectively. Moreover, the brain as a whole takes over (or “excludes”) the phi of the parts. Thus the brain, as we know, works as a unit; even if there are parts with their own specific roles (not to mention the philosopher's “modules”).

Causation and Information

Information is both causal and structural.

Say that we've a given structure (or pattern) x. That x has a causal effect on structure (or pattern) y. Clearly x's effect on y can occur without minds. (At least if you're not an idealist or an extreme anti-realist/verificationist.)

Instead of talking about x and y, let's give a concrete example.

Take the pattern (or structure) of a sample of DNA. That DNA sample causally brings about the development (in particular ways) of the physical nature of a particular organism (in conjunction with the environment, etc.). This would occur regardless of observers. That sample of DNA contains (or is!) information. The DNA's information causally brings about physical changes; which, in some cases, can themselves be seen as information.

Some commentators also use the word “representation” within this context. Here information is deemed to be “potential representation”. Clearly, then, representations are representations to minds or observers; even if the information - which will become a representation - isn't so. Such examples of information aren't designed at all (except, as it were, by nature). In addition, just as information can become a representation, so it can also become knowledge. It can be said that although a representation of information may be enriched with concepts and cognitive activity, this is much more the case with information in the guise of knowledge.

Panpsychism?

The problem with arguing that consciousness is information is that information is everywhere: even basic objects (or systems) have a degree of information. Therefore such basic things (or systems) must also have a degree of consciousness. Or, in IIT speak, all such things (systems) have a “φ value”; which is the measure of the degree of information (therefore consciousness) in the system. Thus David Chalmers' thermostat [1997] will have a degree of consciousness (or, for Chalmers, proto-experience).

It's here that we enter the territory of panpsychism. Not surprisingly, Tononi is happy with panpsychism; even if his position isn't identical to Chalmers' panprotopsychism.

Scott Aaronson, for one, states one problem with the consciousness-is-everywhere idea in the following:

“[IIT] unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly ‘conscious’ at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are ‘slightly’ conscious (which would be fine), but that they can be unboundedly more conscious than humans are.”

Here again it probably needs to be stated that if consciousness = information (or that information – sometimes? - equals consciousness), then consciousness will indeed be everywhere.

***************************************

Add-on: John Searle on Information

How can information be information without minds or observers?

John Searle denies that there can be information without minds/observers. Perhaps this is simply a semantic dispute. After all, the things which pass for information certainly exist and they've been studied - in great detail! - from an informational point of view. However, they don't pass Searle's following tests; though that may not matter very much.

Take, specifically, Searle's position as it was expressed in a 2013 review (in The New York Review of Books) of Christof Koch’s book Consciousness. In that piece Searle complained that IIT depends on a misappropriation of the concept [information]:

“[Koch] is not saying that information causes consciousness; he is saying that certain information just is consciousness, and because information is everywhere, consciousness is everywhere. I think that if you analyze this carefully, you will see that the view is incoherent. Consciousness is independent of an observer. I am conscious no matter what anybody thinks. But information is typically relative to observers...

...These sentences, for example, make sense only relative to our capacity to interpret them. So you can’t explain consciousness by saying it consists of information, because information exists only relative to consciousness.” [2013]

If information is the propagation of cause and effect within a given system, then John Searle's position must be wrong. Searle may say, then, that such a thing isn't information until it becomes information in a mind or according to observers. (Incidentally, there may be anti-realist problems with positing systems which are completely free of minds.)

Searle argues that causes and effects - as well as the systems to which they belong - don't have information independently of minds. However, that doesn't rule out such causes and effects becoming information once they're directly observed.

Anthropomorphically, the system communicates to minds. Or minds read the system's messages.

Searle's position on information can actually be said to be a position on what's called Shannon information. This kind of information is “observer-relative information”. In other words, it doesn't exist as information until an observer takes it as information. Thus when a digital camera takes a picture of a cat, each photodiode works in causal isolation from the other photodiodes. In other words, unlike the bits of consciousness, the bits of a photograph (before it's viewed) aren't integrated. Only when a mind perceives that photo are the bits integrated.

IIT, therefore, has a notion of “intrinsic information”.

Take the brain's neurons. Such things do communicate with each other in terms of causes and effects. (Unlike photodiodes?) It's said that the brain's information isn't observer-relative. Does this contradict Searle's position? IIT is talking about consciousness as information not being relative to external observers; though is it relative to the brain and to consciousness itself?

There's an interesting analogy here which was also cited by Searle. In his arguments against Strong Artificial Intelligence (strong AI) and the mind-as-computer idea, he basically states that computers – like information - are everywhere. He writes:

“... the window in front of me is a very simple computer. Window open = 1, window closed = 0. That is, if we accept Turing’s definition according to which anything to which you can assign a 0 and a 1 is a computer, then the window is a simple and trivial computer.” [1997]

Clearly, in these senses, an open and shut window also contains information. Perhaps it couldn't be deemed a computer if the window's two positions didn't also contain information. Thus, just as the window is only a computer to minds/observers, so too is that window's information only information to minds/observers. The window, in Searle speak, is an as-if computer which contains as-if information. And so too is Chalmers' thermostat and Koch's photodiode.
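Searle's window can be given a toy rendering (the class and names are mine, not Searle's). The point of the sketch is that the 0/1 assignment is entirely observer-supplied; the window itself just opens and closes:

```python
# A hypothetical one-bit "window computer" in Searle's sense.
# The convention (open = 1, closed = 0) does all the computational
# work; nothing intrinsic to the window makes it a computer.
class Window:
    def __init__(self):
        self.open = False  # the window starts closed

    def bit(self):
        # Observer-relative reading: open = 1, closed = 0
        return 1 if self.open else 0

w = Window()
print(w.bit())  # 0
w.open = True
print(w.bit())  # 1
```

On Searle's view, the same physical states could just as well have been assigned the opposite bits, which is exactly what makes the “information” here observer-relative.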

Here's Searle again:

"I say about my thermostat that it perceives changes in the temperature; I say of my carburettor that it knows when to enrich the mixture; and I say of my computer that its memory is bigger than the memory of the computer I had last year."

Another Searlian (as well as Dennettian) way of looking at thermostats and computers is that we can take an “intentional stance” towards them. We can treat them - or take them - as intentional (though inanimate) objects. Or we can take them as as-if intentional objects.

The as-if-ness of windows, thermostats and computers is derived from the fact that these inanimate objects have been designed to perceive, know and memorise. Though this is only as-if perception, as-if knowledge, and as-if memory. Indeed it is only as-if information. Such things are dependent on human perception, human knowledge, and human memory. Perception, knowledge and memory require real - or intrinsic - intentionality; not as-if intentionality. Thermostats, windows, and computers have a degree of as-if intentionality, derived from (our) intrinsic intentionality. However, despite all these qualifications, as-if intentionality is still ‘real’ intentionality (according to Searle); though it's derived from intrinsic intentionality.

References

Searle, John (1997) The Mystery of Consciousness.
Tononi, Giulio (2015) 'Integrated Information Theory'.

*) Next: 'Integrated Information Theory: Panpsychism' (5)


Monday 27 March 2017

Integrated Information Theory: From Consciousness to the Brain (2)



Integrated Information Theory (IIT) demands a physical explanation of consciousness. This rules out, for example, entirely functional explanations; as well as unwarranted correlations between consciousness and the physical. Indeed if consciousness is identical to the physical (not merely correlated with it or caused by it), then clearly the physical (as information, etc.) is paramount in the IIT picture.

All this is given a quasi-logical explanation in terms of axioms and postulates. That is, there must be identity claims between IIT's "axioms of consciousness" and postulates about the physical. Moreover, the axioms fulfill the role of premises. These premises lead to the physical postulates.

So what is the nature of that relation between these axioms and their postulates? How do we connect, for example, a conscious state with a neuroscientific explanation of that conscious state? How is the ontological/explanatory gap crossed?

As hinted at earlier, the identity of consciousness and the physical isn't a question of the latter causing or bringing about the former. Thus, if x and y are identical, then x can't cause y and y can't cause x. These identities stretch even as far as phenomenology in that the phenomenology of conscious state x at time t is identical with the physical properties described by the postulates at time t.

More technically, Giulio Tononi (2008) identifies conscious states with integrated information. Moreover, when information is integrated (by whichever physical system – not only the brain) in a complicated enough manner (even if minimally complicated), that will be both necessary and sufficient to constitute (not cause or create) a conscious state or experience.

Explaining IIT's identity-claims (between the axioms of consciousness and the physical postulates) can also be done by stating what Tononi does not believe about consciousness. Tononi doesn't believe that

i) the brain's physical features (described by the postulates) cause or bring about consciousness.
ii) the brain's physical features (described by the postulates) are both necessary and sufficient for consciousness.

Causality

Where we have the physical, we must also have the causal. And indeed IIT stresses causality. If consciousness exists (as the first axiom states), then it must be causal in nature. It must “make a causal difference”. Thus epiphenomenalism, for one, is ruled out.

Again, consciousness itself must have causal power. Therefore this isn't a picture of the physical brain causing consciousness (or even subserving consciousness). It is said, in IIT, that “consciousness exists from its own perspective”. This means that a conscious state qua conscious state (or experience qua experience) must have causal power both on itself and on its exterior. Indeed the first axiom (of existence) and its postulate require that a conscious state has what's called a “cause-effect power”. It must be capable of having an effect on behaviour or actions (such as picking something up) as well as a “power over itself”. (Such as resulting in a modification of a belief caused by that conscious state?) This, as stated earlier, clearly rules out any form of epiphenomenalism.

Now does this mean that a belief (as such) has causal powers? Does this mean that the experience of yellow has – or could have – causal powers? Perhaps because beliefs aren't entirely phenomenological, and spend most of their time in the “belief box” (according to non-eliminative accounts), they aren't a good candidate for having causal powers in this phenomenological sense. However, the experience of yellow has causal power if it can cause a subject to pick up, say, a lemon (qua lemon).

From Consciousness to Brain Again

Even if IIT starts with consciousness, it's very hard, intuitively, to see how it would be at all possible to move to the postulated physical aspects (not bases or causes) of a conscious state. How would that work? How, even in principle, can we move from consciousness (or phenomenology) to the physical aspects of that conscious state? If there's an ontological/explanatory gap between the physical and the mental, then there will also be an ontological/explanatory gap between consciousness and the physical. (There'll also be epistemological gaps.) So how does this IIT inversion solve any of these problems?

The trick is supposed to be pulled off by an analysis of the phenomenology of a conscious state (or experience) and then accounting for that with the parallel state of the physical system which is the physical aspect of that conscious state. (Think here of Spinoza and Donald Davidson's "anomalous monism" – or substance monism/conceptual dualism - in which a single substance has two "modes".) But what does that mean? The ontological/explanatory gap, sure enough, shows its face here just as much as it does anywhere else in the philosophy of consciousness. Isn't this a case of comparing oranges with apples – only a whole lot more extreme?

An additional problem is to explain how the physical modes/aspects of a conscious state must be “constrained” by the properties of that conscious state (or vice versa?). Again, what does that actually mean? In theory it would be easy to find some kind of structural physical correlates of a conscious state. The problem would be to make sense of - and justify - those correlations. For example, I could correlate my wearing black shoes with Bradford City winning away. Clearly, in this instance “correlation doesn't imply causation”. However, if IIT doesn't accept that the physical causes conscious states, but that they are conscious states (or a mode thereof), then, on this example, my black shoes may actually be Bradford City winning away (rather than the shoes causing that win)... Of course shoes and football victories aren't modes/aspects of the same thing. Thus the comparison doesn't work.

It doesn't immediately help, either, when IIT employs (quasi?)-logical terms to explain and account for these different aspects/modes of the same thing. Can we legitimately move from the axioms of a conscious experience to the essential properties (named “postulates”) of the physical modes/aspects of that conscious experience?

Here we're meant to be dealing with the "intrinsic" properties of experience which are then tied to the (intrinsic?) properties of the physical aspects/modes of that experience. Moreover, every single experience is meant to have its own axiom/s.

Nonetheless, if an axiomatic premise alone doesn't deductively entail (or even imply) its postulate, then why call it an “axiom” at all?

Tononi (2015) explains this in terms of "inference to the best explanation" (otherwise called abduction). Here, instead of a strict logical deduction from a phenomenological axiom to a physical postulate, the postulates have (merely) statistical inductive support. Tononi believes that such an abduction shows us that conscious systems have “cause-effect power over themselves”. Clearly, behavioural and neuroscientific evidence may/will show this to be the case.

Conclusion

Sceptically it may be said that the "ontological gap" (or the "hard problem") appears to have been bridged (or even solved) by mere phraseology. What I mean by this is that IIT identifies a conscious state with physical things in the brain. (Namely, the physical elements and dynamics of the brain.) These things are measurable. Thus, if that's the case, then a conscious state is measurable in that the dynamical and physical reality of the brain (at a given time) is measurable. Indeed in IIT it's even said that something called the “phi metric” can “quantify consciousness”.

Is the hard problem of consciousness solved merely through this process of identification?

The IIT theorist may reply: What more do you want?! However, then we can reply: 

Correlations between conscious states and brain states (or even the brain's causal necessitation of a conscious state) aren't themselves explanations of consciousness. 

Indeed isn't the identification of conscious states with the physical and dynamical elements of the brain what philosophers have done for decades? Do IIT's new technical/scientific terms (as well as references to “information”) give us anything fundamentally new in this long-running debate on the nature of consciousness?

*) Next: 'Integrated Information Theory: Structure (3)'

References

Tononi, Giulio (2008) 'Consciousness as Integrated Information: a Provisional Manifesto'.

Friday 17 March 2017

Integrated Information Theory: the Cartesian Foundation (1)



At a prima facie level, integrated information theory (IIT) is utterly Cartesian. Sure, it's Cartesianism in a contemporary scientific guise; though Cartesian nonetheless. This isn't to say that IIT simply reasserts, for example, the Cogito or Descartes' seemingly deductive style of reasoning. Though, despite that, the Cogito etc. can also be said to be resurrected (in contemporary terms) in IIT. However, Descartes moved from (his) mind and journeyed to his body and then to the external world. IIT moves from consciousness to the brain. At least as viewed from one angle. 1

On one hand, IIT inverts many 20th century (Anglo-American) ways of dealing with consciousness in that it's said that it moves from consciousness to arrive at the physical; rather than starting with the physical in order to attempt to arrive at consciousness. It can also be said that IIT moves to the physical only after it's got its Cartesian house in order.

The Cogito, of course, was the starting point of Descartes' enterprise.

On the other hand, what is certainly not Cartesian is that it's also said (or implied) that IIT begins with neuroscience/the brain and then journeys to consciousness. This, of course, directly contradicts what was said in the last paragraph.

In which case, if IIT also sees conscious states (or experiences) as “immediate and direct”, then how can neuroscience come first? This may depend on what's meant by the idea that neuroscience must (or does) come first. Even if a given neuroscientific basis (or reality) were necessary and sufficient for consciousness, that still wouldn't mean (philosophically) that such a reality must come first. Thus coming first or second may not matter. As long as consciousness (its immediacy, directness and phenomenology) and the neuroscience are both seen as being part of the same problem or reality; talk of the primary and the secondary can't be that important.

To get back to Descartes.

Let's take the use of the words “axiom” and “postulate” in IIT to begin with.

This implies a kind of Cartesian deductivism; though, in IIT's case, I find the words a little strained in that the moves from the axioms of consciousness to the postulates of its physical mode (or is it a substrate?) are never, strictly speaking, logical.

IIT's first Cartesian axiom is “the axiom of existence”. This is seen as being “self-evident”. Giulio Tononi describes the first axiom:

“Consciousness is real and undeniable; moreover, a subject’s consciousness has this reality intrinsically; it exists from its own perspective.” [2015]

The only non-Cartesian aspect of the above (as it seems to me) is the claim that consciousness “exists from its own perspective”. Indeed it's hard to work out exactly what that means; at least as it's expressed in this bare form.

In any case, it's clear that the nature of IIT is, again, explicitly Cartesian. Tononi, for example, also says that

“consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely.” [2015]

Isn't this the Cogito written in a more contemporary manner? In other words, that which many 20th century scientists and philosophers have outright denied (or seen as "unscientific") is here at the very beginning of the philosophical enterprise.

Tononi then takes the Cogito in directions not explicitly taken (or written about) by Descartes himself. That is, Tononi says that his

“experience exists from its own intrinsic perspective, independent of external observers (it is intrinsically real or actual)”. [2015]

In this internalist/individualist manner, Tononi speaks of consciousness as being “independent of external observers”.

Functionalism

If one takes this Cartesian approach to consciousness, then one automatically rules out certain alternative theories of mind which have been alive and well in the history of the philosophy of mind (at least in the late 20th century).

For example, IIT rules out functionalism.

Functionalism (or at least functionalism at its most pure) has a notion of mental functions and behaviour which effectively rules out experience; or, at the least, it rules out (or ignores) the phenomenological reality of consciousness.

The major and perhaps most obvious problem with functionalism (at least vis-a-vis consciousness and phenomenology) was best expressed by Christof Koch in 2012. He claimed that much work in the philosophy of mind utilised “models that describe the mind as a number of functional boxes”. That's fair enough as it stands; except for the fact that these boxes are “magically endowed with phenomenal awareness”. Sure, the functional boxes may exist, and they may have much explanatory power (in the philosophy of mind); yet what about such things as Koch's “phenomenal awareness” and the controversial qualia?

One functionalist problem with an entirely Cartesian position on mind (or consciousness) is that it's indeed the case that consciousness seems to be direct and immediate. However, to some functionalists, this is only a “seeming”. (Daniel Dennett would probably call it an illusion.) In other words, simply because a conscious experience (or state) appears to us to be direct and immediate, that doesn't automatically mean that it is direct and immediate.

According to some functionalisms, even this immediate and direct phenomenology doesn't need to go beyond functionality (or mental functions). In other words, it's still all (or primarily) about functions. More clearly, this sense of immediacy and directness is itself a question of mental functions. In this case, the mental function which is our belief - and our disposition to believe - that consciousness is immediate and direct!

That belief also needs to be accounted for in functionalist terms. (In the way that the immediacy and directness of an experience itself may require a functionalist explanation.) That is, why is it the case that an experience seems direct and immediate to us? What function does that belief (or experience of an experience) serve?

A conscious state (or experience) may seem to be direct and immediate simply because we believe that it's direct and immediate. Moreover, we also have a long-running disposition to believe that it's direct and immediate. Or, again, the sense that an experience (or a mental state) is direct and immediate (or the experience that an experience is direct and immediate) doesn't automatically mean that it is. 

Doesn't this position leave out the phenomenological factors (or reality) of an experience (or conscious state) which are above and beyond their being direct and immediate? On the one hand, there's the phenomenological reality of an experience. And, on the other hand, there's its apparent (or real) directness and immediacy. The two aren't the same even if they always occur together.

*****************************************

Note

1 It can be said, in retrospect, that it would have been more accurate for Descartes to have said that he started with consciousness rather than with the “existence of the self” (or the “I think”). After all, the self/I is much more of a postulation than brute consciousness.

References

Koch, Christof (2012). Consciousness: Confessions of a Romantic Reductionist, MIT Press.
Tononi, Giulio (2015), Scholarpedia.




Friday 30 December 2016

Post-analytic Philosophy?


i) Introduction
ii) Objective Truth?
iii) Philosophy Must Be Political?
iv) Richard Rorty

The term 'post-analytic philosophy' was first used in the mid-1980s. At that time it referred to those philosophers who were indebted to analytic philosophy, but who, nonetheless, believed that they'd moved on from it (for whatever reasons).

The term seems, prima facie, odd. After all, how can philosophers be 'post'- or anti-analysis? Surely even most examples of post-analytic philosophy will contain analyses of sorts. (This isn't necessarily to say that philosophy must consist entirely of analysis.)

Thus, the term must instead refer to the tradition (in a broad sense) of analytic philosophy. But which aspects of that tradition? Which particular philosophers? Did all analytic philosophers have a philosophical essence in common? And let's not forget that philosophical analysis occurred well before the analytic tradition got under way. (What is it that Hume, Hobbes, Aquinas, etc. did if it wasn't - at least in part - analysis?)

The above are all problems which, to some extent, subside once the history and use of the term 'post-analytic philosophy' is studied.

However, it is indeed analysis that some philosophers seem to have a problem with. Or, rather, perhaps it's more accurate to say 'philosophical analysis' rather than the simple 'analysis'. This is obviously the case because the words 'philosophical analysis' are more particular than 'analysis' and it may/will contain assumptions as to what philosophical analysis actually is.

Objective Truth?

If we want to put meat on what post-analytic philosophers see to be the problem (or simply a problem) with analytic philosophy, it's best to consult late-20th century and contemporary American pragmatism. This school is itself seen as being part of the post-analytic movement (which isn't, however, a determinate or real school).

Many would say that such American pragmatists have a problem with the very notions of objective truth, realism and representationalism. These are things they see as being an idée fixe throughout the history of philosophy. And this, indeed, is no less the case when it came to 20th-century analytic philosophy.

A personal objection to this is that I've hardly read a single analytic philosopher mention - or use - the words “objective truth”. (I have read, however, Peter van Inwagen's 'Objectivity'.) Then again, it can easily be countered that a philosopher needn't use the actual words “objective truth” in order for him to be committed to the notion of objective truth. In other words, perhaps he simply calls it by another name.1

In any case, the position that objective truth doesn't exist (or that it's not a worthy aim in philosophy) goes alongside a stress on the contingency of cognitive activity, the importance of convention and utility, and, indeed, the idea that human (or social) progress can never be ignored – not even in philosophy. Nonetheless, here again I don't see how there's an automatic (or prior) problem with accepting all this and still engaging in analytic philosophy (or in philosophical analysis).

For example and in very basic terms, one could offer a philosophical analysis of philosophical analysis (or some part thereof). And then, as a result, see philosophical problems with such philosophical analysis. Despite that, such a philosopher would still be in the domain of analytic philosophy (or of philosophical analysis).

Strangely enough, Richard Rorty seems to agree with this position. Or, at the least, he says something similar. In an interview conducted by Wayne Hudson and Win van Reijen, Rorty states:

"I think that analytic philosophy can keep its highly professional methods, the insistence on detail and mechanics, and just drop its transcendental project. I'm not out to criticize analytic philosophy as a style. It's a good style. I think the years of superprofessionalism were beneficial." 

I said the position is “similar” to the one advanced by Rorty. It's similar in the sense that an analytic philosopher needn't “drop [his] transcendental project”. That is, an analytic philosopher may be fully aware of Rorty's positions/arguments (or the general positions of post-analytic philosophers) and still be committed to the transcendental project. (Of course we'd need to know what Rorty means by the words “transcendental project”.)

Philosophy Must be Political?

It seems that the position of many post-analytic philosophers is primarily political - or at least primarily social – in nature. Hilary Putnam (1985), for example, has said that analytic philosophy has “come to the end of its own project—the dead end”. That can be taken to mean that philosophy should connect itself more thoroughly with other academic disciplines. Or, more broadly, that analytic philosophy should connect itself with culture or society as a whole.

The problem is that, on and off, analytic philosophy has already connected itself to many other disciplines. (Admittedly, that's been more the case since the 1980s and the rise of cognitive science.) To give just a couple of examples: the logical positivists connected themselves to science (or at least to physics). And philosophers in the 19th century connected themselves to logic, mathematics and, again, to science. Indeed, this unostentatious “interdisciplinary” aspect of philosophy has been evident throughout its entire history.

One can also say that philosophy can connect itself to other disciplines - and even culture as a whole - and still remain analytic philosophy. Philosophers can still practice philosophical analysis. (This, again, raises the question as to what analytic philosophy - or philosophical analysis - actually is.)

A philosopher may also ask why he should connect himself to other disciplines - never mind to something as vague (or as broad) as culture. In other words, a philosopher must have philosophical reasons as to why this would be a good thing, just as a philosopher must have philosophical reasons as to why it's a bad thing. That means that there'll be philosophical angles to this very debate. However, it can be added, those angles needn't always be philosophical in nature.

Another slant on this philosophy-society "binary opposition" is the argument that analytic philosophy is too professional and therefore too narrow. In other words, analytic philosophers are over-concerned with tiny, narrow and specialised problems which have almost zero connection to society as a whole - or indeed to anything else.

More technically and philosophically, it can also be argued that certain central commitments and assumptions of analytic philosophy have been shown to be indefensible. (Hence Putnam's own words quoted earlier.)

Yet all disciplines can be said to be concerned with narrow or specialised issues. Nonetheless, this is an accusation more often aimed at analytic philosophy than at any other academic subject.

Richard Rorty

Richard Rorty appears to be talking about analytic philosophy as it was in the past (say, the 1950s to the 1970s), not as it is today or as it has been since, say, the 1980s.

Take the view that analytic philosophy has as its primary aim a form of knowledge which grounds all other forms of knowledge. This is odd. It's true that much traditional philosophy has placed various philosophical domains in the position of what used to be called First Philosophy. (It was once metaphysics, then epistemology, then philosophy of language, then philosophy of mind...) However, in the 20th century this has been far from the case. Indeed philosophers - throughout the 20th century - have argued against the very idea of a first philosophy.

Take naturalists (e.g., the logical positivists, then Quine): arguably, they placed science (or simply physics) in the role of first philosophy. (Although such naturalists saw physics as being primary, that isn't in itself a commitment to also seeing it as some kind of first philosophy.)

It must be said that just as Rorty's post-philosophy is a philosophical position, so too is the Wittgensteinian attempt to “dissolve” and then disregard philosophical problems (if not philosophy itself). This position can be said to be held by Putnam and John McDowell, as well as by Rorty. (A more specific example of this would be the “problem” of how mind and language are connected to the world.)2

In any case, there's just as strong a case for arguing that Rorty's later position was more a case of post-philosophy than post-analytic philosophy. In other words, like Heidegger and Derrida, Rorty had a problem with the whole damn show that is Western philosophy. And, here again, it can be argued that Rorty's position was more political (or social) than strictly philosophical. That said, a position that rejects philosophy in toto can't help being philosophical - in some or many ways - itself, as Rorty would have no doubt happily admitted. (Jacques Derrida did admit this.)

Notes

1) If you say that an argument (or a single statement) is “warranted and therefore assertible”, then is that a case of being wedded to the notion of objective truth? Or is the notion of warranted assertibility a different species entirely?


2) I put the word “problem” in scare quotes because the very stance of seeing such problems as problems means - according to Rorty, Derrida, Heidegger, etc. - that we've fallen prey to a particular philosophical “style of thinking”. However, that position too would be a philosophical position.