Friday, 21 April 2017

Integrated Information Theory: Information (4)



The word 'information' has many different uses, some of which differ strongly from its everyday sense. Indeed, we can use the words of Claude E. Shannon to back this up:

"It is hardly to be expected that a single concept of information would satisfactorily account for the numerous possible applications of this general field." [1949]

The most important point to realise is that minds (or observers) are usually thought to be required to make information information. However, information is also said to exist without minds/observers. It existed before minds and it will exist after minds. This, of course, raises lots of philosophical and semantic questions.

It may help to compare information with knowledge. The latter requires a person, mind or observer. The former, as just stated, doesn't.

Integrated information theory's use of the word 'information' receives much support in contemporary physics. This support includes how such things as particles and fields are seen in informational terms. As for thermodynamics: if there's an event which affects a dynamic system, then that too can be read as information.

Indeed, on the view called pancomputationalism, (just about) anything can be deemed to be information. In such cases, that information could be represented and modelled as a computational system.

Consciousness as Integrated Information

It's undoubtedly the case that Giulio Tononi believes that consciousness simply is information. Thus, if that's an identity statement, then we can invert it and say that information can be conscious(ness). In other words, consciousness/experience = information.

Consciousness doesn't equal just any kind of information; though any kind of information (embodied in a system) may be conscious to some extent.

Tononi believes that an informational system can be divided into its parts. Its parts contain information individually. The whole of the system also has information. The information of the whole system is over and above the combined information of its parts. That means that such extra information (of that informational system) must emerge from the information contained in its parts. This, then, is surely a commitment to at least some kind of emergentism.

The mathematical measure of that information (in an informational system) is φ (phi). Not only is the system more than its parts: that system also has degrees of informational integration. The higher the informational integration, the more likely that informational system will be conscious. Or, alternatively, the higher the degree of integration, the higher the degree of consciousness.
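
Tononi's actual φ calculus is far too involved to reproduce here (it works over cause-effect repertoires and searches for a "minimum information partition"). But the whole-versus-parts idea can be illustrated with the simpler integration measure from Tononi's earlier work: the sum of the parts' entropies minus the entropy of the whole, which is zero exactly when the parts are statistically independent. Here's a minimal Python sketch; the function names and the toy state samples are mine:

```python
from collections import Counter
import math

def entropy(states, idx=None):
    """Shannon entropy (in bits) of a list of observed system states,
    or of the sub-system picked out by the tuple of indices `idx`."""
    if idx is not None:
        states = [tuple(s[i] for i in idx) for s in states]
    n = len(states)
    return -sum((c / n) * math.log2(c / n) for c in Counter(states).values())

def integration(states):
    """Toy integration (multi-information): the sum of the parts'
    entropies minus the entropy of the whole. Zero iff the parts
    are statistically independent of one another."""
    k = len(states[0])
    return sum(entropy(states, (i,)) for i in range(k)) - entropy(states)

# Two coupled units (the second bit copies the first): 1 bit of integration.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Two independent units: zero integration.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(integration(coupled))      # -> 1.0
print(integration(independent))  # -> 0.0
```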

Emergence from Brain Parts?

Again, we can argue that the IIT position on what it calls "phi" is a commitment to some form of emergence, in that an (informational) system is, according to Christof Koch, "more than the sum of its parts". This is what he calls "synergy". Nonetheless, a system can be more than the sum of its parts without any commitment to strong emergence. After all, if four matches are shaped into a square, then that's more than a collection of matches; though it's not more than the sum of its parts. (Four matches scattered on the floor wouldn't constitute a square.) However, emergentists have traditionally believed that consciousness is more than the sum of its (i.e., the brain's?) parts. Indeed, in a strong sense, it can even be said that consciousness itself has no parts. Unlike water and its parts (individual H2O molecules), consciousness is over and above what gives rise to it (whatever that is). It's been seen as a truly emergent phenomenon. Water isn't, strictly speaking, strongly emergent from H2O molecules: it simply is a large collection of H2O molecules. (Water = H2O molecules.) Having said that, in a sense water does weakly emerge from a large collection of H2O molecules.

The idea of the whole being more than the sum of its parts has been given concrete form in the example of the brain and its parts. IIT tells us that individual neurons, ganglia, the amygdala, the visual cortex, etc. each have "non-zero phi". This means that, taken individually, they're all (tiny) spaces of consciousness unto themselves. However, if you lump all these parts together (which is obviously the case with the human brain), then the entire brain has more phi than each of its parts individually, as well as more phi than all of its parts taken collectively. Moreover, the brain as a whole takes over (or "excludes") the phi of the parts. Thus the brain, as we know, works as a unit; even if there are parts with their own specific roles (not to mention the philosopher's "modules").
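
Continuing the sketch above (and reusing its entropy and integration helpers), the exclusion idea can be cartooned as a search for the "main complex": enumerate candidate subsystems, score each one, and let only the maximally integrated subset count. This is only an illustration under that toy measure; real IIT computes φ over partitions and has its own normalisation and tie-breaking rules:

```python
from itertools import combinations

def main_complex(states):
    """Return the subset of units with the highest toy integration.
    IIT's exclusion postulate says only this maximal subset (the
    'main complex') counts; ties go to the first subset found."""
    k = len(states[0])
    best, best_phi = None, -1.0
    for size in range(2, k + 1):
        for subset in combinations(range(k), size):
            sub = [tuple(s[i] for i in subset) for s in states]
            phi = integration(sub)
            if phi > best_phi:
                best, best_phi = subset, phi
    return best, best_phi

# Three units: 0 and 1 are coupled, while 2 varies independently.
states = [(0, 0, 0), (1, 1, 0), (0, 0, 1), (1, 1, 1)]
print(main_complex(states))  # -> ((0, 1), 1.0): the coupled pair wins
```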

Causation and Information

Information is both causal and structural.

Say that we've a given structure (or pattern) X. That X has a causal effect on structure (or pattern) Y. Clearly X's effect on Y can occur without minds. (At least if you're not an idealist or an extreme anti-realist/verificationist.)

Instead of talking about X and Y, let's give a concrete example instead.

Take the pattern (or structure) of a sample of DNA. That DNA sample causally brings about the development (in particular ways) of the physical nature of a particular organism (in conjunction with the environment, etc.). This would occur regardless of observers. That sample of DNA contains (or is!) information. The DNA's information causally brings about physical changes; which, in some cases, can themselves be seen as information.

Some commentators also use the word "representation" within this context. Here information is deemed to be "potential representation". Clearly, then, representations are representations to minds or observers; even if the information - which will become a representation - isn't so. Such examples of information aren't designed at all (except, as it were, by nature). In addition, just as information can become a representation, so it can also become knowledge. It can be said that although a representation of information may be enriched with concepts and cognitive activity, this is much more the case with information in the guise of knowledge.

Panpsychism?

The problem with arguing that consciousness is information is that information is everywhere and even basic objects (or systems) have a degree of information. Therefore such basic things (or systems) must also have a degree of consciousness. Or, in IIT speak, all such things (systems) have a “φ value”; which is the measure of the degree of information (therefore consciousness) in the system. Thus David Chalmers' thermostat [1997] will have a degree of consciousness (or, for Chalmers, proto-experience).

It's here that we enter the territory of panpsychism. Not surprisingly, Tononi is happy with panpsychism; even if it's not identical to, say, Chalmers' panprotopsychism.

Scott Aaronson, for one, states one problem with the consciousness-is-everywhere idea in the following quotation:

“[IIT] unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly ‘conscious’ at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are ‘slightly’ conscious (which would be fine), but that they can be unboundedly more conscious than humans are.”

Here again it probably needs to be stated that if consciousness = information (or that information – sometimes? - equals consciousness), then consciousness will indeed be everywhere.

***************************************

Add-on: John Searle on Information


How can information be information without minds or observers?

John Searle denies that there can be information without minds/observers. Perhaps this is simply a semantic dispute. After all, the things which pass for information certainly exist and they've been studied - in great detail! - from an informational point of view. Sure, they don't pass the tests Searle sets out below; though that may not matter very much.

Take, specifically, Searle's position as it was expressed in a 2013 review (in The New York Review of Books) of Christof Koch’s book Consciousness. In that piece Searle complained that IIT depends on a misappropriation of the concept [information]:

“[Koch] is not saying that information causes consciousness; he is saying that certain information just is consciousness, and because information is everywhere, consciousness is everywhere. I think that if you analyze this carefully, you will see that the view is incoherent. Consciousness is independent of an observer. I am conscious no matter what anybody thinks. But information is typically relative to observers...

...These sentences, for example, make sense only relative to our capacity to interpret them. So you can’t explain consciousness by saying it consists of information, because information exists only relative to consciousness.” [2013]

If information is the propagation of cause and effect within a given system, then John Searle's position must be wrong. Searle may say, then, that such a thing isn't information until it becomes information in a mind or according to observers. (Incidentally, there may be anti-realist problems with positing systems which are completely free of minds.)

Searle argues that causes and effects - and the system to which they belong - don't constitute information independently of minds. However, that doesn't stop such causes and effects from becoming information (to minds) through direct observation of the system.

Anthropomorphically, the system communicates to minds. Or minds read the system's message.

Searle's position on information can actually be said to be a position on what's called Shannon information. This kind of information is "observer-relative information". In other words, it doesn't exist as information until an observer takes it as information. Thus when a digital camera takes a picture of a cat, each photodiode works in causal isolation from the other photodiodes. In other words, unlike the bits of consciousness, the bits of a photograph (before it's viewed) aren't integrated. Only when a mind perceives that photo are the bits integrated.
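
The digital camera is Tononi's own favourite illustration of this point: lots of information, zero integration. The toy measure from earlier makes it concrete (reusing the entropy and integration helpers defined above; the frames are simulated toy data, and the empirical estimate will sit slightly above zero because of finite sampling):

```python
import random

# A toy "camera sensor": n photodiodes, each flipping independently.
random.seed(0)
n, samples = 8, 20000
frames = [tuple(random.randint(0, 1) for _ in range(n))
          for _ in range(samples)]

# Plenty of information: the whole sensor carries ~n bits per frame...
print(entropy(frames))      # ~8.0 bits

# ...but no integration: the diodes are statistically independent,
# so the whole is nothing over and above the sum of its parts.
print(integration(frames))  # ~0.0 bits (small positive sampling bias)
```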

IIT, therefore, has a notion of “intrinsic information”.

Take the brain's neurons. Such things do communicate with each other in terms of causes and effects. (Unlike photodiodes?) It's said that the brain's information isn't observer-relative. Does this contradict Searle's position? Not quite: IIT is talking about consciousness as information that isn't relative to external observers, though it is (in a sense) relative to the brain and to consciousness itself.

There's an interesting analogy here which was also cited by Searle. In his arguments against Strong Artificial Intelligence (strong AI) and the mind-as-computer idea, he basically states that computers – like information - are everywhere. He writes:

“... the window in front of me is a very simple computer. Window open = 1, window closed = 0. That is, if we accept Turing’s definition according to which anything to which you can assign a 0 and a 1 is a computer, then the window is a simple and trivial computer.” [1997]

Clearly, in these senses, an open and shut window also contains information. Perhaps it couldn't be deemed a computer if the window's two positions didn't also contain information. Thus, just as the window is only a computer to minds/observers, so too is that window's information only information to minds/observers. The window, in Searle speak, is an as-if computer which contains as-if information. And so too are Chalmers' thermostat and Koch's photodiode.
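
Taking the Turing-style definition Searle alludes to literally, the window really can be written down as a one-bit "computer". The point, though, is Searle's: nothing in the physics makes this a computation; the 0/1 assignment is entirely observer-relative. (A playful sketch; the names are mine:)

```python
from enum import Enum

class Window(Enum):
    CLOSED = 0  # window closed = 0
    OPEN = 1    # window open = 1

def toggle(state: Window) -> Window:
    """The window's entire 'instruction set': NOT on a single bit."""
    return Window(1 - state.value)

print(toggle(Window.OPEN))  # -> Window.CLOSED
```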

Here's Searle again:

"I say about my thermostat that it perceives changes in the temperature; I say of my carburettor that it knows when to enrich the mixture; and I say of my computer that its memory is bigger than the memory of the computer I had last year."

Another way of looking at thermostats and computers is that we can take an "intentional stance" (to borrow Daniel Dennett's term) towards them. We can treat them - or take them - as intentional (though inanimate) objects. Or we can take them as as-if intentional objects.

The as-if-ness of windows, thermostats and computers is derived from the fact that these inanimate objects have been designed to perceive, know and memorise. Though this is only as-if perception, as-if knowledge, and as-if memory. Indeed it is only as-if information. Such things are dependent on human perception, human knowledge, and human memory. Perception, knowledge and memory require real - or intrinsic - intentionality, not as-if intentionality. Thermostats, windows, and computers have a degree of as-if intentionality, derived from (our) intrinsic intentionality. However, despite all these qualifications of as-if intentionality, as-if intentionality is still ‘real’ intentionality; though derived from actual intentionality.

References

Searle, John (1997) The Mystery of Consciousness.
Tononi, Giulio (2015) 'Integrated Information Theory' (Scholarpedia).

*) Next: 'Integrated Information Theory: Panpsychism' (5)


Saturday, 8 April 2017

Integrated Information Theory: Is Consciousness Structured? (3)


Giulio Tononi

I first came across references to "structure" (within an analysis of consciousness) in a work by David Chalmers. In his 'Facing Up to the Problem of Consciousness' (1995), Chalmers writes, for example, that "we can use facts about neural processing of visual information to indirectly explain the structure of colour space". (Note the word "indirectly", which, from what follows in this piece, proves to be important.) He then says that "the structure of experience will also be explained" (634). Nonetheless, Chalmers also says:

“There are properties of experience, such as the intrinsic nature of a sensation of red, that cannot be fully captured in a structural description” (633).

Prima facie, it seems odd to say, as Giulio Tononi and Chalmers do, that “[c]onsciousness is structured”. It intuitively seems like a Rylean “category mistake”. That is, didn't William James and Wilfrid Sellars (as well as others) say that consciousness is “grainless”? (James also referred to the “stream of consciousness”, which is a related idea.)

In simple terms, Giulio Tononi tells us that “consciousness has composition”. That means that consciousness (or a particular experience) is composed of different things. What things? Tononi says that these things include “color and shape”. These components “structure visual experience”. Indeed it's this structure which “allows for [the] various distinctions” we'll make later in this piece.

What is an Experience or Mental State?

Does Tononi try to make things a little too neat and tidy, especially since he's putatively attempting a (new) kind of phenomenology, even if that phenomenology is (eventually?) anchored to neuroscience?

What is a single experience anyway? Is there such a thing? (The same goes for a single mental state.) If it's difficult to think in terms of a single experience, then Tononi's “axioms” of each experience are, by definition, problematic. Can we say, as Tononi does, that “each experience is irreducible to non-interdependent, disjoint subsets of phenomenal distinctions”? Do we have this “integration”, as Tononi calls it?

Similarly, what is “a whole visual scene”? Where are its boundaries? Does it have boundaries? Why should it have boundaries? Tononi justifies his belief in an integrated single experience (or “visual scene”) when he says that he experiences “not the left side of the visual field independent of the right side (and vice versa)”. It's difficult to understand what that means. Why can't one experience the left side of a visual scene and then the right side of that visual scene? If such a visual scene is defined as including both a left side and a right side then, by definition, that experience (or scene) must include both sides. Wouldn't this be a question of definition and not one of phenomenology? The scene may have both a left and a right side; though why must the experience (or visual scene) be the same as the description of that scene? In more detail, a table has a right side and a left side; though must the experience itself (of that table) also include a left and a right side?

This may all depend on how cognitively enriched an experience is taken to be. On a Kantian reading, experiences are cognitively enriched with concepts/categories and cognitive activity. They're more than mere “sense impressions”, “sense-data” or “sensory information” (depending on the jargon one chooses). Strictly, one applies the concept [table] to a table. In that sense, perhaps one also applies [table legs], [table top], etc. to a table. Either way, that experience is cognitively enriched.

An Experience of a Blue Book


Tononi says that “each experience is composed of multiple phenomenological distinctions, elementary or higher-order”. What does that mean? Tononi writes:

“For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on.”

This is tricky. What exactly do we do when we “distinguish a book, a blue color, a blue book, the left side, a blue book on the left”? Distinguishing x from y (or even x in itself) is surely a cognitive act. (An act of mental will, as it were.) Do we make these discrete distinctions in experience (or in every experience)? Do we distinguish at all?

We have an experience of a blue book "which is on the left". However, that's not the same as saying that these elements are cognitively distinguished as being separate from one another. In a sense we have a solid and grainless experience of a blue book, which is indeed on the left. We don't (necessarily) distinguish the blueness of the book and its being on the left side. We could do so. However, a pure or simple experience of a blue book on the left side doesn't demand or require any cognitive act of distinguishing x from y or a blue book from its surroundings.

Tononi makes a similar case - one of a specific experience (or visual scene) not being capable of being reduced to its individual components - when he talks about "seeing a blue book". This too is "irreducible" to "seeing a book without the color blue, plus the color blue without the book". At first this is difficult to decipher and a little strange. John Horgan expresses this position when he says that "[e]xclusion [Tononi's technical term] helps explain why we don’t experience consciousness as a jumble of mini-sensations" [2015].

Sure, when we experience a blue book, we don't – cognitively! - separate the blueness of the book from the book itself. The blue book is, therefore, a single package; or, in Tononi's word, “irreducible”. What must we conclude from this? We can cognitively distinguish the book's blueness from the book itself. (We can also distinguish the front cover from the book itself.) Nonetheless, Tononi is saying that the phenomenological experience (or visual scene) isn't like that. Thus even though Kantian experience (or perception) can be – or is – cognitively enriched, that's not the same as saying that the book's blueness is distinguished from the book itself in a (Kantian) experience or perception of a blue book. That distinction can be made, of course, though it's not part of the experience of a blue book itself. The distinction, if it's made, will come later (if it follows at all).

Despite this seeming agreement with Tononi, what must we conclude from all this? Specifically, what is the relation of the blue-book experience (with its axioms) to its "physical postulates"? How does the purity and grainlessness of that experience take us to Tononi's physical postulates?

An Experience of a Blue Book in a Bookcase

Prima facie, Tononi seems to be making the same points about what he calls “exclusion” as he made earlier about another technical term of his - “integration”. This time, instead of blueness and a book making a single package, he now says that when we experience a bookcase which has a blue book within it, we don't make an experiential distinction between that bookcase and that blue book.

Tononi also emphasises the "distinction blue/not blue". More precisely, in this experience of a bookcase (rather than a blue book), that experience is "one of which is a blue book, but I am not having an experience with less content—say, one lacking the phenomenal distinction blue/not blue, or colored/not colored". This use of positives and their negations is puzzling. Tononi appears to be saying that part of this experience of a bookcase (with a blue book) includes "the phenomenal distinction blue/not blue, or colored/not colored". That is, this experience doesn't "lack[]" that precise "phenomenal distinction". Yet earlier it seemed that basic experience didn't have any phenomenal distinctions - it was grainless. Making distinctions between blue and not blue, and between coloured and not coloured, is surely an act of cognition; and such acts were seemingly excluded when Tononi talked about "integration".

It can of course be said that an experience which makes “the phenomenal distinction blue/not blue” isn't cognitive in nature. More precisely, that phenomenal distinction between blue and not blue isn't cognitive at all – it's “phenomenal” or experiential. I simply see or experience that the aforesaid book is blue, not not-blue. That may mean that the phenomenal reality is pure or non-cognitive; though the analysis of this experience (which includes a distinction between blue and not blue) is evidently impure – it's cognitive in nature. How could it not be? Tononi is breaking down the experience of a blue book in a bookcase into its constituents after the fact (as it were). This isn't done during the actual experience; only after.

An Experience of the Word 'Because'

There are other justifications of Tononi's position on “integration”.

He cites the example of "the experience of seeing the word 'BECAUSE' written in the middle of a blank page". His phenomenological analysis of this (if it is phenomenological) is that this experience of the word 'BECAUSE' is "irreducible to an experience of seeing 'BE' on the left plus an experience of seeing 'CAUSE' on the right". That may happen; though surely it's not definitive of an experience of the word 'BECAUSE' in the middle of a page. Admittedly, we don't ordinarily distinguish the 'BE' from the 'CAUSE'. Perhaps if we did, we'd also distinguish the 'B' from the 'E', the 'E' from the 'C' and so on.

Even if we don't distinguish the 'BE' from the 'CAUSE' – what is that fact meant to tell us? Unless it tells us something about right- and left-eye vision working together. That is, the left eye (or the right-side of the brain) distinguishes the grapheme 'BE', and the right eye (or the left-side of the brain) distinguishes the grapheme 'CAUSE' at one and the same time. Perhaps Tononi's analysis is proved to be true by the neuroscience or neurophysiology that's responsible for such grainlessness. But would that be an analysis of a phenomenological experience of the word 'BECAUSE'?

The Speed and Length of an Experience

The axiom of exclusion includes some pretty bizarre properties. Yes, they are properties which exclude other properties. Thus if "my experience flows at a particular speed", then it can't flow at another speed. Tononi even cites a speed (or, at the least, a possible speed) when he says that "each experience encompassing say a hundred milliseconds or so". Is Tononi mixing up (or fusing) speed and length here? For example, say that one experiences a bus moving at 40 miles-an-hour for two minutes. Here the speed and the length of time that speed is maintained are two different things; though, in this case, they are related to one another.

In any case, the axiom of exclusion tells us that this particular experience is not minutes or hours long. Though it seems clear that a particular experience can be minutes- or hours-long. (Such as Sting's 24-hour orgasm?)

How are these speeds measured? I doubt that they're measured phenomenologically, as it were. Thus do neuroscientists, psychologists or cognitive scientists measure them? If so, how do they do so? And if the neuroscientist (or psychologist/cognitive scientist) has to tell us the speed and length of an experience, then surely these aren't phenomenological data. Yet Tononi (or IIT) is supposed to move from the phenomenology (or the axioms of consciousness) to the physical postulates. Of course it may be the case that Tononi's IIT fuses phenomenological data with neuroscientific data at such points. Though even if that's the case, it can't be (strictly speaking) a journey from consciousness (at time t) to the physical (at time t1).

The Binding Problem

We've all heard of the “binding problem”.

Tononi claims (or does he?) that he moves from consciousness to the physical when dealing with the structure of consciousness. Thus in terms of structure, this concerns, for example, the fact that a “simple experience like viewing a cue ball unites different elements such as color, shape, and size”. (We can also call this “the unity of conscious experience”.) Thus “any theory of consciousness will need to make sense of how this happens”. True; though surely only neurophysiology can answer this question, not a phenomenological analysis of consciousness (or the experiencing of a cue ball).

How can the binding problem be solved in an a priori manner? This is surely the way a phenomenological analysis would need to proceed. Yet we're meant (in IIT) to move from the axioms of consciousness, and the structure of experience, to the physical postulates. It seems, however, that when it comes to the binding problem, we would need to move in the opposite direction – from neurophysiology to consciousness.

However, I say elsewhere (in my 'Integrated Information Theory: From Consciousness to the Brain') that the direction of the arrow may not matter that much - or at all. It may not matter if we move from consciousness to the physical or from the physical to consciousness. What matters is that both consciousness (specifically phenomenology) and the physical (or the brain) are included in the analysis. Though that would mean that we don't have a pure phenomenology here. Perhaps that doesn't matter either. Tononi, after all, doesn't claim to be Edmund Husserl or even a phenomenologist; just as he doesn't claim to be a Cartesian when he talks about the given nature of the axioms of consciousness. What he does say is that both phenomenological analysis and an acceptance of the Cartesian givens (i.e., axioms) - not only neuroscience - are important to any theory of consciousness. Some would say that this is evidently so!

**************************************
Note

1) Giulio Tononi's prose style doesn't help us here. It's highly technical and, well, a little lifeless. (At least in the pieces I've relied on.) He doesn't make much of an effort to simplify what it is he's saying. Perhaps that's not required of academic works.

He doesn't seem to offer arguments either. Instead, he makes statements. Sure, arguments have led to his statements; though where does that leave the layman? The same is true of Tononi when seen on video or giving a seminar. He makes lots of statements and does very little philosophy. Not only that: he seems a little too confident for his own good.

The superb science writer, John Horgan, also takes this position. Horgan says that “[o]ne challenge posed by IIT is obscurity”. Indeed, according to Horgan, Tononi “acknowledged that IIT takes a while to 'seep in'”. Thus he concludes that “[p]opular accounts usually leave me wondering what I’m missing”. That doesn't seem to be correctly articulated, however. That is, the academic prose on IIT is obscure; though the popular accounts “leave [us] wondering what [we're] missing”. That's not the same thing.

References

Horgan, John (2015) 'Can Integrated Information Theory Explain Consciousness?' (Scientific American).
Tononi, Giulio (2015) 'Integrated Information Theory' (Scholarpedia).


*) Next: 'Integrated Information Theory: Information' (4)


Monday, 27 March 2017

Integrated Information Theory: From Consciousness to the Brain (2)



Integrated Information Theory (IIT) demands a physical explanation of consciousness. This rules out, for example, entirely functional explanations; as well as unwarranted correlations between consciousness and the physical. Indeed if consciousness is identical to the physical (not merely correlated with it or caused by it), then clearly the physical (as information, etc.) is paramount in the IIT picture.

All this is given a quasi-logical explanation in terms of axioms and postulates. That is, there must be identity claims between IIT's "axioms of consciousness" and postulates about the physical. Moreover, the axioms fulfill the role of premises. These premises lead to the physical postulates.

So what is the nature of that relation between an axiom and its postulate? How do we connect, for example, the conscious state with the neuroscientific explanation of that conscious state? How is the ontological/explanatory gap crossed?

As hinted at earlier, the identity of consciousness and the physical isn't a question of the latter causing or bringing about the former. Thus, if x and y are identical, then x cannot cause y and y cannot cause x. These identities stretch even as far as phenomenology in that the phenomenology of consciousness at time t is identical with the physical properties described by the postulates at time t.

More technically, Giulio Tononi (2008) identifies conscious states with integrated information. Moreover, when information is integrated (by whichever physical system – not only the brain) in a complicated enough manner (even if minimally complicated), that will be both necessary and sufficient to constitute (not cause or create) a conscious state or experience.

Explaining IIT's identity-claims (between the axioms of consciousness and the physical postulates) can also be done by stating what Tononi does not believe about consciousness. Tononi doesn't believe that

i) the brain's physical features (described by the postulates) cause or bring about consciousness.
ii) the brain's physical features (described by the postulates) are both necessary and sufficient for consciousness.

Causality

Where we have the physical, we must also have the causal. And indeed IIT stresses causality. If consciousness exists (as the first axiom states), then it must be causal in nature. It must “make a causal difference”. Thus epiphenomenalism, for one, is ruled out.

Again, consciousness itself must have causal power. Therefore this isn't a picture of the physical brain causing consciousness or even subserving consciousness. It is said, in IIT, that "consciousness exists from its own perspective". This means that a conscious state qua conscious state (or experience qua experience) must have causal power both on itself and on its exterior. Indeed the first axiom (of existence) and its postulate require that a conscious state has what's called a "cause-effect power". That is, it must be capable of having an effect on behaviour or actions (such as picking something up) as well as a "power over itself". (Such as resulting in a modification of a belief caused by that conscious state?) This, as stated earlier, clearly rules out any form of epiphenomenalism.

Now does this mean that a belief has causal powers (as such)? Does this mean that the experience of yellow has - or could have - causal powers? Perhaps because beliefs aren't entirely phenomenological, and spend most of their time in the "belief box" (according to non-eliminative accounts), they aren't a good candidate for having causal powers in this phenomenological sense. However, the experience of yellow is a causal power if it can cause a subject to pick up, say, a lemon (qua lemon).

From Consciousness to Brain Again

Even if IIT starts with consciousness, it's very hard, intuitively, to see how it would be at all possible to move to the postulated physical aspects (not bases or causes) of a conscious state. How would that work? How, even in principle, can we move from consciousness (or phenomenology) to the physical aspects of that conscious state? If there's an ontological/explanatory gap between the physical and the mental, then there'll be an ontological/explanatory gap between consciousness and the physical. (There'll also be epistemological gaps.) So how does this IIT inversion solve any of these problems?

The trick is supposed to be pulled off by an analysis of the phenomenology of a conscious state (or experience), and then by accounting for that with the parallel states of the physical system which is the physical aspect of that conscious state. (Think here of Spinoza, or of Donald Davidson's "anomalous monism" - or substance monism/conceptual dualism - in which a single substance has two "modes".) But what does that mean? The ontological/explanatory gap, sure enough, shows its face here just as much as it does anywhere else in the philosophy of consciousness. Isn't this a case of comparing oranges with apples - only a whole lot more extreme?

An additional problem is to explain how the physical modes/aspects of a conscious state must be "constrained" by the properties of that conscious state (or vice versa?). Again, what does that actually mean? In theory it would be easy to find some kind of structural physical correlates of a conscious state. The problem would be to make sense of - and justify - those correlations. For example, I could correlate my wearing black shoes with Bradford City winning away. Clearly, in this instance "correlation doesn't imply causation". However, if IIT doesn't accept that the physical causes conscious states, but rather that the physical is conscious states (or a mode thereof), then, on this example, my black shoes may actually be Bradford City winning away (rather than the shoes causing that win)... Of course shoes and football victories aren't modes/aspects of the same thing. Thus the comparison doesn't work.

It doesn't immediately help, either, when IIT employs (quasi?)-logical terms to explain and account for these different aspects/modes of the same thing. Can we legitimately move from the axioms of a conscious experience to the essential properties (named “postulates”) of the physical modes/aspects of that conscious experience?

Here we're meant to be dealing with the "intrinsic" properties of experience which are then tied to the (intrinsic?) properties of the physical aspects/modes of that experience. Moreover, every single experience is meant to have its own axiom/s.

Nonetheless, if an axiomatic premise alone doesn't deductively entail (or even imply) its postulate, then why call it an “axiom” at all?

Tononi (2015) explains this in terms of "inference to the best explanation" (otherwise called abduction). Here, instead of a strict logical deduction from a phenomenological axiom to a physical postulate, the postulates have (merely) statistical inductive support. Tononi believes that such an abduction shows us that conscious systems have "cause-effect power over themselves". Clearly, behavioural and neuroscientific evidence may/will show this to be the case.

Conclusion

Sceptically it may be said that the "ontological gap" (or the "hard problem") appears to have been bridged (or even solved) by mere phraseology. What I mean by this is that IIT identifies a conscious state with physical things in the brain. (Namely, the physical elements and dynamics of the brain.) These things are measurable. Thus, if that's the case, then a conscious state is measurable in that the dynamical and physical reality of the brain (at a given time) is measurable. Indeed in IIT it's even said that something called the “phi metric” can “quantify consciousness”.

Is the hard problem of consciousness solved merely through this process of identification?

The IIT theorist may reply: What more do you want?! However, then we can reply: Correlations between conscious states and brain states (or even the brain's causal necessitation of a conscious state) aren't themselves explanations of consciousness. Indeed isn't the identification of conscious states with the physical and dynamical elements of the brain what philosophers have done for decades? Do IIT's new technical/scientific terms, and references to “information”, give us anything fundamentally new in this long-running debate on the nature of consciousness?

*) Next: 'Integrated Information Theory: Is Consciousness Structured?' (3)

References

Tononi, Giulio (2008) 'Consciousness as Integrated Information: a Provisional Manifesto'.

Friday, 17 March 2017

Integrated Information Theory: the Cartesian Foundation (1)



At a prima facie level, integrated information theory (IIT) is utterly Cartesian. Sure, it's Cartesianism in a contemporary scientific and philosophical guise; though Cartesian nonetheless. This isn't to say that IIT simply reasserts, for example, the Cogito or Descartes' seemingly deductive style of reasoning. Though, despite that, the Cogito etc. can also be said to be resurrected in contemporary terms in IIT. However, Descartes moved from (his) mind and journeyed to his body and then to the external world. IIT moves from consciousness to the brain. At least as viewed from one angle. [1]

On one hand, IIT inverts many 20th century (Anglo-American) ways of dealing with consciousness in that it's said that it moves from consciousness to arrive at the physical (rather than starting with the physical in order to attempt to arrive at consciousness). It can also be said that IIT moves to the physical only after it's got its Cartesian house in order.

The Cogito, of course, was the starting point of Descartes' enterprise.

On the other hand, what is certainly not Cartesian is that it's also said (or implied) that IIT begins with neuroscience/the brain and then journeys to consciousness. This, of course, directly contradicts what was said in the last paragraph.

In which case, if IIT also sees conscious states (or experiences) as “immediate and direct”, then how can neuroscience come first? This may depend on what's meant by the idea that neuroscience must (or does) come first. Even if a given neuroscientific basis (or reality) were necessary and sufficient for consciousness, that still wouldn't mean (philosophically) that such a reality must come first. Thus coming first or second may not matter. As long as consciousness (its immediacy, directness and phenomenology) and the neuroscience are both seen as being part of the same problem or reality, talk of the primary and the secondary can't be that important.

To get back to Descartes.

Let's take the use of the words “axiom” and “postulate” in IIT to begin with.

This implies a kind of Cartesian deductivism; though, in IIT's case, I find the words a little strained in that the moves from the axioms of consciousness to the postulates of its physical substrate (is it a substrate?) are never, strictly speaking, logical.

IIT's first Cartesian axiom is “the axiom of existence”. This is seen as being “self-evident”. Giulio Tononi describes the first axiom:

“Consciousness is real and undeniable; moreover, a subject’s consciousness has this reality intrinsically; it exists from its own perspective.” [2015]

The only non-Cartesian aspect of the above (as it seems to me) is the claim that consciousness “exists from its own perspective”. Indeed it's hard to work out exactly what that means; at least as it's expressed in this bare form.

In any case, it's clear that the nature of IIT is, again, explicitly Cartesian. Tononi, for example, also says that

“Consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely.” [2015]

Isn't this the Cogito written in a more contemporary manner? In other words, that which many 20th century scientists and philosophers have outright denied (or seen as "unscientific") is here at the very beginning of the philosophical enterprise.

Tononi then takes the Cogito in directions not explicitly taken (or written about) by Descartes himself. That is, Tononi says that his

“experience exists from its own intrinsic perspective, independent of external observers (it is intrinsically real or actual)”. [2015]

In this internalist/individualist manner, Tononi speaks of consciousness as being “independent of external observers”.

Functionalism

If one takes this Cartesian approach to consciousness, then one automatically rules out certain alternative theories of mind which have been alive and well in the history of the philosophy of mind (at least in the late 20th century).

For example, IIT rules out functionalism.

Functionalism (or at least functionalism at its most pure) has a notion of mental functions and behaviour which effectively rules out experience; or, at the least, it rules out (or ignores) the phenomenological reality of consciousness.

The major and perhaps most obvious problem with functionalism (at least vis-a-vis consciousness and phenomenology) was best expressed by Christof Koch in 2012. He claimed that much work in the philosophy of mind utilised "models that describe the mind as a number of functional boxes". That's fair enough as it stands; except for the fact that these boxes are "magically endowed with phenomenal awareness". Sure, the functional boxes may exist, and they may have much explanatory power (in the philosophy of mind); yet what about such things as Koch's "phenomenal awareness" and the controversial qualia?

One functionalist problem with an entirely Cartesian position on mind (or consciousness) is that it's indeed the case that consciousness seems to be direct and immediate. However, to some functionalists, this is only a "seeming". (Daniel Dennett would probably call it an illusion.) In other words, simply because a conscious experience (or state) appears to us to be direct and immediate, that doesn't automatically mean that it is direct and immediate.

According to some functionalisms, even this immediate and direct phenomenology doesn't need to go beyond functionality (or mental functions). In other words, it's still all (or primarily) about functions. More clearly, this sense of immediacy and directness is itself a question of mental functions. In this case, the mental function which is our belief - and our disposition to believe - that consciousness is immediate and direct!

That belief also needs to be accounted for in functionalist terms. (In the way that the immediacy and directness of an experience itself may require a functionalist explanation.) That is, why is it the case that an experience seems direct and immediate to us? What function does that belief (or experience of an experience) serve?

A conscious state (or experience) may seem to be direct and immediate simply because we believe that it's direct and immediate. Moreover, we also have a long-running disposition to believe that it's direct and immediate. Or, again, the sense that an experience (or a mental state) is direct and immediate (or the experience that an experience is direct and immediate) doesn't automatically mean that it is. 

Doesn't this position leave out the phenomenological factors (or reality) of an experience (or conscious state) which are above and beyond their being direct and immediate? That is, on the one hand, there's the phenomenological reality of an experience. And, on the other hand, there's its apparent (or real) directness and immediacy. The two aren't the same even if they always occur together.

*****************************************

Note

[1] It can be said, in retrospect, that it would have been more accurate for Descartes to have said that he started with consciousness rather than with the "existence of the self" (or the "I think"). After all, the self/I is much more of a postulation than brute consciousness.

References

Koch, Christof (2012) Consciousness: Confessions of a Romantic Reductionist, MIT Press.
Tononi, Giulio (2015) 'Integrated Information Theory' (Scholarpedia).