senses

Why is the world vivid in mania, but bleak in depression?

No, I am not speaking metaphorically. Quite literally, there has been accumulating evidence that the senses are sharpened and show greater acuity in mania, while they are dulled in depression; and the effects can be seen within the same individual over time as he/she passes through manic and depressive episodes.

The latest study to add to this literature is by Bubl et al., who found that the brains of depressed people registered less contrast than those of normal controls when presented with the same black-and-white images. They used the pattern electroretinogram (PERG) to test whether the contrast gain registered by the retinas of depressed patients (those suffering from MDD) differed from that of controls, and they found a strong and significant association between contrast gain and the severity of the depression.
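
To make the kind of association they report concrete, here is a minimal sketch (in Python) of correlating contrast gain with a severity score; the numbers and variable names are purely illustrative assumptions of mine, not data from Bubl et al.:

```python
# Minimal sketch: correlating retinal contrast gain with depression severity.
# All numbers below are made-up illustrative values, NOT data from Bubl et al. (2010).
from statistics import correlation  # Pearson's r; available in Python 3.10+

contrast_gain = [0.85, 0.72, 0.65, 0.55, 0.48, 0.40, 0.33]  # hypothetical PERG contrast gain per participant
severity = [2, 8, 14, 19, 25, 31, 38]                        # hypothetical depression-severity scores

r = correlation(contrast_gain, severity)
print(f"Pearson r = {r:.2f}")  # strongly negative: lower contrast gain goes with higher severity
```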

I have covered earlier studies which found that the sense of taste is compromised in depression (and enhanced in mania), and that the sense of smell shows the same pattern. Some snippets from the earlier posts:

What this means is that if you increase the amount of serotonin in the brain, then the capacity to detect sweet and bitter tastes is increased; if you increase noradrenaline levels, that of detecting salty and bitter tastes is augmented; while a general increase in anxiety leads to better bitter taste detection. This also means that an anxiety state produces more bitter taste perception, whereas a depressive state (characterized by low serotonin) is marked by a bland sense of taste, with a marked inability to detect sweet and bitter tastes. A stressed state, marked by an abundance of noradrenaline, would however lead to more salty and bitter taste perception.

and…

In one of my earlier posts on depression, I had commented on the fact that those suffering from depression have less sensitivity to sweet and bitter tastes, and as such may compensate by eating more sugar, thus leading to the well-documented diabetes–depression linkage.

In a new study it has just been discovered that not only do depressives have a bland sense of taste, their sense of smell is also diminished, and they may compensate by using greater amounts of perfume. Overall it seems that those suffering from depression have a bland subjective experience of flavor (which is a combination of both smell and taste) and thus may not really find what they eat to be tasty.

Further on, I speculated, prophetically as it turns out, that a blander sense of vision would also be found:

To me, this is an important finding. To my knowledge no research has been done in other sense modalities (like vision), but there is every reason to think that we may discover a bland sense of vision in depression. Why do I surmise so? Because there is extensive literature available regarding the manic state and how things seem 'vivid' during that state, including visual vividness. If depression is the converse of mania, it follows that a corresponding blandness of vision should also be observed in those who are clinically depressed.

We also know that in extreme or psychotic forms of mania, auditory hallucinations may arise. I am not suggesting that hallucinations are equal to vividness, but I would definitely love to see studies determining whether the auditory sense is heightened in mania (maybe more absolute pitch perception in mania) and whether there is a corresponding loss of absolute pitch perception in depression. If so, it may be that music literally becomes subdued for people with depression and they sort of do not hear the music present in everyday life!

Whether other senses like touch, the vestibular/kinesthetic sense and proprioception (a heightened sense of which may give rise to eerie out-of-body experiences in mania) are also diminished in depression is another area where research may be fruitful.

Of course I have also speculated about the other senses and would love to hear of studies supporting or contradicting this thesis. But given that the senses are attenuated in depression and exaggerated in mania, the question remains: why? Which brings me to the topic of this post: why is the world bleak/bland to a depressive and vivid to a manic?

This was also the question asked by Mark Changizi (@Mark_Changizi) on Twitter with respect to the new study covered today. I replied that this may be due to the broaden-and-build theory applying to the sensory domain, or to sensory gating acting differentially in manic and depressive states, while Mark was of the opinion that it might be the result of physiological arousal, with arousal being the variable of interest controlling whether the senses remain acute or dull.

I do not see the two views as necessarily contradictory: it may be that chronic affect per se activates arousal, and that arousal is the mediating variable in its effect on the senses. We can design experiments to resolve this by measuring the effect of state sadness/happiness/arousal on visual acuity (if the effects of state manipulations are big enough). However, I would like to elaborate on my broaden-and-build view.

In the cognitive, psychological and psychosocial domains the broaden-and-build theory of positive affect is more or less clearly elaborated and delineated. I wish to extend this to the sensory domain. I propose that chronic positive affect signals to our bodies/brains that we can afford to make our attention more diffuse and let sensations be perceived more vividly, as we have more resources available to process incoming data; conversely, in a chronic low-affect state we might conserve resources by narrowing focus, literally narrowing the range of sensory inputs and reducing the sensitivity of the sense organs, and pool those resources elsewhere.

I know this is just a hypothesis, but I am pretty convinced, and would love to hear the results of any experiments conducted around this theory.
Bubl, E., Kern, E., Ebert, D., Bach, M., & Tebartz van Elst, L. (2010). Seeing gray when feeling blue? Depression can be measured in the eye of the diseased. Biological Psychiatry, 68(2), 205-208. DOI: 10.1016/j.biopsych.2010.02.009


Major conscious and unconscious processes in the brain: part 3: Robot minds

This article continues my series on major conscious and unconscious processes in the brain. In my last two posts I have talked about eight major unconscious processes in the brain, viz. the sensory, motor, learning, affective, cognitive (deliberative), modelling, communication and attentive systems. Today, I will not talk about the brain in particular, but will approach the problem from a slightly different problem domain: that of modelling/implementing an artificial brain/mind.

I am a computer scientist, so I am at least vaguely aware of the varied approaches used to model/implement the brain. Many of these use computers, though not every approach assumes that the brain is a computer.

Before continuing I would briefly like to digress and link to one of my earlier posts regarding the different traditions of psychological research in personality and how I think they fit an evolutionary stage model. That may serve as background to the type of sweeping analysis and generalisation that I am going to do. To be fair, it is also important to recall the Indian parable of the blind men asked to describe an elephant: each described only the part he could lay his hands on and thus provided a partial and incorrect picture of the elephant; the one who grabbed the tail described it as snake-like, and so forth.

With that in mind let us look at the major approaches to modelling/implementing the brain/intelligence/mind. Also remember that my focus so far has been on unconscious brain processes, and I sincerely believe that all the unconscious processes can, and will, be successfully implemented in machines. I do not believe machines will become sentient (at least not any time soon), but that question is for another day.

So, with due thanks to @wildcat2030, I came across this book today and could immediately see how the different major approaches to artificial robot brains are heavily influenced by (and follow) the first five evolutionary stages and the first five unconscious processes in the brain.
The book in question is 'Robot Brains: Circuits and Systems for Conscious Machines' by Pentti O. Haikonen, and although he is most interested in conscious machines, I will restrict myself to intelligent but unconscious machines/robots.

The first chapter of the book (which has made it to my reading list) is available at the Wiley site in its entirety, and I quote extensively from it:

Presently there are five main approaches to the modelling of cognition that could be used for the development of cognitive machines: the computational approach (artificial intelligence, AI), the artificial neural networks approach, the dynamical systems approach, the quantum approach and the cognitive approach. Neurobiological approaches exist, but these may be better suited for the eventual explanation of the workings of the biological brain.

The computational approach (also known as artificial intelligence, AI) towards thinking machines was initially worded by Turing (1950). A machine would be thinking if the results of the computation were indistinguishable from the results of human thinking. Later on Newell and Simon (1976) presented their Physical Symbol System Hypothesis, which maintained that general intelligent action can be achieved by a physical symbol system and that this system has all the necessary and sufficient means for this purpose. A physical symbol system was here the computer that operates with symbols (binary words) and attached rules that stipulate which symbols are to follow others. Newell and Simon believed that the computer would be able to reproduce human-like general intelligence, a feat that still remains to be seen. However, they realized that this hypothesis was only an empirical generalization and not a theorem that could be formally proven. Very little in the way of empirical proof for this hypothesis exists even today and in the 1970s the situation was not better. Therefore Newell and Simon pretended to see other kinds of proof that were in those days readily available. They proposed that the principal body of evidence for the symbol system hypothesis was negative evidence, namely the absence of specific competing hypotheses; how else could intelligent activity be accomplished by man or machine? However, the absence of evidence is by no means any evidence of absence. This kind of ‘proof by ignorance’ is too often available in large quantities, yet it is not a logically valid argument. Nevertheless, this issue has not yet been formally settled in one way or another. Today’s positive evidence is that it is possible to create world-class chess-playing programs and these can be called ‘artificial intelligence’. The negative evidence is that it appears to be next to impossible to create real general intelligence via preprogrammed commands and computations.

The original computational approach can be criticized for the lack of a cognitive foundation. Some recent approaches have tried to remedy this and consider systems that integrate the processes of perception, reaction, deliberation and reasoning (Franklin, 1995, 2003; Sloman, 2000). There is another argument against the computational view of the brain. It is known that the human brain is slow, yet it is possible to learn to play tennis and other activities that require instant responses. Computations take time. Tennis playing and the like would call for the fastest computers in existence. How could the slow brain manage this if it were to execute computations?

The artificial neural networks approach, also known as connectionism, had its beginnings in the early 1940s when McCulloch and Pitts (1943) proposed that the brain cells, neurons, could be modelled by a simple electronic circuit. This circuit would receive a number of signals, multiply their intensities by the so-called synaptic weight values and sum these modified values together. The circuit would give an output signal if the sum value exceeded a given threshold. It was realized that these artificial neurons could learn and execute basic logic operations if their synaptic weight values were adjusted properly. If these artificial neurons were realized as hardware circuits then no programs would be necessary and biologically plausible artificial replicas of the brain might be possible. Also, neural networks operate in parallel, doing many things simultaneously. Thus the overall operational speed could be fast even if the individual neurons were slow. However, problems with artificial neural learning led to complicated statistical learning algorithms, ones that could best be implemented as computer programs. Many of today’s artificial neural networks are statistical pattern recognition and classification circuits. Therefore they are rather removed from their original biologically inspired idea. Cognition is not mere classification and the human brain is hardly a computer that executes complicated synaptic weight-adjusting algorithms.

The human brain has some 10 to the power of 11 neurons and each neuron may have tens of thousands of synaptic inputs and input weights. Many artificial neural networks learn by tweaking the synaptic weight values against each other when thousands of training examples are presented. Where in the brain would reside the computing process that would execute synaptic weight adjusting algorithms? Where would these algorithms have come from? The evolutionary feasibility of these kinds of algorithms can be seriously doubted. Complicated algorithms do not evolve via trial and error either. Moreover, humans are able to learn with a few examples only, instead of having training sessions with thousands or hundreds of thousands of examples. It is obvious that the mainstream neural networks approach is not a very plausible candidate for machine cognition although the human brain is a neural network.

Dynamical systems were proposed as a model for cognition by Ashby (1952) already in the 1950s and have been developed further by contemporary researchers (for example Thelen and Smith, 1994; Gelder, 1998, 1999; Port, 2000; Wallace, 2005). According to this approach the brain is considered as a complex system with dynamical interactions with its environment. Gelder and Port (1995) define a dynamical system as a set of quantitative variables, which change simultaneously and interdependently over quantitative time in accordance with some set of equations. Obviously the brain is indeed a large system of neuron activity variables that change over time. Accordingly the brain can be modelled as a dynamical system if the neuron activity can be quantified and if a suitable set of, say, differential equations can be formulated. The dynamical hypothesis sees the brain as comparable to analog feedback control systems with continuous parameter values. No inner representations are assumed or even accepted. However, the dynamical systems approach seems to have problems in explaining phenomena like ‘inner speech’. A would-be designer of an artificial brain would find it difficult to see what kind of system dynamics would be necessary for a specific linguistically expressed thought. The dynamical systems approach has been criticized, for instance by Eliasmith (1996, 1997), who argues that the low dimensional systems of differential equations, which must rely on collective parameters, do not model cognition easily and the dynamicists have a difficult time keeping arbitrariness from permeating their models. Eliasmith laments that there seems to be no clear ways of justifying parameter settings, choosing equations, interpreting data or creating system boundaries. Furthermore, the collective parameter models make the interpretation of the dynamic system’s behaviour difficult, as it is not easy to see or determine the meaning of any particular parameter in the model. Obviously these issues would translate into engineering problems for a designer of dynamical systems.

The quantum approach maintains that the brain is ultimately governed by quantum processes, which execute nonalgorithmic computations or act as a mediator between the brain and an assumed more-or-less immaterial ‘self’ or even ‘conscious energy field’ (for example Herbert, 1993; Hameroff, 1994; Penrose, 1989; Eccles, 1994). The quantum approach is supposed to solve problems like the apparently nonalgorithmic nature of thought, free will, the coherence of conscious experience, telepathy, telekinesis, the immortality of the soul and others. From an engineering point of view even the most practical propositions of the quantum approach are presently highly impractical in terms of actual implementation. Then there are some proposals that are hardly distinguishable from wishful fabrications of fairy tales. Here the quantum approach is not pursued.

The cognitive approach maintains that conscious machines can be built because one example already exists, namely the human brain. Therefore a cognitive machine should emulate the cognitive processes of the brain and mind, instead of merely trying to reproduce the results of the thinking processes. Accordingly the results of neurosciences and cognitive psychology should be evaluated and implemented in the design if deemed essential. However, this approach does not necessarily involve the simulation or emulation of the biological neuron as such, instead, what is to be produced is the abstracted information processing function of the neuron.

A cognitive machine would be an embodied physical entity that would interact with the environment. Cognitive robots would be obvious applications of machine cognition and there have been some early attempts towards that direction. Holland seeks to provide robots with some kind of consciousness via internal models (Holland and Goodman, 2003; Holland, 2004). Kawamura has been developing a cognitive robot with a sense of self (Kawamura, 2005; Kawamura et al., 2005). There are also others. Grand presents an experimentalist’s approach towards cognitive robots in his book (Grand, 2003).

A cognitive machine would be a complete system with processes like perception, attention, inner speech, imagination, emotions as well as pain and pleasure. Various technical approaches can be envisioned, namely indirect ones with programs, hybrid systems that combine programs and neural networks, and direct ones that are based on dedicated neural cognitive architectures. The operation of these dedicated neural cognitive architectures would combine neural, symbolic and dynamic elements.

However, the neural elements here would not be those of the traditional neural networks; no statistical learning with thousands of examples would be implied, no backpropagation or other weight-adjusting algorithms are used. Instead the networks would be associative in a way that allows the symbolic use of the neural signal arrays (vectors). The ‘symbolic’ here does not refer to the meaning-free symbol manipulation system of AI; instead it refers to the human way of using symbols with meanings. It is assumed that these cognitive machines would eventually be conscious, or at least they would reproduce most of the folk psychology hallmarks of consciousness (Haikonen, 2003a, 2005a). The engineering aspects of the direct cognitive approach are pursued in this book.

Now, to me these approaches to machine cognition are all unidimensional:

  1. The computational approach is suited for symbol manipulation and information representation, and might give good results when used in systems that have mostly 'sensory' features, like forming a mental representation of the external world, a chess game, etc. Here something (stimuli from the world) is represented as something else (an internal symbolic representation).
  2. The Dynamical Systems approach is guided by interactions with the environment and the principles of feedback control systems, and is also prone to 'arbitrariness' or 'randomness'. It is perfectly suited to implementing the 'motor system' of the brain, as one of their common features is apparent unpredictability (volition) despite being deterministic (chaos theory).
  3. The Neural networks approach, or connectionism, is well suited for implementing the 'learning system' of the brain, and we can very well see that the best neural-network-based systems are those that can categorize and classify things, just like 'the learning system' of the brain does (a toy sketch of the basic threshold neuron follows this list).
  4. The quantum approach to the brain I haven't studied enough to comment on, but the action tendencies of the 'affective system' seem all too similar to the superimposed, simultaneous states that exist in a wave function before it collapses. Being in an affective state just means having a set of many possible related and relevant actions simultaneously activated, one of which is then somehow decided upon and actualized. I'm sure that if we could ever model emotion in machines it would have to use quantum principles of wave functions, entanglement, etc.
  5. The cognitive approach, again, I haven't got the hang of yet, but it seems that the proposal is to build into the machine a design that is based on actual brain and mind implementations. Embodiment seems important, and so does emulating the information-processing functions of neurons. I would stick my neck out and predict that, whatever this cognitive approach is, it should be best able to model the reasoning, evaluative and decision-making functions of the brain. I am reminded of the computational modelling methods used in cognitive science (whether symbolic or subsymbolic) to functionally decompose a cognitive process, which again aid in decision making/reasoning (see the Wikipedia entry).
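
To make the McCulloch-Pitts idea quoted above a little more concrete (and to deliver the threshold-neuron sketch promised in point 3), here is a minimal Python illustration; the weights and threshold are arbitrary choices of mine, not taken from the book:

```python
# Minimal sketch of a McCulloch-Pitts style neuron: a weighted sum of inputs
# compared against a threshold. Weights and threshold here are illustrative only.
def threshold_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the inputs reaches the threshold, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With suitably chosen weights/threshold the unit realizes basic logic operations,
# e.g. AND over two binary inputs:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_neuron([a, b], weights=[1, 1], threshold=2))
```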

Overall, I would say there is room for further improvement in the way we build more intelligent machines. They could be made such that they have two models of the world, one deterministic and another chaotic, and use the two models simultaneously (a sixth stage of modelling); then they could communicate with other machines and thus learn language (some simulation methods for language abilities do involve agents communicating with each other using arbitrary tokens, with a language developing later) (a seventh stage); and then they could be implemented such that they have a spotlight of attention (an eighth stage) whereby some coherent systems are amplified and others suppressed. Of course all this is easier said than done; we will need at least three more major approaches to modelling and implementing the brain/intelligence before we can model every major unconscious process in the brain. To model consciousness and program sentience is an uphill task from there and would definitely require a leap in our understanding/capabilities.

Do tell me if you find the above reasonable, and whether you too believe that these major approaches to artificial brain implementation are guided and constrained by the major unconscious processes in the brain, and that we can learn much about the brain from the study of these artificial approaches and vice versa.

Major conscious and unconscious processes in the brain

Today I plan to touch upon the topic of consciousness (from which many bloggers shy away) and, more broadly, try to delineate what I believe are the important different conscious and unconscious processes in the brain. I will be heavily using my evolutionary stages model for this.

To clarify at the very start, I do not believe in a purely reactive nature of organisms; I believe that apart from reacting to stimuli/the world, they also act on their own and are thus agents. To elaborate, I believe that neuronal groups and circuits may fire on their own and thus lead to behavior/action. I do not claim that this firing is under voluntary/volitional control; it may be random. The important point to note is that there is spontaneous motion.

  1. Sensory system: To start with, I propose that the first function/process the brain needs to develop is to sense its surroundings. This is to avoid predators/harm in general. This sensory function of the brain/sense organs may be unconscious and need not become conscious: as long as an animal can sense danger, even though it may not be aware of the danger, it can take appropriate action, a simple 'action' being changing its color to merge with the background.
  2. Motor system: The second function/process that the brain needs to develop is a system that enables motion/movement. This is primarily to explore the environment for food/nutrients. Prey are not going to walk into your mouth; you have to move around and locate them. Again, this movement need not be volitional/conscious: as long as the animal moves randomly and sporadically to explore new environments, it can 'see' new things and eat a few. Again, this 'seeing' may be as simple as sensing the chemical gradient in a new environment.
  3. Learning system: The third function/process that the brain needs to develop is a system that enables learning. It is not enough to sense the environmental here-and-now. One needs to learn the contingencies in the world and remember them, in both space and time. I am inclined to believe that this is primarily Pavlovian conditioning and associative learning, though I don't rule out operant learning. Again, this learning need not be conscious: one need not explicitly refer to a memory to utilize it; unconscious learning and memory of events can suffice and can drive interactions. I also believe that the need for this function is primarily driven by the fact that one interacts with similar environments/conspecifics/predators/prey, and it helps to remember which environmental conditions/operant actions lead to which outcomes. This learning could be as simple as stimulus A predicts stimulus B and/or action C predicts reward D.
  4. Affective/action-tendencies system: The fourth function I propose the brain needs to develop is a system to control its motor system/behavior by making it more in sync with its internal state. This, I propose, is done by a group of neurons monitoring the activity of other neurons/visceral organs and thus becoming aware (in a non-conscious sense) of the global state of the organism and of the probability that a particular neuronal group will fire in future; by their outputs, they may then be able to enable one group to fire while inhibiting other groups from firing. To clarify by way of example, some neuronal groups may be responsible for movement. Another neuronal group may be receiving inputs from these as well as, say, input from the gut that says that no movement has happened for a while and that the organism has also not eaten for a while and thus is in a 'hungry' state. This may prompt these neurons to fire in such a way that they send excitatory outputs to the movement-related neurons, thus biasing them towards firing and increasing the probability that a motion will take place; perhaps the organism, by indulging in exploratory behavior, may be able to satisfy its hunger. Of course they will inhibit other neuronal groups from firing, and will themselves stop firing when appropriate motion takes place or a prey is eaten. Again, none of this has to be conscious: the state of the organism (like hunger) can be discerned unconsciously, and the action tendencies biasing foraging behavior can also be activated unconsciously; as long as the organism prefers certain behaviors over others depending on its internal state, everything works perfectly. I propose that (unconscious) affective (emotional) states and systems have emerged to fulfill exactly this need of being able to differentially activate different action tendencies suited to the needs of the organism. I also stick my neck out and claim that the activation of a particular emotion/affective system biases our sensing too. If the organism is hungry, the food tastes better (is unconsciously more vivid) and vice versa. Thus affects are not only action tendencies but also, to an extent, sensing tendencies.
  5. Decisional/evaluative system: The last function (for now; remember I adhere to eight-stage theories, and we have just seen five brain processes in increasing hierarchy) that the brain needs to have is a system to decide/evaluate. Learning lets us predict our world as well as the consequences of our actions. Affective systems provide us some control over our behavior and over our environment, but are automatically activated by the state we are in. Something needs to make these come together such that the competition between actions triggered by the state we are in (affective action tendencies) and actions that may be beneficial given the learning associated with the current stimuli/state of the world is resolved satisfactorily. One has to balance the action and reaction ratio, and the subjective versus objective interpretation/sensation of the environment. The decisional/evaluative system, I propose, does this by associating values with different external event outcomes and different internal state outcomes, and by resolving the trade-off between the two. This again need not be conscious: given a stimulus predicting a predator in the vicinity, and the internal state of the organism as hungry, the organism may have attached more value to 'avoid being eaten' than to 'finding prey' and thus may not move, but camouflage itself. On the other hand, if the organism's value system is such that it prefers a hero's death on the battlefield to starvation, it may move (in search of food); again, this could exist in the simplest of unicellular organisms.

Of course all of these brain processes could (and in humans indeed do) have their conscious counterparts, like Perception, Volition, episodic Memory, Feelings and Deliberation/thought. That is a different story for a new blog post!

And of course one can also conceive the above in pure reductionist form as a chain below:

sense -> recognize & learn -> evaluate options and decide -> emote and activate action tendencies -> execute and move.

and then one can also say that movement leads to new sensation and the above is not a chain but part of a cycle; all that is valid, but I would sincerely request my readers to consider the possibility of spontaneous and self-driven behavior as separate from reactive motor behavior. A toy sketch of such a cycle follows.
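
As a purely illustrative toy (my own names and numbers, not a model of any real organism), the cycle, including a small allowance for spontaneous action, might be sketched like this:

```python
import random

# Toy sketch of the sense -> learn -> evaluate/decide -> emote/bias -> move cycle.
# Everything here (names, numbers, rules) is an illustrative assumption, not a claim
# about how any real nervous system implements these processes.
memory = {}      # crude associative memory: stimulus -> last outcome seen with it
hunger = 0.5     # internal state in [0, 1]

def cycle(stimulus):
    global hunger
    # 1. Sense (no awareness implied)
    # 2. Learn: associate the stimulus with the outcome it predicted
    outcome = "food" if stimulus == "chemical_gradient" else "nothing"
    memory[stimulus] = outcome
    # 3. Evaluate/decide: value of acting on what this stimulus predicts
    value = 1.0 if memory[stimulus] == "food" else 0.1
    # 4. Affect/action tendency: hunger biases the tendency to move
    p_move = min(1.0, value * (0.3 + hunger))
    # 5. Execute: move, with a small chance of spontaneous movement regardless of stimulus
    moved = random.random() < max(p_move, 0.05)
    if moved and memory[stimulus] == "food":
        hunger = max(0.0, hunger - 0.2)   # eating reduces hunger
    else:
        hunger = min(1.0, hunger + 0.05)  # otherwise hunger slowly grows
    return moved                          # movement changes what gets sensed next (the cycle)

for t in range(5):
    print(t, cycle(random.choice(["chemical_gradient", "blank"])), round(hunger, 2))
```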

Movement and perception disorders: a case for dissolution?

I have touched upon the work of Hughlings Jackson earlier, albeit very obliquely, and readers familiar with his work will know the immense contributions he has made to the understanding of epilepsy and other neurological disorders. I was recently reading his Croonian Lectures on the evolution and dissolution of the human nervous system, and I encourage my readers to read the three lectures in their entirety. Let me briefly try to summarize his approach to the brain first:

Hughlings Jackson believed that the brain had evolved. Also, that the human brain is heterogeneous, with three evolutionarily distinct components that were perfected at dissimilar evolutionary times; in this sense he sort of laid the groundwork for Paul MacLean's Triune Brain theory.

He also believed that these three evolutionarily distinct (logical) components of the human brain were hierarchical in nature, and that all that these centers really did was representation of impressions and movements, or re-representation of that initial representation (in successively higher centers). He also proposed that the lower centers were simpler, more organized, more automatic and more reflexive in nature, while the highest centers were the least automatic and least organized, but the most complex and the least reflex-like in nature.

As these centers evolved one after the other, each such center has a positive function that only it can provide, and it also inhibits some of the functions that were earlier provided by the lower layers; in other words, it keeps the lower layers in check.

He also believed in the concept of dissolution, whereby, when a higher center is not working properly, the lower centers assert their autonomy. The loss of the higher layer/center would not only result in the loss of the function associated with that center (negative symptoms); by freeing the activity of the lower center from higher inhibition, it would also lead to some new functions being experienced (positive symptoms). Thus a dissolution that affects the third or highest layer would free the intermediate layer to produce some positive effects, and, because the higher layer's function is unavailable, would also lead to loss of some functionality.

He most fully developed these ideas in connection with epilepsy patients, in whom he believed that the epileptiform seizure or discharge leads to inactivity of the higher layers (1, 2 or all 3) and inappropriate activity in the lower layers, thereby producing different degrees of negative and positive symptoms/behaviors. My earlier post contained references to this.

We find evidence for the truth of most of his ideas in today’s neuroscience developments.

This time I would like to touch upon how he himself had, in a sense, extended the triune brain to an eight-stage brain, and how, in relating the concept of dissolution in the Croonian Lectures (lecture 1) with the help of movement disorders, he delineated eight different levels of dissolution, each progressively more severe than the previous one.

Before we proceed, it is instructive to note that Jackson believed in two levels of consciousness, subject and object: the former related to awareness of impressions, the latter to movements. In simpler terms, he believed that we could discuss movement-related (and volitional) matters separately from perceptual ones, and I'll stick to that distinction in this post.

I'll first quote from him at length (I have reformatted the material, so please read the original lecture for a balanced view):

I now come to give examples of dissolution. I confess that I have selected cases which illustrate most definitely, not pretending to be able to show that all the diseases of which we have a large clinical knowledge exemplify the law of dissolution. However, I instance very common cases, or cases in which the pathology has been well worked out; they are cases dependent on disease at various levels from the bottom to the top of the central nervous system. Most of them are examples of local dissolution.

  1. Starting at the bottom of the central nervous system, the first example is the commonest variety of progressive muscular atrophy. We see here that atrophy begins in the most voluntary limb, the arm; it affects first the most voluntary part of that limb, the hand, and first of all the most voluntary part of the hand; it then spreads to the trunk, in general to the more automatic parts. To speak of a lower level of evolution in this case is almost to state a barren truism. At a stage when the muscles of the hand only are wasted, there is atrophy of the first or second dorsal anterior horn; the lower level of evolution is made up of the higher anterior horns for muscles of the arm. This statement, however, is worth making, for it shows clearly that by higher and lower is meant anatomico-physiologically higher or lower.
  2. Going a stage higher we come to hemiplegia, owing to destruction of part of a plexus in the mid-region of the brain. Choosing the commonest variety of hemiplegia, we say that there is loss of more or fewer of the most voluntary movements of one side of the body; we find that the arm, the more voluntary limb, suffers the more and longer; we find, too, that the most voluntary part of the face suffers more than the rest of the face. Here we must speak particularly of the lower level of evolution remaining; strictly we should say collateral and lower. We note that although unilateral movements (the more voluntary) are lost, the more automatic (the bilateral) are retained. Long ago this was explained by Broadbent. Subsequent clinical researches are in accord with his hypothesis. The point of it is that the bilateral movements escape in cases of hemiplegia in spite of destruction of some of the nervous arrangements representing them; the movements are doubly represented—that is, in each half of the brain. Hemiplegia is a clear case of dissolution, loss of the most voluntary movements of one side of the body with persistence of the more automatic movements.
  3. The next illustration is paralysis agitans. Apart from all speculation as to the seat of this disease, the motorial disorder illustrates dissolution well. In most cases the tremor affects the arm first, begins in the hand, and in the thumb and index-finger. The motorial disorder in this disease becomes bilateral; in an advanced stage paralysis agitans is double hemiplegia with rigidity—is a two-sided dissolution.
  4. Next we speak of epileptiform seizures which are unquestionably owing to disease in the midregion of the brain (middle motor centers). Taking the commonest variety, we see that the spasm mostly begins in the arm, nearly always in the hand, and most frequently in the thumb or index-finger, or both; these two digits are the most voluntary parts of the whole body.
  5. [The next illustration was by cases of temporary paralysis after epileptiform seizures.]
  6. Chorea is a disease in which the limbs (the most voluntary parts) are affected more than the trunk (the more automatic parts), and the arms (the more voluntary limbs) suffer more than the legs. The localization of this disease has not been made out; symptomatically, however, it illustrates dissolution. Chorea has a special interest for me. The great elaborateness of the movements points to disease “high up”—to disease on a high level of evolution. Twenty years ago, from thinking on its peculiarities, it occurred to me that some convolutions represent movements. A view I have taken ever since.
  7. Aphasia. This well illustrates the doctrine of dissolution, and in several ways. We will consider a case of complete speechlessness. (a) There is loss of intellectual (the more voluntary) language, with persistence of emotional (the more automatic) language. In detail the patient cannot speak, and his pantomime is of a very simple kind; yet, on the other hand, he smiles, frowns, varies the tones of his voice (he may be able to sing), and gesticulates as well as ever. Gesticulation, which is an emotional manifestation, must be distinguished from pantomime, which is part of intellectual language. (b) The frequent persistence of “Yes” and “No” in the case of patients who are otherwise entirely speechless is a fact of extreme significance. We see that the patient has lost all speech, with the exception of the two most automatic of all verbal utterances. “Yes” and “No” are evidently most general, for they assent to or dissent from any statement. In consequence of being frequently used, the correlative nervous arrangements are of necessity highly organized, and, as a further consequence, they are deeply automatic. (c) A more important, though not more significant, illustration is that the patient who cannot get out a word in speech nevertheless understands all that we say to him. Plainly this shows loss of a most voluntary service of words, with persistence of a more automatic service of words. We find illustrations in small corners. (d) There are three degrees of the utterance “No” by aphasics. A patient may use it emotionally only—a most automatic service; another patient may also be able to reply correctly with it—a less automatic, but still very automatic service. (Here there is some real speech.) There is a still higher use of it, which some aphasics have not got. A patient who can reply “No” to a question may be unable to say “No” when told to do so. You ask the aphasic, “Is your name Jones?” He replies “No.” You tell him to say “No,” he tries and fails. You ask, “Are you a hundred years old?” He replies “No.” You tell him to say “No.” He cannot. Whilst not asserting that the inability to say “No” when told is a failure in language, it is asserted that such inability with retention of power to use the word in reply illustrates dissolution. (e) A patient who is speechless may be unable to put out his tongue when told to do so; that he knows what is wanted is sometimes shown by his putting his finger in his mouth to help out the organ. That the tongue is not paralyzed in the ordinary sense is easily proved. The patient swallows well, which he could not do if his tongue were as much paralysed as “it pretends to be.” Besides, on other occasions he puts out his tongue, for example, to catch a stray crumb. Here is a reduction to a more automatic condition; there is no movement of the tongue more voluntary than that of putting it out when told. [The lecturer then remarked on swearing and on the utterance of other and innocent ejaculations by aphasics, remarking that some of these utterances had elaborate propositional structure but no propositional value. The patients could not repeat, say, what under excitement they uttered glibly and well. He spoke next of the frequent retention of some recurring utterance by aphasics, such as “Come on to me.” These were not, from the mouth of the aphasic, of any propositional value, were not speech. He had no explanation to offer of these, but stated the hypothesis that they were the words the patient was uttering, or was about to utter, at the time he was taken ill.]
  8. So far I have spoken of local dissolution occurring on but one half of the nervous system on different levels. Coming to the highest centers I speak of uniform dissolution—of cases in which all divisions of these centers are subjected to the same evil influence. I choose some cases of insanity. In doing this I am taking up the most difficult of all nervous diseases. I grant that it is not possible to show in detail that they exemplify the principle of dissolution, but choosing the simplest of these most complex cases we may show clearly that they illustrate it in general. I take a very commonplace example—delirium in acute non-cerebral disease. This, scientifically regarded, is a case of insanity. In this, as in all other cases of insanity, it is imperative to take equally into account not only the dissolution but the lower level of evolution that remains. The patient's condition is partly negative and partly positive. Negatively, he ceases to know that he is in hospital, and ceases to recognise persons about him. In other words, he is lost to his surroundings, or, in equivalent terms, he is defectively conscious. We must not say that he does not know where he is because he is defectively conscious; his not knowing where he is is itself defect of consciousness. The negative mental state signifies, on the physical side, exhaustion, or loss of function, somehow caused, of some highest nervous arrangements of his highest centers. We may conveniently say that it shows loss of function of the topmost layer of his highest centers. No one, of course, believes that the highest centers, or any other centers, are in layers; but the supposition will simplify exposition. The other half of his condition is positive. Besides his not knowings, there are his wrong knowings. He imagines himself to be at home or at work, and acts as far as practicable as if he were; ceasing to recognize the nurse as a nurse, he takes her to be his wife. This, the positive part of his condition, shows activity of the second layer of his highest centers; but which, now that the normal topmost layer is out of function, is the then highest layer; his delirium is the “survival of the fittest states,” on his then highest evolutionary level. Plainly, he is reduced to a more automatic condition. Being (negatively) lost, from loss of function of the highest, latest developed, and least organized, to his present “real” surroundings, he (positively) talks and acts as if adjusted to some former “ideal” surroundings, necessarily the more organized.

This to me seems very promising: I am a die-hard fan of the eight-stage evolutionary/developmental model, whereby the first five stages are more similar to one another, the next two are on a qualitatively different level, while the last or eighth one takes one a notch higher up the octave to a different qualitative level altogether, although resembling or analogous to the first stage to an extent.

I keep mapping analogies between the stages evident in different developmental/evolutionary processes, and this piece of the puzzle fits in nicely.

I'll now speculate a bit. I'll first restrict myself to movement/action planning, execution and control. I believe that the regions of the brain involved in this activity are (in hierarchical order):

  1. Frontal cortex (supplementary motor area): decides which action to initiate; plans and coordinates complex actions involving, say, both hands. More involvement in 'voluntary' actions.
  2. Primary motor cortex: actual execution of the intended/chosen action.
  3. Pre-motor cortex: responsible for motor guidance of movements, especially with respect to external cues.
  4. Parietal cortex: responsible for transforming visual information into motor commands.
  5. Somatosensory cortex: this too is involved in motor circuits (synapses to and from it go to the cerebellum/basal ganglia); probably involved in triggering visual information related to the action. I am tempted to replace this with the thalamus, and I just might do that after some more research!
  6. Basal ganglia: a set of structures involved in gross motor control.
  7. Cerebellum: a structure involved in fine motor control.
  8. Brain stem: a structure involved in controlling vital movements like breathing, heartbeat, etc.; these movements are neither voluntary nor automatic; they are involuntary and thus a notch different.

Now coming back to the disorders of movement delineated by Hughlings Jackson, we can readily see some correspondences. The basal ganglia abnormality leading to Huntington's chorea is clearly at level 6. The primary motor cortex lesion leading to stage 2 hemiplegia is also well established. The epileptiform-seizure-related spasms and the temporary paralysis just after them may plausibly be related to lesions of the parietal and somatosensory cortices. A lesion of the pre-motor area may give rise to alien hand syndrome (to be distinguished from anarchic hand syndrome), whereby one grabs any object in sight compulsively. Hughlings Jackson's placement of paralysis agitans at level 3 does not really gel here, as parkinsonism is more of a basal ganglia problem. Similarly, PMA (progressive muscular atrophy) is no longer a valid diagnosis, so it may not map to an SMA lesion or dysfunction. SMA dysfunction or lesion may instead produce syndromes like the mirror hand syndrome, in which both hands are used for the same action though only one hand would have sufficed. It is interesting to note that this mirror hand syndrome is conceptualized today as a freeing of SMA inhibition of the pre-motor area, thus allowing parallel planning of the same action. Similarly, level 7 lesions of the cerebellum may be more related to ataxia than to aphasias.

Despite the problems with the above conceptualization, I find Jackson's efforts to be in the right direction and ahead of his time.

I'll now end with a teaser of things to come. It is related to the disorders of phenomenal consciousness classified by Thomas Metzinger in Being No One, which are as follows:

Deviant phenomenal models of reality

  1. Agnosia
  2. Neglect
  3. Blindsight
  4. Hallucinations
  5. Dreams

To me they follow the same five-stage progression, with each stage analogous to the corresponding movement-related disorder. More about that later.

‘A’ is for a RED apple and ‘V’ is for a PURPLE van!

New research has found that grapheme-color synesthesia is not idiosyncratic but follows some typical patterns. Grapheme-color synesthesia is one of the common types of synesthesia, wherein one sees a color associated with a visualized letter. Thus, whenever one sees the letter 'A' one may also have a perception of the color red. Till now, it was believed that this association of colors with letters was random and idiosyncratic; but the new research reveals that it follows a pattern, with most synesthetes likely to associate typical colors with particular letters, for example reporting 'A' as red and 'V' as purple.

Jamie Ward's team, which found this phenomenon, speculates that the hue could be associated with the frequency of the letter. Thus, as 'A' is a frequently used letter, it is associated with a common color, red; 'V', which is infrequently used in the lexicon, is associated with a similarly infrequently encountered color, purple. I am not sure how their new study differs from their earlier study that also found this association, and I believe there is some truth to their theory. However, the Science Daily article also talks about saturation, so I thought I would jump in.

Colors can be conceptualized in the HSV/HSL or HSB system and understood in terms of hue, saturation and value/brightness. I would personally be inclined to interpret the 'A' is red and 'V' is purple mapping as the outcome of a mapping of alphabetical order (a, b, c, … x, y, z) onto the order of colors in the rainbow along the hue dimension (VIBGYOR). 'A' is at one end of the spectrum and thus red in color, while 'V' is towards the other end of the spectrum and thus more likely to be violet. The frequency of usage of a letter should ideally map to the brightness/value of the synesthetic color, as in color space value corresponds to the amount of light reflected. Saturation, or the 'purity' of a color, is a bit more difficult to map onto letters; but one could venture forth and suggest it has to do with how 'pure' the letter is: is it always pronounced in one way, or are there multiple pronunciations associated with the same letter?
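
As a rough sketch of the mapping I have in mind, here is a toy implementation in Python; the frequency figures and the exact scaling are my own illustrative assumptions, not anything from Ward's study:

```python
import colorsys
import string

# Toy mapping: alphabetical position -> hue along a red-to-violet axis,
# letter frequency -> value/brightness. Frequencies and scaling are illustrative only.
letter_freq = {"a": 8.2, "e": 12.7, "v": 1.0, "z": 0.07}  # rough % frequencies in English text

def letter_to_rgb(letter):
    idx = string.ascii_lowercase.index(letter.lower())
    hue = (idx / 25) * 0.8                                              # ~0.0 = red for 'a', ~0.8 = violet for 'z'
    value = min(1.0, 0.4 + letter_freq.get(letter.lower(), 0.5) / 13)   # commoner letters look brighter
    return colorsys.hsv_to_rgb(hue, 1.0, value)                         # full saturation for simplicity

for ch in "av":
    print(ch, tuple(round(c, 2) for c in letter_to_rgb(ch)))
```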

Mapping a linear progression of hues along the VIBGYOR axis to alphabetical or numerical order is not that hard to envisage or visualize. If neurons of adjacent colorotopic and lexicotopic maps (assuming there are such maps for color and lexicon in the brain) overlap or cross over, we would have a phenomenon of grapheme-color synesthesia that accounts for the commonalities in hue–letter associations. However, we so far know only of retinotopic-style maps in the brain, so this much fits with our existing knowledge. How the brain stores information about saturation/value, and correspondingly about the frequency and 'purity' of letters, and maps between the two, could lead to novel insights into how information is stored in the brain.

I am excited and believe that we are on the verge of breaking new ground (I haven't read the new Jamie Ward paper yet, though), and I have my own theories on why color is so important and may provide us many more clues (color and music are the two most interesting phenomena, I believe). Are you excited? Do you have any theories?

PS: I just found that Jamie Ward is writing a book called "The Frog who Croaked Blue: Synaesthesia and the Mixing of the Senses", in which he recounts the experience of a synesthete who heard frog croaks as blue and the chirping of crickets as red. To me this immediately conjures up a colortopic map with red at one end (high, feminine, shrill noises) and blue at the other (more manly, bass noises). This mapping of sounds to colors may again follow the hue, saturation and value dimensions, with the loudness of a sound proportional to the value of the perceived color and pitch mapped to hue. Also, this may be an idiosyncratic experience, or it may be true of the species as a whole that we map shriller noises to red and soothing, duller sounds to blue/violet.
